theory-of-computation – What are the main differences between a quantum computer and a conventional computer?


Quantum computing is said to promise a revolution in computing if successfully implemented. Why is that? What are the main differences between a quantum computer and a conventional computer that would make the former so much faster?


In a conventional computer, a unit of data (a "bit") is always in a single state, 0 or 1. In a quantum computer, the corresponding unit (a "qubit") can be in any superposition of those two states until it is measured, at which point it collapses to a single value. This follows the laws of quantum mechanics, which also include phenomena such as entanglement (often described as one of the foundations of quantum computation, although I can't say exactly what its role is).
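The contrast can be sketched in a few lines of code. This is only an illustration, not real quantum mechanics: the hypothetical `measure` helper below models a single qubit as a pair of amplitudes `(a, b)` and collapses it to a classical bit according to the Born rule.

```python
import math
import random

def measure(a, b):
    """Collapse the qubit superposition a|0> + b|1> to a classical bit:
    0 with probability |a|^2, 1 with probability |b|^2."""
    return 0 if random.random() < abs(a) ** 2 else 1

# A classical bit is definitely 0 or 1: measure(1, 0) is always 0.
# The equal superposition below yields 0 or 1 with probability 1/2 each,
# and each measurement destroys the superposition.
h = 1 / math.sqrt(2)
sample = [measure(h, h) for _ in range(10)]
```

Until `measure` is called, the pair `(a, b)` carries *both* possibilities at once; that is the property the rest of this answer builds on.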

What makes a quantum computer faster than a conventional one is therefore its ability to move from one probabilistic state to another without having to enumerate all the possibilities involved. To understand this, it helps to know the complexity class BPP:

Bounded-error Probabilistic Polynomial time

A problem is said to be in BPP if there is an algorithm for it with the following properties:

  • A random component (i.e., the algorithm can use random data or make random decisions);
  • Polynomial execution time relative to the input size, in the worst case;
  • The answer ("yes" or "no") has at least a 2/3 chance of being correct.
    • Strictly speaking, the constant 2/3 is arbitrary: any success probability bounded strictly above 1/2 defines the same class, since the error can be driven down by repetition.

Every problem in P is in BPP; however, there are problems in BPP that are not yet known to be in P. The number of such problems has been shrinking (for example, until the AKS algorithm of 2002 there was no known deterministic polynomial-time algorithm to decide whether a number is prime, only probabilistic ones), which supports the conjecture that P = BPP, but open cases remain.
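The probabilistic primality test alluded to above is the classic example of a BPP-style algorithm. Here is a minimal sketch of the Miller–Rabin test: each round errs with probability at most 1/4 on a composite input, so repeating it drives the error down exponentially.

```python
import random

def is_probably_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test.

    Each round wrongly declares a composite number "prime" with
    probability at most 1/4, so 'rounds' independent rounds push
    the overall error below (1/4)**rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):          # cheap small-prime check
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)       # random witness candidate
        x = pow(a, d, n)                     # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                     # 'a' proves n composite
    return True                              # probably prime
```

A "probably prime" answer can be wrong, but only with vanishing probability; a "composite" answer is always correct. Since 2002, the deterministic AKS algorithm also decides primality in polynomial time, which is exactly why this problem left the "BPP but maybe not P" list.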

Note: besides the class BPP, there are other situations where a probabilistic algorithm is desirable. For example, one cannot guarantee a polynomial-time solution to an NP-complete problem, but one could, within an exhaustive search, use randomness to reduce the expected search effort. On a conventional computer, however, this offers no asymptotic benefit…

OK, and how can a probabilistic algorithm be useful if there is a high chance of error? By running it several times to reduce the margin of error. If a single round errs with probability 1/3, the chance that two independent rounds are both wrong is (1/3) × (1/3) = 1/9, after three rounds 1/27, and so on. So run several rounds, see which answer occurs most often, and take that as the correct result.
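This amplification-by-majority idea can be sketched directly. The `noisy_answer` function below is a hypothetical stand-in for any BPP algorithm that is right with probability 2/3; taking the majority over many runs makes the error probability negligible.

```python
import random

def noisy_answer(truth, error=1 / 3):
    """Hypothetical BPP-style subroutine: returns the correct boolean
    answer with probability 2/3 and the wrong one with probability 1/3."""
    return truth if random.random() > error else not truth

def amplified_answer(truth, rounds=301):
    """Run the noisy algorithm many times and take the majority vote;
    the chance that the majority is wrong shrinks exponentially
    with the number of rounds (by a Chernoff bound)."""
    votes = sum(noisy_answer(truth) for _ in range(rounds))
    return votes > rounds // 2
```

With 301 rounds the majority vote is wrong with probability on the order of 10⁻⁸, even though every individual run is wrong a third of the time.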


A single bit can be in only two states, 0 or 1. Two bits can be in four (00, 01, 10 or 11), three bits in eight, and so on. This means that if you started from state 100 and applied some probabilistic operation, there could be a 1 in 8 chance of reaching any given state. After several operations in sequence, the number of possible trajectories grows enormously, so a very large number of rounds is needed to reach a result with an acceptable level of confidence.
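On a classical machine the only way to learn the resulting probability distribution is to sample it: repeat the whole computation many times and count outcomes. A small sketch, using a toy probabilistic operation of my own invention (flip one of the three bits at random):

```python
import random
from collections import Counter

def random_step(state):
    """Toy probabilistic operation on a 3-bit state: flip one of the
    three bits, chosen uniformly at random (a hypothetical example)."""
    bit = random.randrange(3)
    return state ^ (1 << bit)

def estimate_distribution(start=0b100, steps=5, runs=10_000):
    """A classical machine can only sample: to estimate the probability
    of each outcome it must repeat the entire computation many times."""
    counts = Counter()
    for _ in range(runs):
        state = start
        for _ in range(steps):
            state = random_step(state)
        counts[format(state, '03b')] += 1
    return {s: c / runs for s, c in counts.items()}
```

Note the cost structure: 10,000 complete executions buy only a rough estimate of the final distribution, and the number of runs needed grows with the number of reachable states.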

In quantum computing, on the other hand, the superposition of states carries within itself the probabilities with which each state would be reached. Starting from state 100, you arrive at one superposition after the first operation, at another after the second, and so on, until all operations have been performed. Only at the end, when the result is read (i.e., the qubits are measured), is a set of concrete values, such as 010, obtained.

To put it another way: if on a conventional computer you started from 100 and reached 101, the next step will be based on state 101. If the first step was repeated 20 times and 17 of them reached 101, then the second step runs 17 times with 101 as its base (and 3 times from some other base). Say the second step produced 011 in 15 of those 17 repetitions.

On the quantum computer, the result of the first step is a superposition with an 85% chance of being 101, and the result of the second step is a superposition with at least a 75% chance of being 011 (85% × 88.2%). All of this in a single execution! The conventional computer, in contrast, would have to run many times to get the same result with a similar level of confidence (and more times still to reduce the margin of error to an acceptable value, something both the conventional and the quantum machine have to do anyway).
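The way a quantum computer carries the whole distribution through a single execution can be mimicked classically with a state-vector simulation: eight amplitudes for three qubits, updated in place by each gate. This is a pure-Python sketch with real amplitudes and the Hadamard gate only, not the author's exact 85%/88.2% scenario:

```python
import math

# 3-qubit state vector: 8 amplitudes, one per basis state; start in |100>.
state = [0.0] * 8
state[0b100] = 1.0

def hadamard(state, qubit):
    """Apply a Hadamard gate to one qubit of a 3-qubit state vector
    (qubit 0 is the most significant bit of the basis-state index)."""
    new = [0.0] * 8
    h = 1 / math.sqrt(2)
    mask = 1 << (2 - qubit)
    for i, amp in enumerate(state):
        if amp == 0.0:
            continue
        if i & mask:            # qubit is |1>: |1> -> (|0> - |1>)/sqrt(2)
            new[i ^ mask] += h * amp
            new[i] -= h * amp
        else:                   # qubit is |0>: |0> -> (|0> + |1>)/sqrt(2)
            new[i] += h * amp
            new[i ^ mask] += h * amp
    return new

# A single pass of operations carries the full distribution along:
state = hadamard(state, 2)      # superpose the last qubit
state = hadamard(state, 1)      # superpose the middle qubit

probs = [amp ** 2 for amp in state]   # measurement probabilities
```

After this single pass, `probs` holds the exact probability of every outcome (here, 1/4 each for 100, 101, 110 and 111), with no repetition. The catch, and the reason this is a simulation rather than a quantum speedup, is that the classical vector has 2ⁿ entries, while a quantum computer needs only n qubits.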

So, to answer the question: what makes the quantum computer so much faster is that, in a single sequence of operations, it can reach the same result that a conventional computer would only achieve by performing a much larger set of operations.

It is similar to the case of graphics accelerator cards, where operations on vectors, matrices, etc. are performed directly in hardware circuitry, while the same operations on the CPU would require a much larger number of operations on simpler data types (i.e., scalar numbers), involving loops. Here the scale of the speedup is much larger, but in principle everything a quantum computer does could be simulated on a conventional computer (the quantum Turing machine is still computationally equivalent to the Turing machine).
