In the last few episodes (check here), we slowly built up a comforting picture: we learnt how qubits are prone to errors and how we can detect and correct them. The conclusion was this:
Yes, qubits are fragile. Yes, they suffer from errors. Yes, we have clever ways to detect and correct those errors. It almost feels like we have solved the problem! But there is a catch.
To understand this, let us go back to something simple.
A Simple Shield: The Repetition Code
Imagine you are worried that a single bit might flip during transmission. So instead of sending it once, you send it three times.
0 → 000
1 → 111
If you send 000 and one bit flips, you might receive:
010
001
100
But you look at the majority: two zeros, one one. So clearly the original must have been 000.
But now imagine something slightly worse.
You send 000, and two bits flip. The receiver gets:
011
Now the majority is ones. The receiver confidently concludes that you must have sent 111, and that the first bit flipped.
But you never sent 1. You sent 0.
In this case, the receiver misinterprets the information and is not even aware that a mistake has occurred. This is known as a logical error: an error that corrupts the encoded information itself and slips through the correction procedure undetected. It is a simple example of how an error correction scheme can fail. For our repetition code, the limit is one error. If more than one error occurs, the scheme fails.
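To make this concrete, here is a tiny Python sketch of the three-bit repetition code and its majority-vote decoder (the function names are purely illustrative, not from any library). One flip is corrected; two flips produce exactly the silent logical error described above.

```python
# Toy model of the 3-bit repetition code with majority-vote decoding.

def encode(bit):
    """Encode a single logical bit by repeating it three times."""
    return [bit] * 3

def decode(received):
    """Decode by majority vote over the three received bits."""
    return 1 if sum(received) >= 2 else 0

sent = encode(0)             # [0, 0, 0]

one_flip = [0, 1, 0]         # a single bit flipped in transit
print(decode(one_flip))      # 0 -> the vote recovers the original bit

two_flips = [0, 1, 1]        # two bits flipped in transit
print(decode(two_flips))     # 1 -> a confident but wrong answer: a logical error
```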
How Much Trouble You Can Survive
With three bits, you can correct only one error.
If you use five bits instead:
0 → 00000, 1 → 11111
Now you can survive up to two flipped bits.
More bits means you can tolerate more errors. We say the code with five bits has a larger distance than the one with three.
The larger the distance:
The more physical bits or qubits you use
The more errors you can correct
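The rule of thumb for a repetition code of distance d (that is, d physical bits per logical bit) is that it can correct up to (d − 1)/2 flips, rounded down. A quick sketch:

```python
# A distance-d repetition code corrects up to floor((d - 1) / 2) flipped bits.
for d in (3, 5, 7, 9):
    print(f"distance {d}: corrects up to {(d - 1) // 2} flip(s)")
# distance 3 corrects 1 flip, distance 5 corrects 2, distance 7 corrects 3, ...
```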
It sounds like a simple recipe. Add more qubits. Increase distance. Win the game.
But that is not the full story. The problem is that every qubit you add is one more place where an error can occur, so more qubits can also mean more errors!
The Majority Must Be Honest
If you look closely, this whole strategy depends on something subtle. Most of the bits must be correct. Error correction only works if errors are the minority.
If the majority of bits are wrong, the system confidently corrects you in the wrong direction.
This is a bit like democracy. It works only if each voter is reasonably reliable. If most voters tend to make poor choices, then adding more such poor voters will not fix the problem. Instead, the majority will confidently choose the wrong option. This is exactly the case with error correction as well.
There is a hidden requirement: each individual unit must be good enough.
Below some minimum quality, adding more of them only multiplies disaster.
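You can see this tipping point with a small calculation. For the repetition code with independent bit flips of probability p, a logical error occurs whenever a majority of the bits flip. Whether adding more bits helps or hurts depends entirely on p, as this rough sketch (plain Python, exact binomial sums) shows:

```python
from math import comb

def logical_error_rate(n, p):
    """Probability that a majority of n independent bits flip, each with probability p."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

for p in (0.01, 0.10, 0.60):
    print(p, [round(logical_error_rate(n, p), 5) for n in (3, 5, 7)])

# p = 0.01 or 0.10: the rate shrinks as n grows -- adding bits helps.
# p = 0.60: the rate grows towards 1 as n grows -- adding unreliable "voters" hurts.
```

For this classical toy code the tipping point sits at p = 0.5; real quantum codes have far lower tipping points, and pinning down where they sit is exactly what the rest of this post is about.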
Therefore, when evaluating how good a quantum computer is, you should not look only at the number of qubits. The quality of the qubits matters just as much. A hundred reliable qubits are far more useful than a thousand very noisy ones.
So How Good Is Good Enough?
This is one of the most important questions in quantum computing. Unfortunately, there is no single magic number. The answer depends on several factors.
1. The Noise Model
Not all errors are equal.
Are errors random and independent? Are they correlated? Does the environment cause certain types of flips more often?
The required threshold depends heavily on how cruel the environment is. The noisier and more structured the noise, the stricter your requirements.
2. Time
How long must the qubit survive?
If you only need it for a very short time, you may manage. If you need it to store information for long durations, errors accumulate. This is why scientists obsess over optimizing circuits. Fewer steps means less exposure to noise. Less exposure means fewer opportunities for disaster.
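A quick way to see why fewer steps matter: if each step independently goes wrong with probability p, the chance that at least one error has crept in after k steps is 1 − (1 − p)^k, and that number climbs fast. A toy illustration:

```python
# Probability of at least one error after k steps, each failing independently with prob p.
p = 0.001
for k in (10, 100, 1000, 10_000):
    print(k, round(1 - (1 - p) ** k, 4))
# 10 -> ~0.01, 100 -> ~0.095, 1000 -> ~0.632, 10000 -> ~1.0
```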
3. Gates and Operations
Are you just storing a state quietly? Or are you actively manipulating it with many gates?
Each operation introduces more chances for error. The more you do, the cleaner each operation must be.
Storage, computation, and communication all have different demands.
So, taking all these effects into account, one must determine how reliable each individual qubit needs to be for error correction to actually improve the computation. This critical value is called the error threshold.
How Do We Find the Threshold? (for advanced readers)
This is where simulations come in.
Researchers often simulate large error correcting codes under realistic noise assumptions. One widely used open source tool is STIM (check here), which allows fast simulation of large quantum codes.
What do these simulations produce?
Threshold plots. An example of a threshold plot is shown below:

On such a plot:
The horizontal axis shows the physical error rate of each qubit
The vertical axis shows the logical failure rate after error correction
You see multiple curves. Each curve corresponds to a different code distance, meaning a different number of physical qubits.
In the “good” region, where physical error rates are low, increasing the distance pushes the logical error rate down. The curves for larger distance sit lower.
In the “bad” region, where physical error rates are too high, increasing the distance makes the logical error rate worse. The curves for larger distance sit higher.
There is a crossing point.
That crossing is the threshold.
On one side of it, scaling up saves you.
On the other side, scaling up sinks you.
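For readers who want to try this themselves, here is a rough sketch of how the data behind such a plot could be generated, using STIM together with the PyMatching decoder. Treat it as an illustration of the workflow under assumed package versions rather than a polished study, and check the exact parameter names against the STIM documentation.

```python
# A rough sketch of generating threshold-plot data with STIM + PyMatching.
# Assumes `pip install stim pymatching numpy`; API details may vary between versions.
import stim
import pymatching
import numpy as np

def logical_failure_rate(distance, physical_error_rate, shots=10_000):
    # Build a noisy memory experiment for a repetition code of the given distance.
    circuit = stim.Circuit.generated(
        "repetition_code:memory",
        distance=distance,
        rounds=distance,
        before_round_data_depolarization=physical_error_rate,
        before_measure_flip_probability=physical_error_rate,
    )
    # Sample detection events and the true logical observable flips.
    sampler = circuit.compile_detector_sampler()
    detections, observables = sampler.sample(shots, separate_observables=True)
    # Decode with minimum-weight perfect matching and count residual failures.
    matcher = pymatching.Matching.from_detector_error_model(
        circuit.detector_error_model(decompose_errors=True)
    )
    predictions = matcher.decode_batch(detections)
    failures = np.sum(np.any(predictions != observables, axis=1))
    return failures / shots

# One point per (distance, physical error rate) gives the curves of a threshold plot.
for d in (3, 5, 7):
    for p in (0.01, 0.03, 0.1):
        print(d, p, logical_failure_rate(d, p))
```

Sweeping the physical error rate for each distance and plotting the resulting logical failure rates gives exactly the family of crossing curves described above.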
Conclusion
So how good must each qubit be? Good enough that its error rate sits below the threshold!
That answer sounds simple, but reaching it has taken decades of theory, experiments, materials science, fabrication advances, and clever engineering.
The threshold is not just a number. It is the line between a machine that scales and a machine that collapses under its own noise.
And that is why the real race in quantum computing is not just about more qubits. It is about better qubits!


