Companies are rushing to build quantum computers. How do you tell who's winning? One of the most important metrics is called fidelity.
Quantum computers need to be good at three things:
- Single-qubit gates: rotating one qubit.
- Two-qubit gates: entangling two qubits together.
- Measurements: reading out the answer.
Each gets its own fidelity. We'll cover all three.
Single-qubit fidelity
Single-qubit gates are rotations of a quantum state. The simplest way to explain single-qubit fidelity:
If a quantum computer tries to do a 180° rotation, how close does it get?
On the Bloch sphere you can picture this directly. The qubit starts at the north pole. A perfect 180° rotation lands it exactly at the south pole. A slightly wrong rotation lands it a hair short.
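That picture can be made quantitative with a few lines of linear algebra. A minimal sketch (the 178° under-rotation is an invented error, not any real device's):

```python
import numpy as np

def rx(theta):
    """Rotation by angle theta (radians) around the x-axis of the Bloch sphere."""
    return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                     [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

ket0 = np.array([1, 0], dtype=complex)       # north pole, |0⟩
target = rx(np.pi) @ ket0                    # perfect 180° rotation: south pole
actual = rx(np.deg2rad(178)) @ ket0          # a rotation that falls 2° short

# State fidelity = squared overlap between where we wanted to land and where we did.
state_fidelity = abs(np.vdot(target, actual)) ** 2
print(f"{state_fidelity:.6f}")
```

Even a 2° miss costs only about 3×10⁻⁴ in fidelity, which is part of why fidelities get quoted to so many nines.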
An ion-trap example
Let's get specific. In an ion-trap quantum computer, the qubit is encoded in the electronic states of a trapped ion, and gates are performed by shining a laser on the ion for a short period of time, called a pulse.
When the laser turns on, the quantum state starts rotating. When the laser turns off, the rotation stops.
Three factors decide where the state ends up:
- The power of the laser determines how fast the state rotates.
- The length of the pulse determines how far it rotates.
- The frequency of the laser determines the axis of rotation. The closer the frequency is to the qubit's “transition frequency”, the closer the axis is to the x-axis.
To do a perfect 180° rotation around the x-axis (an X gate), all three have to be near-perfect at once. In real hardware, none of them are.
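These three knobs can be dropped into a toy two-level model. The square-pulse formula below is standard rotating-frame Rabi physics; all parameter values are invented for illustration:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pulse_unitary(rabi, detuning, duration):
    """Unitary for a square laser pulse on a two-level qubit (rotating frame).

    rabi     -- Rabi frequency, set by laser power
    detuning -- laser frequency minus the qubit's transition frequency
    duration -- pulse length
    """
    omega = np.hypot(rabi, detuning)           # generalized Rabi frequency
    theta = omega * duration                   # total rotation angle
    n_x, n_z = rabi / omega, detuning / omega  # axis tilts off x as detuning grows
    return (np.cos(theta / 2) * np.eye(2)
            - 1j * np.sin(theta / 2) * (n_x * X + n_z * Z))

def avg_gate_fidelity(U, V):
    """Average gate fidelity for one qubit: (|Tr(U†V)|² + 2) / 6."""
    return (abs(np.trace(U.conj().T @ V)) ** 2 + 2) / 6

perfect = pulse_unitary(rabi=1.0, detuning=0.0, duration=np.pi)  # exact X gate
noisy = pulse_unitary(rabi=1.0, detuning=0.05, duration=1.01 * np.pi)

fid_perfect = avg_gate_fidelity(perfect, perfect)  # = 1 for identical gates
fid_noisy = avg_gate_fidelity(perfect, noisy)      # a little below 1
print(fid_perfect, fid_noisy)
```

Setting the detuning and pulse length slightly off, as in `noisy`, tilts the axis and overshoots the angle at the same time, and the fidelity drops below 1 accordingly.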
How it's actually measured
Two extra details on how fidelity is measured in practice.
The readout. If we measure the rotated state in the up/down basis, a perfect 180° rotation means we should get down every time. If the rotation isn't perfect, we sometimes get up. So we count: out of 10,000 shots, what percentage came out down? That percentage is our fidelity.
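A minimal Monte-Carlo sketch of that counting procedure (the 178° rotation is the same invented error as before, not a real device's):

```python
import numpy as np

rng = np.random.default_rng(0)
shots = 10_000

# A 178° rotation about x, starting from |0⟩ (up), leaves the qubit in
# cos(89°)|0⟩ - i·sin(89°)|1⟩, so P(down) = sin²(89°).
p_down = np.sin(np.deg2rad(178) / 2) ** 2

outcomes = rng.random(shots) < p_down        # True = measured "down"
fidelity_estimate = outcomes.mean()
print(f"P(down) = {p_down:.6f}, estimate over {shots} shots: {fidelity_estimate:.4f}")
```

Note the statistical limit: with 10,000 shots the estimate has a standard error of roughly 10⁻⁴, so distinguishing 99.97% from 99.99% takes far more shots than this.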
All axes. Companies don't just test X. Some hardware is good at rotating around one axis but worse at others. Production benchmarks average fidelity over a wide set of random rotations (a technique called randomized benchmarking) to get a single number that represents the gate's overall quality.
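The decay-curve idea behind randomized benchmarking can be sketched numerically. In the standard model, survival probability after a length-m random sequence decays as A·pᵐ + B, and the per-gate error is r = (1 − p)/2 for a single qubit; the constants below are invented:

```python
p = 0.998            # per-gate depolarizing parameter (invented for illustration)
A, B = 0.5, 0.5      # SPAM-dependent constants in the standard RB decay model

def survival(m):
    """Expected survival probability after a length-m random sequence."""
    return A * p**m + B

# Extract p from two sequence lengths, then convert to average gate fidelity.
m1, m2 = 10, 100
p_fit = ((survival(m2) - B) / (survival(m1) - B)) ** (1 / (m2 - m1))
avg_fidelity = 1 - (1 - p_fit) / 2
print(avg_fidelity)
```

Because SPAM errors land in A and B rather than in p, this procedure reports gate quality separately from preparation and readout quality, which is why RB numbers are the ones vendors publish.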
Two-qubit fidelity
Two-qubit fidelity is the single most important metric in all of quantum computing.
It's measured the same way as single-qubit fidelity, except now we're testing gates that produce entanglement. For example: starting from |00⟩, how perfectly can a quantum computer produce the Bell state (|00⟩ + |11⟩)/√2? If we ever read out |01⟩ or |10⟩, the fidelity is low.
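A sketch in numpy, with a made-up imperfection that leaks a little amplitude onto |01⟩ and |10⟩:

```python
import numpy as np

ket00 = np.zeros(4, dtype=complex)
ket00[0] = 1.0

# Ideal circuit: Hadamard on qubit 0, then CNOT (control 0, target 1).
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(H, np.eye(2)) @ ket00      # (|00⟩ + |11⟩)/√2

# Invented imperfection: small stray amplitude on the forbidden outcomes.
eps = 0.03
noisy = bell + eps * np.array([0, 1, 1, 0], dtype=complex)
noisy /= np.linalg.norm(noisy)

fidelity = abs(np.vdot(bell, noisy)) ** 2
p_bad = abs(noisy[1]) ** 2 + abs(noisy[2]) ** 2  # probability of reading 01 or 10
print(fidelity, p_bad)
```

In this toy case every bit of missing fidelity shows up directly as forbidden 01/10 counts, so the two numbers sum to 1.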
A perfect entangling gate applied to |+⟩|0⟩ produces a clean Bell state. An imperfect one leaves residual overlap, which is exactly what shows up as the 01 or 10 outcomes you weren't supposed to get.

The reason two-qubit fidelity matters so much: two-qubit gates are historically terrible. Modern quantum computers sit around 99.9%, and the very best demonstrations have only just crossed 99.99% in 2025. Even at 99.9%, that's a ~0.1% chance of error per gate. Run a circuit with a thousand of them and the errors compound into nonsense.
The big payoff: error correction wires many imperfect physical qubits together into a single robust logical qubit. Past a threshold of roughly 99% physical fidelity, errors stop compounding; they actively get suppressed. Google demonstrated this in 2024. The catch is that error correction only becomes practical (with a reasonable qubit-count overhead) once you're comfortably into the 99.9% – 99.99% range, which is where the leading hardware is right now.
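A back-of-the-envelope for why the threshold matters, using the standard surface-code scaling p_L ≈ A·(p/p_th)^((d+1)/2), where d is the code distance; the constants here are invented and real devices differ:

```python
p_th = 0.01   # ~99%-fidelity threshold (illustrative)

def logical_error(p_phys, d, A=0.03):
    """Rough surface-code logical error rate at code distance d (toy scaling law)."""
    return A * (p_phys / p_th) ** ((d + 1) // 2)

for p_phys in (0.02, 0.005, 0.001):          # above / below / well below threshold
    rates = [logical_error(p_phys, d) for d in (3, 5, 7)]
    print(p_phys, [f"{r:.2e}" for r in rates])
```

Above threshold (p = 0.02), adding qubits makes the logical qubit worse; below it, every step up in code distance multiplies the error down, and at p = 0.001 each step suppresses it by roughly 10×, which is the regime the leading hardware is entering.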
That's why companies are racing for those last few decimal places. Oxford Ionics held the previous record at 99.97% in 2024 and was acquired by IonQ in 2025; in late 2025, the combined team became the first to cross 99.99% two-qubit fidelity.
Measurement fidelity
Even after you've rotated your qubits perfectly and entangled them flawlessly, you still have to read out the outcome.
Measurement fidelity is the probability that the detector reports the right outcome. Sometimes a qubit truly collapses to |1⟩ but the detector reports |0⟩.
It's also called readout fidelity, or formally the measurement half of SPAM (state preparation and measurement). Preparation and measurement errors are usually lumped together because both occur at the boundary between quantum and classical.
Picture a qubit prepared in |+⟩ and measured. A perfect detector always reports the true outcome; a detector that flips the answer one time in four produces the same physical collapse but the wrong readout. Modern superconducting detectors typically hit 98–99%; ion-trap and neutral-atom readouts can clear 99.9%.
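Readout error is often summarized as an assignment (confusion) matrix applied to the true statistics. A sketch with invented flip rates:

```python
import numpy as np

# P(reported | true): column = true state, row = reported state.
confusion = np.array([[0.99, 0.02],    # report 0 given true 0 / true 1
                      [0.01, 0.98]])   # report 1 given true 0 / true 1

ideal = np.array([0.5, 0.5])           # a qubit in |+⟩: 50/50 in the z basis
observed = confusion @ ideal           # what the detector actually reports

# Common summary: average of the two correct-assignment probabilities.
readout_fidelity = (confusion[0, 0] + confusion[1, 1]) / 2
print(observed, readout_fidelity)
```

Vendors often measure this matrix directly (prepare |0⟩, count; prepare |1⟩, count) and then invert it in post-processing to partially undo readout error.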
Today's leaderboard
The top operational quantum computers by published two-qubit fidelity, pulled live from our quantum computer registry:
| System | Vendor | 2-qubit | 1-qubit |
|---|---|---|---|
| Helios | Quantinuum | 99.92% | 100.00% |
| Tempo | IonQ | 99.90% | 99.99% |
| 11-qubit atomic-precision processor | Silicon Quantum Computing (SQC) | 99.90% | n/a |
| Willow | Google Quantum AI | 99.88% | 99.97% |
| Sqale | Infleqtion | 99.73% | n/a |
| Orion Gamma | Pasqal | 99.70% | 99.90% |
Numbers are vendor-published. Where third-party benchmarks disagree, the registry notes the discrepancy. See the full registry for sources and per-system details.
What's next
Crossing the error-correction threshold is the moment quantum computers stop being demos and start being machines. A guide on error correction is coming soon.
