Quantum computer cracks the error problem

Error correction makes it possible for the first time to reduce the error rate despite scaling

Fewer errors despite scaling: Google researchers have developed a method of error correction for quantum computers that crosses an important threshold. Their system catches more errors than are introduced by the growing number of qubits, the team reports in Nature. This is made possible by combining multiple data qubits into logical units and inserting additional measurement qubits that flag errors.

Quantum computers are considered the computers of the future. Companies like IBM and Google, as well as Chinese research groups, are already competing to build the largest and most powerful qubit machines. The problem, however, is that quantum bits are extremely error-prone: even small disturbances can knock them out of their entanglement and superposition, and the error rate grows with the number of qubits.

Without efficient error correction systems, quantum computers are therefore not scalable to the required performance. Accordingly, scientists are working intensively on strategies to measure and contain qubit errors. So far, however, these correction systems have not been able to eliminate more errors than are added by the additional qubits. “The physical errors have always won,” explain Hartmut Neven and Julian Kelly from Google Quantum AI.

Fewer errors despite scaling

But that has now changed: “For the first time, our researchers have experimentally demonstrated that errors can be reduced as the number of qubits increases,” says Google CEO Sundar Pichai. The prototype developed by Google Quantum AI showed an error rate of 3.038 percent per calculation cycle for a logical qubit made up of 17 physical qubits, compared to 2.914 percent for one made up of 49 qubits.

“While that doesn’t seem like much, it’s the first time this experimental milestone in scaling logical qubits has been reached,” the researchers explain. For the first time, the threshold has been crossed beyond which quantum error correction allows the performance of quantum computers to improve as the number of quantum bits grows. “This paves the way to the logical error rates required for quantum computing.”
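To illustrate what “crossing the threshold” means, the sketch below uses the textbook scaling rule for surface codes, in which the logical error rate per cycle falls roughly as (p/p_th) raised to the power (d+1)/2 as the code distance d grows. The threshold value p_th and the prefactor A used here are illustrative assumptions, not figures from the Google paper; only the qualitative behaviour matters.

```python
# Illustrative sketch of the textbook surface-code scaling (not the paper's fitted model):
# logical error per cycle ~ A * (p / p_th) ** ((d + 1) / 2)
# p    = physical error rate
# p_th = threshold (assumed value for illustration)
# d    = code distance (d = 3 uses 17 physical qubits, d = 5 uses 49)

def logical_error(p, p_th=0.01, A=0.1, d=3):
    """Approximate logical error rate per cycle under the assumed scaling law."""
    return A * (p / p_th) ** ((d + 1) / 2)

for p in (0.005, 0.02):  # one physical error rate below, one above the assumed threshold
    better = logical_error(p, d=5) < logical_error(p, d=3)
    print(f"p = {p}: does the larger (d=5) code beat the smaller (d=3) code? {better}")

# Below threshold the larger code wins; above threshold adding qubits only adds errors.
```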

Logical qubits make quantum computation more robust

This advance was made possible by error correction based on two strategies. The first basic principle is computing with logical qubits, the so-called surface code: several physical quantum bits – in the Google quantum computer these are superconducting circuits – are combined into one computing unit. If one of the qubits flips due to interference, enough other qubits remain to recover the overall result of this computing unit.

Neven and Kelly explain this using a simple analogy: “Bob wants to send Alice a ‘1’ as a bit over a noisy channel. To avoid losing the information, he sends three bits instead: 111. If one of them flips, Alice can still take a majority vote over the received bits and thus recover the information.” The more qubits are combined into a logical qubit, the less sensitive the system becomes to errors.
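The Bob-and-Alice picture can be made concrete with a few lines of code. The following minimal sketch implements the classical three-bit repetition code from the analogy; it illustrates the majority-vote principle only and is not Google's implementation, and the flip probability is an arbitrary choice.

```python
# Minimal sketch of the classical three-bit repetition code from the analogy above.
import random

def encode(bit):
    """Encode one logical bit as three copies: 1 -> [1, 1, 1]."""
    return [bit] * 3

def noisy_channel(bits, flip_prob=0.1):
    """Flip each bit independently with probability flip_prob (arbitrary value)."""
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def decode(bits):
    """Majority vote: the logical bit survives as long as at most one copy flipped."""
    return 1 if sum(bits) >= 2 else 0

sent = 1
received = noisy_channel(encode(sent))
print(received, "->", decode(received))
```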

For their experiment, the scientists used a Sycamore quantum computer with 72 superconducting qubits, in which they ran logical computing units made up of 17 and 49 physical qubits.

Measurement qubits detect errors

The second strategy of error correction circumvents a fundamental problem of quantum physical operations: the moment you read out the state of a qubit, you destroy its coherence and the calculation breaks off. To monitor the occurrence of errors without disrupting the calculation, the Google researchers supplemented their data qubits with special measurement qubits. These sit between the data qubits of the quantum computer and “eavesdrop” on their state.

The trick: “These measurements tell us whether the qubits of a logical unit still show the same thing or whether they differ due to an error,” explain Neven and Kelly. “In this way, they indicate errors without having to read out the individual data qubits.” The measurement system detects bit and phase errors in the qubits and makes it possible to compensate for these errors by reading out the majority state of the data qubits.
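As a rough classical analogy – an illustration, not the actual surface-code circuit – the sketch below shows how parity checks between neighbouring data bits can reveal where a flip occurred without ever reading out the data values themselves. This is the role the measurement qubits play between the data qubits.

```python
# Classical analogy for syndrome measurement: one parity check per pair of
# neighbouring data bits stands in for a measurement qubit. The checks reveal
# *where* a flip happened without revealing the stored values.

def syndromes(data):
    """Parity of each neighbouring pair; 1 means the two bits disagree."""
    return [data[i] ^ data[i + 1] for i in range(len(data) - 1)]

data = [1, 1, 1, 1, 1]   # logical '1' stored redundantly
data[2] ^= 1             # an error flips the middle bit
print(syndromes(data))   # [0, 1, 1, 0] -> checks 1 and 2 fire, bracketing bit 2
```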

How quantum error correction works. (© Google Quantum AI)

“New Era of Quantum Error Correction”

By combining these systems, the researchers managed, for the first time, to achieve a lower error rate in a larger logical qubit unit of 49 qubits than in a smaller one with 17 qubits. The difference in the error rate, while minimal, was significant at more than five standard deviations. “This result shows that we are entering a new era of practical quantum error correction,” say Neven and Kelly. However, they also admit that this is just the beginning and that some improvements are still needed.

Other quantum researchers take a similar view: “This work demonstrates that the greater complexity of a system with more qubits can be brought under control to such an extent that a larger error correction code actually protects the information better than a smaller code,” comments Martin Ringbauer from the University of Innsbruck. “This is an important first step that shows that while the underlying sources of error still need to be significantly suppressed, the effort is worth it.”

Development is just beginning

According to Neven and Kelly, the self-imposed goal of Google Quantum AI is to reduce the error rate from currently between one in a hundred and one in ten thousand per cycle to just one in a million. Above all, the stability of the superconducting qubits needs to be improved further. In parallel with the surface code, the researchers are also working on another method of error correction, the repetition code, presented in 2021, in which data and measurement qubits alternate in a chain. (Nature, 2023; doi: 10.1038/s41586-022-05434-1)
