August 11, 2024 by Ingrid Fadelli, Phys.org
Collected at: https://phys.org/news/2024-08-unveils-limits-extent-quantum-errors.html
Quantum computers have the potential to outperform conventional computers on some practically relevant information processing problems, possibly even in machine learning and optimization. Their large-scale deployment is not yet feasible, however, largely due to their sensitivity to noise, which causes them to make errors.
One technique designed to address these errors, known as quantum error correction, works “on the fly,” monitoring for errors and restoring computations when they occur. Despite enormous progress along these lines in recent months, this strategy remains experimentally very challenging and comes with substantial resource overheads.
An alternative approach, known as quantum error mitigation, works more indirectly: instead of correcting errors the moment they arise, the error-filled computation (or modified versions of it) is run to completion. Only at the end does one go back and infer what the correct result would have been. This method was proposed as a “stand-in” solution for tackling errors made by quantum computers before full error correction can be implemented.
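As a toy illustration of this “run noisy, then correct classically” pattern (a sketch of the general idea, not the authors' formalism), one can model a device whose measured expectation values are damped by a known noise factor and undo that damping in post-processing; the function noisy_expectation below is a hypothetical stand-in for a real quantum backend.

```python
import random

def noisy_expectation(shots: int, true_value: float = 0.8,
                      damping: float = 0.9) -> float:
    """Hypothetical stand-in for a noisy quantum backend: returns the
    average of +/-1 measurement outcomes whose mean is the true
    expectation value shrunk by a known damping factor."""
    mean = true_value * damping
    outcomes = [1 if random.random() < (1 + mean) / 2 else -1
                for _ in range(shots)]
    return sum(outcomes) / shots

# Mitigation step: run the error-filled computation to completion many
# times, then rescale the raw estimate to undo the (assumed known) damping.
raw = noisy_expectation(shots=100_000)
mitigated = raw / 0.9
print(f"raw ~ {raw:.3f}, mitigated ~ {mitigated:.3f}")  # ~0.72 -> ~0.80
```

The hidden cost sits in the number of repetitions: the weaker the surviving signal, the more runs are needed to resolve it, which is exactly where the scalability question discussed below arises.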
Yet researchers at the Massachusetts Institute of Technology, École Normale Supérieure de Lyon, the University of Virginia and Freie Universität Berlin have shown that quantum error mitigation techniques become highly inefficient as quantum computers are scaled up.
This implies that error mitigation is no long-term panacea for the perennial issue of noise in quantum computation. Their paper, published in Nature Physics, offers guidance on which schemes for mitigating the adverse impact of noise on quantum computations are bound to be ineffective.
“We were contemplating limitations to near-term quantum computing making use of noisy quantum gates,” Jens Eisert, co-author of the paper, told Phys.org.
“Our colleague Daniel Stilck França had just proven a result that amounted to compelling limitations of near-term quantum computing. He had shown that for depolarizing noise, in logarithmic depth, one would arrive at a quantum state that could be captured with efficient classical sampling techniques. We had just been thinking about quantum error mitigation, but then we thought: wait, what does all that mean for quantum error mitigation?”
The recent paper by Yihui Quek, Daniel Stilck França, Sumeet Khatri, Johannes Jakob Meyer and Jens Eisert builds on this research question, setting out to explore the precise limits of quantum error mitigation. Their findings unveil the extent to which quantum error mitigation can help to reduce the impact of noise on near-term quantum computing.
“Quantum error mitigation was meant to be a stand-in for quantum error correction as it requires less precise engineering to implement, and so there was the hope that it could be within reach, even for current experimental capabilities,” Yihui Quek, lead author of the paper, told Phys.org.
“But when we squinted at these relatively simpler mitigation schemes, we began to realize that maybe you can’t have your cake and eat it—yes, they require fewer qubits and less control, but often that comes at the cost of having to run the entire system a worryingly large number of times.”
One example of a mitigation scheme that the team found to have limitations is known as “zero-noise extrapolation.” This scheme works by progressively increasing the amount of noise in the system and then extrapolating the results of the noisier computations back to the zero-noise limit.
“Essentially, to combat noise, you are supposed to increase the noise in your system,” Quek explained. “Even intuitively, it’s clear that this cannot be scalable.”
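The following is a minimal sketch of the zero-noise extrapolation idea, assuming a toy model in which the measured expectation value decays linearly with the noise scale; run_noisy_circuit is a hypothetical stand-in for executing a noise-amplified circuit on hardware.

```python
import numpy as np

def run_noisy_circuit(scale: float, true_value: float = 1.0,
                      base_error: float = 0.05) -> float:
    """Hypothetical stand-in for a quantum backend: returns an
    expectation value whose bias grows with the noise scale."""
    # Toy model: the measured value decays linearly in the noise level.
    return true_value * (1.0 - base_error * scale)

def zero_noise_extrapolation(scales=(1.0, 2.0, 3.0)) -> float:
    """Run the circuit at deliberately amplified noise levels, fit a
    polynomial to the results, and extrapolate back to scale = 0."""
    values = [run_noisy_circuit(s) for s in scales]
    coeffs = np.polyfit(scales, values, deg=len(scales) - 1)
    return float(np.polyval(coeffs, 0.0))  # estimated noiseless value

print(zero_noise_extrapolation())  # ~1.0 in this toy model
```

In practice the extrapolation is performed on statistical estimates, so the noisier runs must each be repeated many times, which is where the scaling problem the team identified takes hold.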
Quantum circuits (the sequences of gate operations run on quantum processors) consist of layer upon layer of quantum gates, each of which takes the output of the previous layer and advances the computation further. If the gates are noisy, however, every layer in the circuit becomes a double-edged sword: while it advances the computation, it also introduces additional errors.
“This sets up a terrible paradox: you need many layers of gates (hence a deep circuit) in order to do a nontrivial computation,” Quek said.
“However, a deeper circuit is also noisier—it is more likely to output nonsense. So, there is a race between the speed at which you can compute and the speed at which the errors in the computation accumulate.
“Our work shows that there are extremely wicked circuits for which the latter is much, much faster than originally thought; so much so that to mitigate these wicked circuits, you would need to run them an infeasible number of times. This holds no matter what specific algorithm you use for error mitigation.”
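A back-of-the-envelope estimate makes this race concrete, under the simplifying assumption of a uniform error rate p per circuit layer: the signal surviving a depth-d circuit shrinks roughly like (1 - p)^d, and resolving it by repeated sampling takes on the order of 1/signal^2 runs.

```python
# Toy estimate, assuming a uniform per-layer error rate p: the signal
# surviving a depth-d circuit decays like (1 - p)**d, and recovering it
# to fixed precision requires roughly 1 / signal**2 repetitions.
p = 0.01  # assumed effective error rate per layer

for depth in (10, 100, 1000):
    signal = (1 - p) ** depth   # fraction of the signal that survives
    samples = signal ** -2      # rough shot count needed to resolve it
    print(f"depth {depth:4d}: signal ~ {signal:.3e}, "
          f"samples ~ {samples:.3e}")
```

Even in this crude model, a depth of 1,000 layers already calls for hundreds of millions of runs; the “wicked circuits” identified in the paper make the blow-up far worse than such simple estimates suggest.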
The recent study by Quek, Eisert and their colleagues suggests that quantum error mitigation is not as scalable as some had predicted. In fact, the team found that as quantum circuits are scaled up, the resources needed to run error mitigation increase dramatically.
“As with all no-go theorems, we like to see them less as a show-stopper than as an invitation,” Eisert said.
“Maybe working with more geometrically locally connected constituents, one arrives at much more optimistic settings, in which case maybe our bound is way too pessimistic. Common architectures often have such local interactions. Our study can also be seen as an invitation to think of more coherent schemes of quantum error mitigation.”
The findings gathered by this research team could serve as a guide for quantum physicists and engineers worldwide, inspiring them to devise alternative and more effective schemes for mitigating quantum errors. In addition, they could inspire further studies focusing on theoretical aspects of random quantum circuits.
“Previous scattered work on individual algorithms for quantum error mitigation had hinted that these schemes would not be scalable,” Quek said.
“We came up with a framework that captures a large swathe of these individual algorithms. This allowed us to argue that this inefficiency that others had seen is inherent to the idea of quantum error mitigation itself—and has nothing to do with the specific implementation.
“This was enabled by the mathematical machinery we developed, which yields the strongest results known so far on how quickly circuits can lose their quantum information due to physical noise.”
In the future, the paper by Quek, Eisert and their colleagues could help researchers to rapidly identify the types of quantum error mitigation schemes that are most likely to be ineffective. The key conceptual insight of the team's findings is to crystallize the intuition that long-range gates (i.e., gates acting on qubits separated by large distances) are both advantageous and problematic: they readily produce entanglement, advancing the computation, but they also spread noise through the system faster.
“This, of course, opens up the question of whether it is even possible to attain quantum advantage without using these ‘super-spreaders’ of both quantumness and its worst enemy (i.e., noise),” Quek added. “Notably, our results no longer hold when fresh auxiliary qubits are introduced in the middle of the computation, so some amount of that may be necessary.”
In their next studies, the researchers plan to shift their focus from the issues they identified to potential solutions for overcoming them. Some of their colleagues have already made progress in this direction, using a combination of randomized benchmarking and quantum error mitigation techniques.
More information: Yihui Quek et al., Exponentially tighter bounds on limitations of quantum error mitigation, Nature Physics (2024). DOI: 10.1038/s41567-024-02536-7.