If errors go unchecked, random circuit sampling, a popular technique for demonstrating the power of quantum computers, does not scale up.
In random circuit sampling, researchers randomly manipulate quantum bits and then measure them. A new paper investigates how quantum computer errors can multiply to thwart these efforts.
In what situations do quantum computers outperform their classical counterparts? That's a difficult question to answer, in part because today's quantum computers are finicky, with errors that can accumulate and sabotage their calculations.
Of course, by one standard they've already pulled ahead. In 2019, Google physicists announced that they had achieved quantum supremacy with a 53-qubit machine, a symbolic milestone indicating that a quantum computer can do something that no practical classical algorithm can. Physicists at the University of Science and Technology of China soon followed suit with similar demonstrations.
Rather than focusing on a single experimental result for a specific machine, computer scientists want to know whether classical algorithms will be able to keep up as quantum computers grow in size. "The hope is that the quantum side will eventually just completely pull away until there is no competition," said Scott Aaronson, a computer scientist at the University of Texas at Austin.
Because of those pesky errors, answering that broad question remains difficult. (Future quantum machines will compensate for flaws using a technique known as quantum error correction, but this capability is still several years away.) Is it possible to achieve the desired runaway quantum advantage even if errors go uncorrected?
Most researchers suspected not, but they couldn't prove it in all cases. In a paper posted to the preprint server arxiv.org, a group of computer scientists has taken a significant step toward a comprehensive proof that error correction is required for a lasting quantum advantage in random circuit sampling, the bespoke problem that Google used to demonstrate quantum supremacy. They achieved this by creating a classical algorithm that can simulate random circuit sampling experiments in the presence of errors.
"It's a beautiful theoretical result," Aaronson said, emphasizing that the new algorithm isn't practical for simulating real-world experiments like Google's.
Researchers begin random circuit sampling experiments with an array of qubits, or quantum bits. They then use operations known as quantum gates to randomly manipulate these qubits. Some gates cause qubit pairs to become entangled, which means they share a quantum state and cannot be described independently. Repeated layers of gates leave the qubits in an increasingly complicated entangled state.
The researchers then measure all of the qubits in the array. Their collective quantum state collapses to a random string of ordinary bits, 0s and 1s. As the number of qubits in the array grows, so does the number of possible output strings: with 53 qubits, as in Google's experiment, it's nearly 10 quadrillion. Furthermore, not all strings are equally likely. Sampling from a random circuit means repeating such measurements many times to build up a picture of the probability distribution underlying the results.
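To make the procedure concrete, here is a minimal toy sketch of random circuit sampling via brute-force statevector simulation. It is hypothetical illustrative code, not Google's setup or the new classical algorithm: it builds a small circuit out of random single-qubit rotations and entangling controlled-Z gates, then samples output bit strings from the resulting distribution. Storing the full statevector of 2^n amplitudes is exactly what becomes intractable long before n reaches 53.

```python
# Toy random circuit sampling by brute-force statevector simulation (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_qubits, n_layers = 4, 3
dim = 2 ** n_qubits

def apply_single_qubit(state, gate, target):
    """Apply a 2x2 unitary to one qubit of the full statevector."""
    psi = state.reshape([2] * n_qubits)
    psi = np.moveaxis(psi, target, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(dim)

def apply_cz(state, a, b):
    """Apply a controlled-Z gate, which entangles qubits a and b."""
    psi = state.reshape([2] * n_qubits).copy()
    idx = [slice(None)] * n_qubits
    idx[a], idx[b] = 1, 1          # amplitudes where both qubits are 1
    psi[tuple(idx)] *= -1
    return psi.reshape(dim)

# Start in |00...0> and apply alternating layers of random rotations and CZ gates.
state = np.zeros(dim, dtype=complex)
state[0] = 1.0
for _ in range(n_layers):
    for q in range(n_qubits):
        theta, phi = rng.uniform(0, 2 * np.pi, size=2)
        ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                       [np.sin(theta / 2),  np.cos(theta / 2)]])
        rz = np.diag([1.0, np.exp(1j * phi)])
        state = apply_single_qubit(state, rz @ ry, q)
    for q in range(0, n_qubits - 1, 2):
        state = apply_cz(state, q, q + 1)

# Measuring all qubits draws a bit string from the distribution |amplitude|^2.
probs = np.abs(state) ** 2
probs /= probs.sum()
samples = rng.choice(dim, size=10, p=probs)
print([format(int(s), f"0{n_qubits}b") for s in samples])
```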
The quantum advantage question is simple: Is it difficult to simulate that probability distribution using a classical algorithm that does not use entanglement?
In 2019, researchers demonstrated that the answer for error-free quantum circuits is yes: it is hard to simulate a random circuit sampling experiment classically when there are no errors. The researchers worked within the framework of computational complexity theory, which classifies the relative difficulty of different problems. Researchers in this field don't treat the number of qubits as a fixed figure like 53. "Think of it as n, which is a growing number," said Aram Harrow, a physicist at the Massachusetts Institute of Technology. "So, should we be doing things where the effort is exponential in n or polynomial in n?"

When n is large enough, an algorithm that is exponential in n lags far behind any algorithm that is polynomial in n. When theorists speak of a problem that is hard for classical computers but easy for quantum computers, they are referring to this distinction: the best classical algorithm takes exponential time, whereas a quantum computer can solve the problem in polynomial time.
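A tiny numerical illustration (hypothetical numbers, not from the paper) shows how stark that distinction becomes: an exponential cost like 2^n dwarfs any fixed polynomial once n is large, and at n = 53 it already reaches the roughly 10 quadrillion figure quoted above, while a cubic polynomial is still under 150,000.

```python
# Back-of-envelope comparison of exponential versus polynomial growth in n.
# Illustrative only: 2**n stands in for brute-force simulation cost, n**3 for
# a generic "efficient" polynomial-time algorithm.
for n in (10, 30, 53, 100):
    print(f"n = {n:>3}:  2^n ≈ {2.0 ** n:.3e}   n^3 = {n ** 3}")
```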
Nonetheless, the effects of errors caused by imperfect gates were ignored in the 2019 paper. This left open the possibility of a quantum advantage for random circuit sampling in the absence of error correction.
If you imagine continually increasing the number of qubits, as complexity theorists do, and you also want to account for errors, you need to decide whether you will keep adding layers of gates as well, increasing what researchers call the circuit depth. Suppose you keep the circuit depth constant at, say, three relatively shallow layers as the number of qubits grows. There won't be much entanglement, and the output will remain amenable to classical simulation. If, on the other hand, you increase the circuit depth to keep pace with the growing number of qubits, the cumulative effects of gate errors will wash out the entanglement, and the output will again become easy to simulate classically.
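One common back-of-envelope way to see why deeper noisy circuits lose their signal is to assume that each imperfect gate multiplies the circuit's overall fidelity by roughly (1 - error rate), so the surviving signal shrinks roughly exponentially in qubits times depth. The numbers below are hypothetical, chosen only to illustrate the trade-off described above, not taken from the paper or from any experiment.

```python
# Rough noise model (illustrative assumption): each gate degrades the overall
# circuit fidelity by a factor of (1 - error_rate), so a circuit with n qubits
# and depth d keeps a signal of roughly (1 - error_rate) ** (n * d).
error_rate = 0.005  # hypothetical per-gate error rate

for n_qubits, depth in [(53, 20), (100, 20), (200, 40), (500, 100)]:
    gates = n_qubits * depth
    fidelity = (1 - error_rate) ** gates
    print(f"{n_qubits:>3} qubits, depth {depth:>3}: "
          f"~{gates} gates, surviving signal ≈ {fidelity:.1e}")
```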
However, there is a Goldilocks zone in between. Prior to the new paper, there was still a chance that quantum advantage could persist here, even as the number of qubits increased. In this intermediate-depth case, you gradually increase the circuit depth as the number of qubits increases: even though the output will gradually degrade due to errors, it may still be difficult to simulate classically at each step.
The new paper closes this loophole. The authors created a classical algorithm for simulating random circuit sampling and demonstrated that its runtime is a polynomial function of the time it takes to run the corresponding quantum experiment. The outcome establishes a strong theoretical link between the speed of classical and quantum approaches to random circuit sampling.
The new algorithm is effective for a wide range of intermediate-depth circuits, but its underlying assumptions fail for some shallower ones, leaving a small gap where efficient classical simulation methods are unknown. However, few researchers believe that random circuit sampling will be difficult to simulate classically in the remaining narrow window. "I give it pretty low odds," said Bill Fefferman, a computer scientist at the University of Chicago and one of the authors of the 2019 paper.
By the rigorous standards of computational complexity theory, then, random circuit sampling will not yield a lasting quantum advantage. At the same time, the result illustrates that polynomial algorithms, which complexity theorists universally refer to as efficient, aren't always fast in practice. The new classical algorithm becomes progressively slower as the error rate decreases, and at the low error rates achieved in quantum supremacy experiments it is far too slow to be practical. With no errors at all, it breaks down entirely, so this result does not contradict anything researchers previously knew about how difficult it is to classically simulate random circuit sampling in the ideal, error-free case.
Sergio Boixo, the Google physicist in charge of quantum supremacy research, describes the paper as "more of a nice confirmation of random circuit sampling than anything else."
All researchers agree on one point: the new algorithm emphasizes how critical quantum error correction will be to the long-term success of quantum computing. "At the end of the day, that's the solution," Fefferman said.
Scott Aaronson serves on the Advisory Board of Quanta Magazine.