Ordinary computers can beat Google’s quantum computer after all | Science

If the age of quantum computing dawned 3 years ago, its rising sun may have ducked behind a cloud. In 2019, Google researchers claimed to have passed a milestone known as quantum supremacy when their Sycamore quantum computer performed in 200 seconds an abstruse calculation that they said would tie up a supercomputer for 10,000 years. Now, scientists in China have done the calculation in a few hours using ordinary processors. They say a supercomputer could beat Sycamore outright.

“I think they’re right that if they had access to a big enough supercomputer, they could have simulated the … task in seconds,” says Scott Aaronson, a computer scientist at the University of Texas at Austin. The advance takes some shine off Google’s claim, says Greg Kuperberg, a mathematician at the University of California, Davis. “Getting within 300 feet of the top is less exciting than getting to the top.”

Still, the promise of quantum computing remains unfulfilled, Kuperberg and others say. And Sergio Boixo, principal scientist at Google Quantum AI, said in an email that the Google team knew its lead might not last long. “In our 2019 paper, we said that classical algorithms would improve,” he said. But, “we don’t think this classical approach can keep up with quantum circuits in 2022 and beyond.”

The problem Sycamore solved was designed to be hard for a conventional computer but as easy as possible for a quantum computer, which manipulates qubits that can be set to 0, 1, or, thanks to quantum mechanics, any combination of 0 and 1 at the same time. Together, Sycamore’s 53 qubits, tiny resonant electrical circuits made of superconducting metal, can encode any number from 0 to 2^53 (about 9 quadrillion), or even all of them at once.
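The scale of that state space is easy to verify. A register of n qubits is described classically by 2^n complex amplitudes, so the count quoted above follows from simple arithmetic (an illustrative sketch, not anything from the Google paper):

```python
# A register of n qubits requires 2**n complex amplitudes to describe
# classically; for Sycamore's 53 qubits that is roughly 9 quadrillion.
n = 53
dim = 2 ** n
print(dim)  # 9007199254740992, about 9 quadrillion
```

This exponential growth in the classical description is precisely why simulating even a few dozen qubits strains a supercomputer.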

Starting with all qubits set to 0, the Google researchers applied a random but fixed set of logical operations, or gates, to single qubits and pairs of qubits over 20 cycles, then read out the qubits. Broadly speaking, quantum waves representing all possible outputs rippled among the qubits, and the gates created interference that reinforced some outputs and canceled others. So some outputs should have appeared more likely than others. Over millions of trials, a spiky output pattern emerged.
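That spiky pattern can be reproduced in miniature. The sketch below (illustrative only, not Google's circuit) stands in for the fixed random gate sequence with a single Haar-random unitary on a toy 8-qubit register and shows that interference makes some output strings much likelier than the uniform average:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                  # toy register, far smaller than Sycamore's 53 qubits
dim = 2 ** n

# A Haar-random unitary (QR decomposition of a complex Gaussian matrix)
# stands in for the random but fixed sequence of gates.
g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
u, _ = np.linalg.qr(g)

state = np.zeros(dim, dtype=complex)
state[0] = 1.0                    # all qubits start in 0
probs = np.abs(u @ state) ** 2    # amplitudes interfere, then get squared

# A flat distribution would give every string probability 1/dim; instead
# interference produces a spiky spread, with some outputs far above average.
print(probs.max() / probs.mean())
```

The ratio printed at the end is well above 1, the signature of the spiky distribution the Google team measured over millions of runs.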

The Google researchers argued that simulating these interference effects would overwhelm even Summit, a supercomputer at Oak Ridge National Laboratory, which has 9,216 central processing units and 27,648 faster graphics processing units (GPUs). Researchers at IBM, which developed Summit, quickly responded that if they exploited every bit of hard drive available to the computer, it could handle the calculation in a few days. Now, Pan Zhang, a statistical physicist at the Institute of Theoretical Physics of the Chinese Academy of Sciences, and his colleagues have shown how to beat Sycamore in a paper in press at Physical Review Letters.

Following others, Zhang and his colleagues recast the problem as a 3D mathematical array called a tensor network. It consisted of 20 layers, one for each cycle of gates, and each layer comprised 53 dots, one for each qubit. Lines connected the dots to represent the gates, with each gate encoded in a tensor, a 2D or 4D grid of complex numbers. Running the simulation then boiled down to essentially multiplying all the tensors together. “The advantage of the tensor network method is that we can use many GPUs to do the calculations in parallel,” says Zhang.
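A toy version of such a contraction can be written in a few lines. The sketch below (a simplified illustration, not Zhang's actual network) encodes one cycle on two qubits: a 2D Hadamard tensor on each qubit followed by a 4D controlled-Z tensor, contracted by summing over shared indices:

```python
import numpy as np

# Gates as tensors: a 1-qubit gate is a 2D grid of complex numbers,
# a 2-qubit gate a 4D grid (hypothetical example gates, chosen for brevity).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # 2D tensor: Hadamard
CZ = np.diag([1.0, 1, 1, -1]).reshape(2, 2, 2, 2)  # 4D tensor: controlled-Z

psi = np.zeros((2, 2), dtype=complex)
psi[0, 0] = 1.0                                    # both qubits start in 0

# "Multiplying all the tensors" = summing over every shared index.
out = np.einsum('abcd,ce,df,ef->ab', CZ, H, H, psi)

probs = np.abs(out) ** 2
print(probs)   # each of the four output strings has probability 0.25
```

On GPUs these index sums become large batched matrix multiplications, which is why the method parallelizes so well across hundreds of devices.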

Zhang and his colleagues also relied on a key insight: Sycamore’s calculation was far from exact, so theirs didn’t have to be either. Sycamore calculated the distribution of outputs with an estimated fidelity of 0.2%, enough to distinguish the fingerprint-like spikes from circuit noise. So Zhang’s team traded accuracy for speed by cutting some lines in their network and removing the corresponding gates. Cutting just eight lines made the computation 256 times faster while maintaining a fidelity of 0.37%.
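The trade-off can be illustrated with two small random tensors. In the sketch below (a loose analogy, not the team's method), "cutting a line" means keeping only one value of a shared index instead of summing over all of them, which shrinks the work but lowers the overlap with the exact answer:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two tensors joined by a shared index (a "line") of size 4.
A = rng.normal(size=(8, 4)) + 1j * rng.normal(size=(8, 4))
B = rng.normal(size=(4, 8)) + 1j * rng.normal(size=(4, 8))

exact = A @ B                 # full contraction: sum over all 4 index values
approx = A[:, :1] @ B[:1, :]  # "cut" the line: keep 1 of 4 slices, 4x cheaper

# Fidelity-style overlap between the approximate and exact results.
overlap = abs(np.vdot(exact, approx)) ** 2 / (
    np.vdot(exact, exact).real * np.vdot(approx, approx).real)
print(overlap)   # below 1: speed gained, fidelity lost
```

Because a noisy quantum processor only achieves fractional fidelity anyway, a classical simulation can afford the same imperfection and pocket the speedup.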

The researchers computed the output pattern for 1 million of the 9 quadrillion possible number strings, relying on an innovation of their own to obtain a truly random, representative set. The computation took 15 hours on 512 GPUs and yielded the telltale spiky output. “It’s fair to say that the Google experiment was simulated on a conventional computer,” says Dominik Hangleiter, a quantum computer scientist at the University of Maryland, College Park. On a supercomputer, the calculation would take a few dozen seconds, Zhang says, 10 billion times faster than the Google team estimated.

The researchers say the advance highlights the pitfalls of racing a quantum computer against a conventional one. “There is an urgent need for better quantum supremacy experiments,” says Aaronson. Zhang suggests a more practical approach: “We need to find some real-world applications to demonstrate the quantum advantage.”

Still, Google’s demonstration wasn’t all hype, researchers say. Sycamore required far fewer operations and less power than a supercomputer, Zhang notes. And if Sycamore had achieved slightly higher fidelity, he says, his team’s simulation might not have been able to keep up. As Hangleiter puts it, “The Google experiment did what it was meant to do, start this race.”
