Applying Particle Physics Methods to Quantum Computing at Berkeley Lab
(Phys.org) A team of physicists and computer scientists at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) has successfully adapted and applied a common error-reduction technique to the field of quantum computing.
Ben Nachman, a Berkeley Lab physicist who is involved with particle physics experiments at CERN as a member of Berkeley Lab’s ATLAS group, saw the quantum-computing connection while working on a particle physics calculation with Christian Bauer, a Berkeley Lab theoretical physicist who is a co-author of the study. ATLAS is one of the four giant particle detectors at CERN’s Large Hadron Collider, the largest and most powerful particle collider in the world.
“At ATLAS, we often have to ‘unfold,’ or correct for detector effects,” said Nachman, the study’s lead author. “People have been developing this technique for years.”
“We realized that current quantum computers are very noisy, too,” Nachman said. Finding a way to reduce this noise and minimize errors, a practice known as error mitigation, is therefore key to advancing quantum computing. “One kind of error is related to the actual operations you do, and one relates to reading out the state of the quantum computer,” he noted. The first kind is known as a gate error, and the latter is called a readout error.
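To make the readout-error problem concrete, here is a minimal, hypothetical sketch (not from the study; the flip probabilities and shot count are invented for illustration). Readout noise is commonly modeled with a response matrix whose entries give the probability of recording one outcome when the true state is another:

```python
import numpy as np

# Toy single-qubit readout-error model; the numbers below are illustrative only.
# R[i, j] = probability of measuring outcome i given that the true state is j.
p01 = 0.03  # chance of reading 1 when the qubit is really 0
p10 = 0.08  # chance of reading 0 when the qubit is really 1
R = np.array([[1 - p01, p10],
              [p01,     1 - p10]])

true_probs = np.array([0.5, 0.5])        # ideal 50/50 outcome distribution
shots = 10_000
measured_probs = R @ true_probs          # what noisy readout reports on average
measured_counts = np.random.multinomial(shots, measured_probs)
print(measured_counts)                   # skewed away from the ideal 50/50 split
```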
The latest study focuses on a technique to reduce readout errors, called “iterative Bayesian unfolding” (IBU), which is familiar to the high-energy physics community. The study compares the effectiveness of this approach with that of other error-correction and error-mitigation techniques.
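For intuition, the IBU update itself is compact: starting from a prior guess of the true distribution, it repeatedly applies Bayes’ theorem to reassign the measured counts to true states. The sketch below is a generic d’Agostini-style implementation under simple assumptions (a flat prior, a fixed iteration count, and the toy response matrix from above), not the authors’ code:

```python
import numpy as np

def ibu(measured, R, iterations=10):
    """Iterative Bayesian unfolding of measured counts.
    R[i, j] = P(measure outcome i | true state j)."""
    t = np.full(len(measured), measured.sum() / len(measured))  # flat prior
    for _ in range(iterations):
        folded = R @ t                          # expected measured spectrum
        # Bayes' theorem: P(true j | measured i) under the current prior t
        posterior = (R * t) / folded[:, None]
        t = posterior.T @ measured              # reassign counts to true states
    return t

R = np.array([[0.97, 0.08],
              [0.03, 0.92]])                    # toy response matrix from above
measured = np.array([5230.0, 4770.0])           # hypothetical noisy counts
print(ibu(measured, R))                         # estimate of the true counts
```

Because each iteration only redistributes the observed counts via Bayes’ theorem, the estimate stays non-negative, which is one reason unfolding is attractive compared with simply inverting the response matrix, an alternative that can return unphysical negative probabilities on noisy data.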