(Phys.org) Convolutional neural networks running on quantum computers have generated significant buzz for their potential to analyze quantum data better than classical computers can. While a fundamental trainability problem known as “barren plateaus” has limited the application of these neural networks to large data sets, new research overcomes that Achilles’ heel with a rigorous proof that guarantees scalability.
Overcoming barren plateaus is key to extracting the full potential of quantum computers in AI applications and demonstrating their superiority over classical computers.
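The plateau effect itself is easy to see numerically. Below is a minimal sketch (plain NumPy statevector simulation; the layered RY-plus-CZ circuit, the single-qubit ⟨Z⟩ cost, and the depth choice are illustrative assumptions, not the networks studied in the paper) that estimates the variance of one cost gradient over random initializations and watches it collapse as qubits are added:

```python
import numpy as np

rng = np.random.default_rng(0)

def ry(theta):
    # Single-qubit rotation about Y (real-valued, so the whole
    # simulation stays in real arithmetic).
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, qubit, n):
    # Apply a 2x2 gate to one qubit of an n-qubit statevector.
    state = state.reshape([2] * n)
    state = np.tensordot(gate, state, axes=([1], [qubit]))
    state = np.moveaxis(state, 0, qubit)
    return state.reshape(-1)

def apply_cz_chain(state, n):
    # Controlled-Z on each neighboring pair: flip the sign of
    # amplitudes where both qubits are |1>.
    state = state.reshape([2] * n)
    for q in range(n - 1):
        idx = [slice(None)] * n
        idx[q] = idx[q + 1] = 1
        state[tuple(idx)] *= -1
    return state.reshape(-1)

def cost(params, n, layers):
    # Cost = <Z> on qubit 0 after a layered RY + CZ-chain circuit.
    state = np.zeros(2 ** n)
    state[0] = 1.0
    p = params.reshape(layers, n)
    for l in range(layers):
        for q in range(n):
            state = apply_1q(state, ry(p[l, q]), q, n)
        state = apply_cz_chain(state, n)
    probs = (state ** 2).reshape([2] * n)
    return probs[0].sum() - probs[1].sum()  # P(q0=0) - P(q0=1)

def grad_first_param(params, n, layers):
    # Exact gradient of the first parameter via the parameter-shift rule.
    shift = np.zeros_like(params)
    shift[0] = np.pi / 2
    return 0.5 * (cost(params + shift, n, layers)
                  - cost(params - shift, n, layers))

for n in range(2, 8):
    layers = 2 * n  # deep enough to behave like a random circuit (assumption)
    grads = [grad_first_param(rng.uniform(0, 2 * np.pi, layers * n), n, layers)
             for _ in range(200)]
    print(f"{n} qubits: Var[dC/dtheta_0] = {np.var(grads):.2e}")
```

On a typical run the printed variances fall roughly exponentially with the qubit count, which is exactly the flattening that makes gradient-based training infeasible at scale.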
Until now, Cerezo said, researchers in quantum machine learning had analyzed how to mitigate the effects of barren plateaus, but they lacked a theoretical basis for avoiding the problem altogether. The Los Alamos work shows that some quantum neural networks are, in fact, immune to barren plateaus.
“The way you construct a quantum neural network can lead to a barren plateau—or not,” said Marco Cerezo, co-author of the paper “Absence of Barren Plateaus in Quantum Convolutional Neural Networks,” published today in Physical Review X by a Los Alamos National Laboratory team. Cerezo is a physicist specializing in quantum computing, quantum machine learning, and quantum information at Los Alamos. “We proved the absence of barren plateaus for a special type of quantum neural network. Our work provides trainability guarantees for this architecture, meaning that one can generically train its parameters.”
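In the standard formalization of those two statements (a convention from the barren-plateau literature, assumed here rather than spelled out in the article), a cost function C(θ) on n qubits sits on a barren plateau when the variance of its partial derivatives over random parameter settings vanishes exponentially in n, while a trainability guarantee of the kind the team proves bounds that variance from below by an inverse polynomial:

```latex
% Barren plateau: gradients concentrate exponentially fast around zero,
% so the cost landscape is flat almost everywhere.
\mathrm{Var}_{\boldsymbol{\theta}}\!\left[\partial_{\theta_k} C(\boldsymbol{\theta})\right] \in O\!\left(b^{-n}\right), \qquad b > 1.

% Trainability guarantee (absence of a barren plateau): the variance
% shrinks no faster than polynomially, so gradients stay resolvable.
\mathrm{Var}_{\boldsymbol{\theta}}\!\left[\partial_{\theta_k} C(\boldsymbol{\theta})\right] \in \Omega\!\left(\tfrac{1}{\mathrm{poly}(n)}\right).
```

The practical consequence is the one Cerezo names: when the variance shrinks only polynomially, the number of measurements needed to resolve a gradient grows only polynomially with system size, so the parameters remain generically trainable.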
The Los Alamos team developed a novel graphical approach for analyzing the scaling within a quantum neural network and proving its trainability.
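For readers who want the shape of the architecture in question: a quantum convolutional neural network, in the design of Cong et al. that this analysis covers, alternates convolutional layers of parameterized two-qubit unitaries with pooling layers that discard half the qubits, so the layer count grows only logarithmically with the number of qubits. The sketch below prints that schedule; the specific pairing and keep-one-per-pair pooling rule are illustrative assumptions, not taken from the paper:

```python
# Structural sketch of a QCNN layer schedule (illustrative conventions).
def qcnn_schedule(n_qubits):
    active = list(range(n_qubits))
    layer = 0
    while len(active) > 1:
        layer += 1
        # Convolution: a parameterized two-qubit unitary on each active pair.
        conv_pairs = list(zip(active[0::2], active[1::2]))
        # Pooling: keep one qubit per pair, discard (trace out) the other.
        active = active[0::2]
        print(f"layer {layer}: convolve {conv_pairs}, keep {active}")
    print(f"measure qubit {active[0]} after {layer} layers")

qcnn_schedule(8)
# layer 1: convolve [(0, 1), (2, 3), (4, 5), (6, 7)], keep [0, 2, 4, 6]
# layer 2: convolve [(0, 2), (4, 6)], keep [0, 4]
# layer 3: convolve [(0, 4)], keep [0]
# measure qubit 0 after 3 layers
```

Loosely speaking, that logarithmic depth is the structural feature the trainability proof leverages: the network is too shallow for the gradient-flattening concentration that afflicts deep random circuits.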