Quantum Particulars Guest Column: “Quantum researchers have a lot to learn from the mistakes of the artificial intelligence community”
“Quantum Particulars” is an editorial guest column featuring exclusive insights and interviews with quantum researchers, developers, and experts looking at key challenges and processes in this field. This article features the opinions of Joan Etude Arrow, the Founder and CEO of the Quantum Ethics Project, who discusses the function and failings of “hype” within the quantum industry.
Following the 1956 Dartmouth Summer Research Project that established the field of artificial intelligence, newly minted AI researchers proclaimed that computers would soon achieve human-level intelligence or greater. These claims were made when computers ran on vacuum tubes, took up whole rooms, and lacked the internet’s bountiful training data essential to AI models today, such as ChatGPT. Even though none of the hardware necessary for sophisticated AI existed, the so-called golden years of AI lasted until 1974 and saw millions of dollars invested at MIT alone to fund research based on overhyped promises.
This story may sound familiar to anyone within spitting distance of quantum computing. Talk to any serious researcher, as I have over the past two years on my quest to understand quantum hype, and they will tell you the level of hype around quantum technologies is close to the top of their concerns. My colleagues worry that, like those researchers in the 50s, we are overselling the capabilities of quantum computers. Quantum computing hardware is still in its infancy, and like the vacuum tubes of the 1950s, our infant qubits are not strong enough to shoulder the promises we place upon them.
This is what I mean by hype, which I define as the difference between a technology’s promised capabilities and its real-world capabilities. AI researchers overpromised roughly 50 years before the hardware could deliver, and as a result confidence in the field collapsed, plunging AI research into a winter of minimal funding and fringe status for decades. The consequence was a snail’s pace of progress in the field.
Today, quantum researchers are flirting with the same disaster. If we do not get a handle on the rampant hype of our field, we run the risk of plunging quantum into a winter of its own. This would guarantee that the much-needed solutions quantum is capable of will not arrive for years or even decades as we struggle to advance quantum hardware on the fringes of technological development and without sufficient funding.
But this article is not a lecture about hype. As I have seen in my own experience, there is broad agreement in the quantum community that hype is a problem; now we need to decide what to do about it. Complicating the issue is the fact that hype is not universally bad. It can be a healthy mechanism for generating excitement, raising funding, and promoting one’s work.
How, then, can we balance our needs to raise funds and sell products with the imperative to avoid a quantum winter through clear and credible science?
I believe that quantifying this differential between promised capability and real-world capability is a good start. We need a credibility metric that attempts to quantify the answer to the following question: how far is your technology’s real-world capability from delivering on its promise?
In the case of quantum algorithms, quantum computational advantage is the overarching goal of the field. A credibility metric for a quantum algorithm could therefore be built by estimating the number of qubits likely needed to achieve quantum advantage and then comparing that number with the largest physical system on which the algorithm has been implemented successfully.
As a simple example: if your algorithm requires at least 100 qubits to operate in a regime that classical computers cannot simulate, thereby establishing the regime of quantum advantage, and the algorithm has so far run successfully on only 7 qubits at a pre-specified solution error, then your real-capability-versus-promise ratio is 7/100 = 7%. The closer that ratio gets to 1 (that is, 100%), the more credible the claim.
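To make the arithmetic concrete, here is a minimal sketch of that ratio in code. The function and parameter names are illustrative choices of mine, not an established standard, and the 100-qubit threshold is simply the heuristic from the example above.

```python
def credibility_ratio(demonstrated_qubits: int, advantage_threshold_qubits: int) -> float:
    """Ratio of demonstrated capability to the (heuristic) quantum-advantage threshold.

    The threshold is a moving target: it rises as classical simulation methods
    improve, so the assumption behind it should always be stated alongside the score.
    """
    if advantage_threshold_qubits <= 0:
        raise ValueError("Advantage threshold must be a positive qubit count.")
    return demonstrated_qubits / advantage_threshold_qubits


# The worked example from the text: 7 qubits demonstrated, ~100 assumed for advantage.
print(f"{credibility_ratio(7, 100):.0%}")  # prints "7%"
```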
It is important to point out that this metric depends on a heuristic: the number of qubits needed to go beyond the quantum simulation capacity of classical computers. That number is not fixed; as ever more sophisticated methods for classically simulating quantum systems are devised, the threshold will rise. So long as the assumptions behind the heuristic are made clear, the credibility score can be an important way to clarify what would otherwise be a prohibitively technical conversation about the progress being made by quantum algorithm researchers.
A similar credibility metric could be constructed for quantum sensing or quantum networking. For quantum sensing, the overarching goal might be a sensor, such as a satellite-free alternative to GPS, that is portable enough to be deployed in the field, for example in someone’s hand or on a plane. Here, the promise is a set of thresholds for size, weight, and sensitivity under field conditions.
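For illustration only, here is one hypothetical way such a sensing metric could be scored, assuming the promise is written down as per-specification targets and that the weakest specification sets the overall score. The spec names, numbers, and aggregation rule below are my own assumptions, not a community standard.

```python
# Hypothetical sensing example: compare each demonstrated specification against
# its promised field-deployment target; the weakest spec sets the overall score.
promised = {"mass_kg": 1.0, "volume_liters": 2.0, "sensitivity_relative": 1.0}
demonstrated = {"mass_kg": 25.0, "volume_liters": 80.0, "sensitivity_relative": 0.5}

ratios = {
    # For mass and volume, smaller is better, so the ratio is target / actual.
    "mass_kg": promised["mass_kg"] / demonstrated["mass_kg"],
    "volume_liters": promised["volume_liters"] / demonstrated["volume_liters"],
    # For sensitivity, higher is better, so the ratio is actual / target.
    "sensitivity_relative": demonstrated["sensitivity_relative"] / promised["sensitivity_relative"],
}

credibility = min(ratios.values())  # the least mature spec caps the credibility score
print(f"Per-spec ratios: {ratios}")
print(f"Overall credibility (weakest spec): {credibility:.0%}")
```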
Clarifying these metrics would reduce hype and showcase progress toward useful quantum technology. It might make for a more sobering sales pitch, but it is essential to ensure that investors, potential customers, and the general public have an accurate understanding of where we are today and how far we have yet to go.
These metrics should be seen as a starting point for getting a handle on the problem of hype. Those of us in the quantum community should work to develop clear, easy-to-understand metrics that make sense for the goals of our specific subfields. Such metrics also do little good if they are buried in a paper’s technical sections; they, and the assumptions they depend on, should be front and center in every abstract to ensure clear and credible scientific communication of our results going forward.
Whether we avoid a quantum winter is up to us. If the success of modern AI has taught us anything, it is that, when it arrives, quantum technology will be a force to be reckoned with. It’s up to us how soon that future is realized.
Joan Etude Arrow is the Founder and CEO of the Quantum Ethics Project. As a Quantum Society Fellow with the Center for Quantum Networks, Joan specializes in quantum machine learning with a particular focus on credible research practices that address issues of hype in the field. As Deputy Director of Education and Workforce Development at Q-SEnSE, Joan is also focused on making quantum technology more accessible, particularly to students from diverse backgrounds.