Nvidia has been gathering momentum in the quantum computing sector over the last few years, releasing quantum resources and software and piling up partnerships, but a big batch of quantum announcements coming out of the Nvidia GTC event in San Jose, California, this week suggests the company’s quantum ambitions are ramping up.
Among those announcements is the launch of Nvidia Quantum Cloud, a cloud-based simulation platform built on the company’s open-source CUDA-Q quantum computing programming and integration platform, which is used by three-quarters of the companies deploying quantum processing units, or QPUs. Delivered as a microservice, it lets users build and test new quantum algorithms and applications in the cloud, including powerful simulators and tools for hybrid quantum-classical programming.
Tim Costa, director of HPC and quantum computing at Nvidia, said that while cloud services that give users direct access to QPUs exist, Nvidia Quantum Cloud will provide cloud access to Nvidia’s quantum tools and GPU resources to run simulation projects and other tasks.
“One of the challenges that we want to address for the quantum research community is improving access to quantum resources,” Costa said. “If you look at the ecosystem today… We estimate around 500,000 quantum developers are out there doing work, but there are only about 50 publicly available QPUs. Their uptime is around 10% to 20%. They have zero fault tolerant qubits… And if you start looking at CPUs as an alternative, what can take an hour on a GPU cluster will take about 7.5 years on a CPU.”
Nvidia Quantum Cloud, now available as an early-access release, lets developers locally compile any CUDA-Q program and set the target, in the compile line or in the configuration of a Python script, to Nvidia Quantum Cloud. “… and then you run it from your laptop and it sends that off to Nvidia Quantum Cloud on Nvidia GPU resources, and you get your result back in seconds instead of 20 minutes to an hour to days or years on your local CPU,” Costa said. “So it really provides seamless access to the acceleration of the Nvidia quantum platform for any quantum programmer.”
While the offering in its early-access phase is centered around GPUs, Nvidia Quantum Cloud also will eventually broaden to include back-end support for QPUs from Nvidia partners. “We want to really remove any barriers to access here. So if you’re doing quantum research on Nvidia GPUs, using them as a quantum resource for emulation and simulation, that’s available day one in this early-access program,” Costa said. “But long-term, we want to bring our partners in and of course we’re focused on the integration of quantum and classical together, so our GPUs and Nvidia Quantum Cloud will work with the QPUs we provide through our partners as a part of that back-end support.”
Quantum Cloud also features powerful capabilities and third-party software integrations to accelerate scientific exploration, including:
- The Generative Quantum Eigensolver, developed in collaboration with the University of Toronto, leverages large language models (LLMs) to enable a quantum computer to find the ground-state energy of a molecule more quickly.
- Classiq’s integration with CUDA-Q allows quantum researchers to generate large, sophisticated quantum programs, as well as to deeply analyze and execute quantum circuits.
- QC Ware Promethium tackles complex quantum chemistry problems such as molecular simulation.
Supercomputer projects
In addition to the new cloud service, Nvidia announced involvement in two quantum-focused supercomputer projects. The first and larger of the two is the ABCI-Q supercomputer at Japan’s National Institute of Advanced Industrial Science and Technology. When finished, it will be one of the largest supercomputers dedicated to research in quantum computing, with more than 2,000 of Nvidia’s H100 GPUs and more than 500 nodes connected by InfiniBand and powered by the CUDA-Q platform.
The second supercomputer will be in Denmark, where the Novo Nordisk Foundation will lead the deployment of a DGX SuperPOD, Nvidia’s AI data center infrastructure. Costa said a significant part of this machine will be dedicated to quantum computing research, in alignment with Denmark’s national plan to advance the technology.
These deployments are similar to Nvidia’s role in Australia’s Pawsey Supercomputing Research Centre. Nvidia and Pawsey announced last month that the supercomputer at the National Supercomputing and Quantum Computing Innovation Hub there will run CUDA-Q on Nvidia’s Grace Hopper Superchips.
PQC support
Also on the quantum front this week: As the National Institute of Standards and Technology prepares to finalize its initial post-quantum cryptography standard algorithms later this year, Nvidia is putting its GPU computing resources into play to help make PQC adoption more practical. The company is releasing a software library called cuPQC that contains the mathematical primitives required to implement quantum-safe encryption practically, since some computing infrastructure may run these algorithms too slowly for them to be effective, Costa said. “We’re looking to enable this community to not only accelerate today’s algorithms, those being standardized by NIST, but also to develop and do research into more quantum-safe algorithms with the performance of Nvidia GPUs behind them so that they can be practical to implement.”
CUDA-Q Academic
Finally, Nvidia also said this week that it has been working with many universities to develop a series of modules and coursework to help a broad range of students and professionals from different sectors learn how to interface and work with quantum computers. These modules and other content will now be available under the name CUDA-Q Academic, both through the universities and digitally.
Dan O’Shea has covered telecommunications and related topics including semiconductors, sensors, retail systems, digital payments and quantum computing/technology for over 25 years.