NVIDIA sets record for quantum computing simulation with cuQuantum
(HPCWire) NVIDIA just broke a record with major implications for quantum computing, and it is making the software behind it available so anyone can do this work.
NVIDIA has created the largest-ever simulation of a quantum algorithm for solving the MaxCut problem using cuQuantum, NVIDIA's SDK for accelerating quantum circuit simulations on GPUs.
In the math world, MaxCut is often cited as an example of an optimization problem no known computer can solve efficiently. MaxCut algorithms are used to design large computer networks, find the optimal layout of chips with billions of silicon pathways and explore the field of statistical physics.
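To make the problem concrete, here is a minimal sketch (plain Python, purely illustrative and not part of cuQuantum) that brute-forces MaxCut on a tiny example graph. Checking every partition takes 2^n tries, which is why large instances quickly become intractable for classical computers.

```python
from itertools import product

# Tiny example graph as an edge list. MaxCut asks for a split of the
# vertices into two sets that "cuts" (separates) as many edges as possible.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
num_vertices = 4

best_cut, best_partition = -1, None
# Brute force: try every assignment of each vertex to side 0 or side 1.
# This is O(2^n), which is exactly why MaxCut is hard at scale.
for assignment in product([0, 1], repeat=num_vertices):
    cut = sum(1 for u, v in edges if assignment[u] != assignment[v])
    if cut > best_cut:
        best_cut, best_partition = cut, assignment

print(f"best cut value: {best_cut}, partition: {best_partition}")
```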
MaxCut is a key problem in the quantum community because it’s one of the leading candidates for demonstrating an advantage from using a quantum algorithm.
NVIDIA used the cuTensorNet library in cuQuantum, running on its in-house supercomputer, Selene, to simulate a quantum algorithm for solving the MaxCut problem. Using GPUs to simulate 1,688 qubits, the team was able to solve a graph with a whopping 3,375 vertices. That's 8x more qubits than the previous largest quantum simulation. The solution was also highly accurate, reaching 96% of the best-known answer.
The breakthrough opens the door for using cuQuantum on NVIDIA DGX systems to research quantum algorithms at a previously impossible scale, accelerating the path to tomorrow’s quantum computers.
The first library from cuQuantum, cuStateVec, is in public beta and available to download. It uses state vectors to accelerate simulations with tens of qubits.
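To see why state-vector methods top out around tens of qubits, consider a minimal NumPy sketch (purely illustrative, not the cuStateVec API): the simulator stores all 2^n complex amplitudes and applies each gate as a small tensor contraction, so memory grows exponentially with the qubit count.

```python
import numpy as np

def apply_single_qubit_gate(state, gate, target, num_qubits):
    """Apply a 2x2 gate to one qubit of a dense state vector."""
    # Reshape the 2^n vector into a tensor with one axis per qubit,
    # contract the gate into the target axis, then flatten back.
    psi = state.reshape([2] * num_qubits)
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

num_qubits = 3
state = np.zeros(2**num_qubits, dtype=np.complex128)
state[0] = 1.0  # start in |000>

H = np.array([[1, 1], [1, -1]], dtype=np.complex128) / np.sqrt(2)
for q in range(num_qubits):
    state = apply_single_qubit_gate(state, H, q, num_qubits)

print(state)  # uniform superposition over all 8 basis states
# A 40-qubit state already needs 2^40 * 16 bytes (about 16 TiB),
# which is why dense state-vector simulation stops at tens of qubits.
```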
The cuTensorNet library that set the world record uses tensor networks to simulate up to hundreds or even thousands of qubits on some promising near-term algorithms. It will be available in December.
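Tensor-network simulation takes a different route: the circuit is kept as a network of small gate tensors and contracted directly, so the full 2^n state vector never has to be materialized for many quantities of interest. Below is a rough NumPy/einsum sketch of the idea (again illustrative, not the cuTensorNet API), contracting a tiny Bell-state circuit as a network of tensors.

```python
import numpy as np

# Gates as tensors: a 2-qubit circuit H(q0) then CNOT(q0, q1), preparing a Bell state.
H = np.array([[1, 1], [1, -1]], dtype=np.complex128) / np.sqrt(2)
CNOT = np.zeros((2, 2, 2, 2), dtype=np.complex128)  # indices: (out0, out1, in0, in1)
for a in range(2):
    for b in range(2):
        CNOT[a, a ^ b, a, b] = 1.0  # target flips when the control is 1

zero = np.array([1, 0], dtype=np.complex128)  # |0> input on each wire

# Contract the whole network in one einsum: inputs -> H on wire 0 -> CNOT.
# The contraction order, not the qubit count alone, drives the cost.
amplitudes = np.einsum("a,b,ca,decb->de", zero, zero, H, CNOT)
print(amplitudes)  # Bell state: amplitude 1/sqrt(2) on |00> and |11>
```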
Learn more about cuQuantum’s partner ecosystem here.