Scaling quantum computers: Hewlett Packard Enterprise and NVIDIA tackle distributed quantum computation
Researchers are developing new methods for distributing quantum workloads that could enable scaling quantum computers via HPC interconnects
- For the foreseeable future, quantum computers are unlikely to offer more than about 100 logical qubits (that is, 100 variables), yet practical problems often involve 10,000 variables or more
- Scaling quantum computers from hundreds to millions of physical qubits must be the industry's focus in the coming years
- HPE is demonstrating solutions for distributed quantum simulation with NVIDIA
How do we build quantum computers capable of solving useful problems? Last fall, Google announced an experiment on its 105-qubit superconducting device Willow that demonstrated a single error-corrected logical qubit. While this work is a technological milestone, it also shows us just how far the industry still has to go—resource estimates for real-world applications like materials science and chemistry suggest we will need tens of millions of qubits. To reach utility scale, the focus now must be on how to advance from hundreds to millions of qubits.
The first step is clearly laying out all the technical challenges to scaling. This must address not only quantum hardware development, but also systems integration and software. A clear problem on this front is that individual quantum processing units (QPUs) will be limited in size, and most likely much smaller than a million qubits. We must then devise ways of networking many, maybe hundreds, of QPUs into a larger system. This will require tight integration with high-performance computing (HPC), and eventually quantum interconnects to share information between QPUs.
“The path to scalability will require distributed quantum computing and tight integration with HPC at many layers in the stack,” said Masoud Mohseni, a distinguished technologist who leads the quantum computing team at Hewlett Packard Labs.
Quantum interconnects are in the early stages of development and likely will not be available for some time. To network multiple QPUs in the near term, we can use HPC interconnects like Slingshot, as long as we have scalable methods for distributing quantum workloads using only classical communication.
But partitioning quantum workloads is a challenging problem. Quantum entanglement, the long-range correlation between qubits that are not necessarily close in physical space, is fundamental to the power of quantum computing. These quantum correlations can be difficult to characterize, and for a given problem it may not be known beforehand which correlations are important to keep (i.e., which qubits should be located on the same QPU) and which are safe to ignore (a good place to partition).
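To make that intuition concrete, here is a minimal, self-contained sketch (illustrative only, not HPE's Adaptive Circuit Knitting code) that exactly diagonalizes a small transverse-field Ising chain and measures how much entanglement crosses each candidate cut. The chain length, couplings, and the deliberately weak bond are assumptions chosen for illustration; bonds that carry little entanglement are natural places to split a workload across QPUs.

```python
import numpy as np

# Single-qubit operators
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on_site(op, site, n):
    """Embed a single-site operator into an n-qubit Hilbert space."""
    full = np.array([[1.0 + 0j]])
    for k in range(n):
        full = np.kron(full, op if k == site else I2)
    return full

def ising_hamiltonian(n, J, h):
    """Open transverse-field Ising chain: H = -sum_i J_i Z_i Z_{i+1} - h sum_i X_i."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        H -= J[i] * op_on_site(Z, i, n) @ op_on_site(Z, i + 1, n)
    for i in range(n):
        H -= h * op_on_site(X, i, n)
    return H

def cut_entropy(state, cut, n):
    """Entanglement entropy (in bits) between qubits [0, cut) and [cut, n)."""
    schmidt = np.linalg.svd(state.reshape(2**cut, 2**(n - cut)), compute_uv=False)
    p = schmidt**2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

# Two strongly coupled 4-qubit blocks joined by a single weak bond.
n = 8
J = [1.0] * (n - 1)
J[3] = 0.05                      # weak link between qubits 3 and 4
H = ising_hamiltonian(n, J, h=1.0)
energies, states = np.linalg.eigh(H)
ground = states[:, 0]            # ground state (lowest eigenvalue)

for cut in range(1, n):
    print(f"cut after qubit {cut - 1}: entropy = {cut_entropy(ground, cut, n):.3f}")
# The entropy dips sharply at the weak bond: little is lost by partitioning
# the system there and placing each block on its own QPU.
```

In a toy model like this the weak bond is visible by construction; the point of adaptive techniques is to discover such low-entanglement cuts on the fly, while the computation runs, when the structure is not known in advance.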
This is where problem selection will play a part in shaping a holistic co-design of quantum-HPC systems. While we can imagine a future general-purpose quantum computer, one where every qubit can reliably interact with every other qubit, in real problems from physics and chemistry the entanglement is not uniform. There is an inherent structure that could be exploited so that just the hardest parts of the problem are solved on the best available quantum hardware. Perhaps in the future we will have the luxury of doing basic arithmetic on qubits, but until then we will want to use classical HPC and quantum computing together.
Based on this idea, HPE is developing a set of techniques called Adaptive Circuit Knitting that determine efficient places to partition quantum workloads on the fly as the quantum computation is carried out. These approaches have shown a 1-3 order-of-magnitude reduction in computational overhead compared with simple partitioning schemes, and could pave the way to scalable, distributed quantum simulation using only classical HPC interconnects between QPUs. The team at HPE has partnered with NVIDIA to test their algorithms at scale with GPU-accelerated quantum circuit simulation enabled by the NVIDIA CUDA-Q platform. CUDA-Q not only allows these ideas to be developed by running the world's fastest simulations of quantum workloads large enough to warrant Adaptive Circuit Knitting; it also provides a platform built for the hybrid workflows needed to implement circuit knitting, running fast classical computations alongside QPUs to actively determine where to make circuit partitions.
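For readers unfamiliar with CUDA-Q, the sketch below shows the basic shape of a GPU-accelerated simulation in its Python API. It is a generic example rather than part of the HPE workflow; the kernel, qubit count, and shot count are placeholders.

```python
import cudaq

# Select the GPU-accelerated statevector simulator (an NVIDIA GPU is required;
# without one, a CPU target such as "qpp-cpu" can be chosen instead).
cudaq.set_target("nvidia")

@cudaq.kernel
def ghz(num_qubits: int):
    # Prepare a GHZ state: Hadamard on the first qubit, then a chain of CNOTs.
    qubits = cudaq.qvector(num_qubits)
    h(qubits[0])
    for i in range(num_qubits - 1):
        x.ctrl(qubits[i], qubits[i + 1])
    mz(qubits)

# Simulate a 20-qubit circuit with 1,000 measurement shots on the GPU.
counts = cudaq.sample(ghz, 20, shots_count=1000)
print(counts)
```

The same kernel and sampling calls can be retargeted from simulators to physical QPUs, which is what makes the platform a natural fit for hybrid workflows where classical code decides, mid-computation, how to partition circuits.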
This week at GTC Quantum Developer Day, HPE and NVIDIA are showcasing their latest results on 40-qubit simulations of quantum spin systems. To validate the Adaptive Circuit Knitting approach, they used the NVIDIA CUDA-Q platform to carry out reference simulations on the HPE Cray EX supercomputer Perlmutter. These simulations took an average of just 24 minutes, but ran across an impressive 256 nodes with 1,024 NVIDIA A100 GPUs. Because simulating quantum circuits is exponentially costly, these simulations are close to the largest that can be performed with classical supercomputing.
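The exponential cost is easy to see with back-of-the-envelope arithmetic: a dense statevector of n qubits stores 2^n complex amplitudes. The snippet below is simple arithmetic rather than part of the reported experiments, and it assumes a dense double-precision statevector, but it shows why roughly 40 qubits sits near the limit of classical simulation even when spread across a thousand GPUs.

```python
# Memory for a dense n-qubit statevector with double-precision complex
# amplitudes (16 bytes per amplitude): every extra qubit doubles the cost.
def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

n = 40
total = statevector_bytes(n)
print(f"{n}-qubit statevector: {total / 2**40:.0f} TiB in total")
print(f"split over 1,024 GPUs: {total / 1024 / 2**30:.0f} GiB per GPU")
# 16 TiB in total, or 16 GiB per GPU; at 50 qubits it would be ~16 PiB.
```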
“Scaling quantum hardware is the core challenge facing the quantum computing community today. HPE and NVIDIA are working together on novel algorithmic approaches to solving this challenge,” said Elica Kyoseva, director of Quantum Algorithm Engineering at NVIDIA. “Adaptive Circuit Knitting offers a promising route to large-scale, useful quantum computing through quantum workload distribution on smaller quantum devices, similar to parallelization in classical computing. Understanding and realizing these promising methods today rests fundamentally on fast AI supercomputing.”
HPE and NVIDIA’s joint work showcases the power of using CUDA-Q and classical HPC to simulate quantum systems, especially in the near term while QPUs are limited in size. With Adaptive Circuit Knitting techniques, it also provides a path for scaling quantum computers with high-performance hardware that is already available today, that is, without reliance on futuristic quantum interconnects. By developing these techniques now, we lay a stepping stone toward large-scale, distributed quantum computing systems that include both classical and quantum interconnects.
Learn more:
- Position paper led by HPE: How to Build a Quantum Supercomputer: Scaling from Hundreds to Millions of Qubits
- Technical blog post on Adaptive Circuit Knitting: Toward Distributed Quantum Simulation
- HPE’s invited talk at NVIDIA GTC Quantum Developer Day by Masoud Mohseni