HPC Closing the Gap with Quantum Computing Advantage

Research from HPE found supercomputers can solve a complex problem previously thought to require a quantum solution.

In 2019, Google’s quantum computer, Sycamore, blew the world’s best supercomputer out of the water when it solved a problem 158 million times faster than a classical computer. Although many companies are investing in quantum technology, it will still be years before quantum computing is used for general computing purposes. Key hurdles include producing the number of qubits needed to perform useful calculations and keeping that fragile system of qubits stable.

As quantum computers continue to develop, supercomputers are far from being labeled outdated platforms. High-performance computing (HPC) facilities continue to improve supercomputer technology to support cloud computing and advanced artificial intelligence (AI)/machine learning (ML) applications across every industry. HPC vendors are also working to differentiate their infrastructure from quantum hardware to ensure it remains critical for complex computing workloads in the years to come.

Recently, Hewlett Packard Enterprise (HPE) demonstrated that a 100,000-core supercomputer could solve a quantum-based problem in significantly less time than previously predicted. It’s a major milestone for the future of hybrid computing, where classical and quantum hardware can complement each other to solve increasingly complex problems.

HPC and AI: An Intertwined Future

In the traditional view, AI algorithms process, model and predict data, while HPC hardware physically crunches the numbers. The two also benefit from one another: AI can help HPC optimize workflows, and HPC can accelerate the training of ML models for AI applications.

In 2021, HPE commissioned Forrester Consulting to conduct a survey assessing how AI and HPC integrate across global stakeholders, polling both business decision-makers who rely on AI or HPC and experts in the two fields. The resulting report found that most companies need to improve their computing infrastructure to achieve their AI and advanced data analytics goals (37 percent immediately and 45 percent in the near future). Among the AI experts specifically, eight in 10 respondents said they would need to improve their infrastructure to meet their future AI goals.

To make these improvements, 67 percent of respondents agreed that unifying AI and HPC applications could reduce costs associated with advanced data analytics. Furthermore, 72 percent of HPC experts plan to use more flexible infrastructure to better support both HPC and AI workloads in the next five years.

AI and HPC can also be used to inform one another. AI-augmented HPC uses techniques such as physics-informed neural networks to support or replace traditional HPC simulations in computational fluid dynamics (CFD), molecular dynamics and mechanical design. For example, CERN used deep learning inference in place of Monte Carlo-based simulations and gained a 1.8x speedup.
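
To make the idea concrete, the following is a minimal sketch of a physics-informed neural network, written in PyTorch and not drawn from any of the cited research. The network learns a function u(x) by minimizing the residual of a differential equation, here the simple ODE u'(x) = -u(x) with u(0) = 1, whose exact solution is exp(-x); production HPC workloads apply the same pattern to full PDEs.

```python
# Minimal physics-informed neural network (PINN) sketch, assuming PyTorch.
# The loss is the squared residual of the ODE u'(x) = -u(x) plus a boundary
# term enforcing u(0) = 1, so the physics itself trains the network.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    # Random collocation points in [0, 5] where the ODE residual is evaluated.
    x = (torch.rand(64, 1) * 5.0).requires_grad_()
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    residual = du + u                      # u' + u should be zero everywhere
    boundary = (net(torch.zeros(1, 1)) - 1.0).pow(2).sum()
    loss = (residual ** 2).mean() + boundary
    opt.zero_grad()
    loss.backward()
    opt.step()

print(net(torch.tensor([[1.0]])).item())   # should approach exp(-1) ≈ 0.3679
```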

Quantum Computing Won’t Negate Current and Future Supercomputers

In December 2020, researchers from the University of Science and Technology of China (USTC) and the Chinese Academy of Sciences performed Gaussian boson sampling (GBS) using quantum computing.

GBS involves the measurement of photons from a highly entangled space and was previously considered an experiment that supported quantum primacy. In Scientific American, Daniel Garisto offers an excellent analogy for boson sampling: the bean machine, or, as fans of The Price Is Right would say, Plinko. This toy consists of a pegboard covered with clear glass, where balls are dropped from the top. As the balls move down, they bounce off pegs and land in random slots at the bottom, sometimes resulting in a winning prize.

A bean machine.

On a classical computer, it is easy to simulate how the balls will fall. With boson sampling, the balls are replaced with photons, and the pegs are replaced with prisms and mirrors. Lasers generate photons that bounce off mirrors, pass through prisms and ultimately land in a slot to be detected. The quantum interference of the photons produces an astronomically large space of possible outcomes, making the experiment extremely challenging to simulate with classical computers.
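
A short illustration of the classical half of that contrast, as a plain-Python sketch rather than anything from the cited experiments: simulating a bean machine is trivial, because each ball is just an independent run of coin flips.

```python
# Simulating a classical bean machine: each peg deflects a ball left or right
# with equal probability, so the slot counts converge to a binomial (bell)
# distribution. No comparable shortcut exists for the photonic version.
import random
from collections import Counter

def drop_ball(rows: int = 10) -> int:
    """Final slot index = number of rightward bounces over `rows` pegs."""
    return sum(random.random() < 0.5 for _ in range(rows))

slots = Counter(drop_ball() for _ in range(100_000))
for slot in sorted(slots):
    print(f"slot {slot:2d}: {'#' * (slots[slot] // 500)}")
```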

In their experiment, the researchers at USTC and their quantum computer, Jiǔzhāng, detected 76 photons, a significant increase over the previous record of five photons. For a classical computer to simulate the same experiment, the researchers predicted it would take nearly 600 million years on the world’s fastest supercomputer. By using the photons themselves as qubits, the researchers’ approach differed from quantum computers like Google’s Sycamore, which relies on superconducting metal loops to form qubits and must operate at temperatures close to absolute zero, making Jiǔzhāng a promising development for photonic quantum computing.
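
The source of that classical hardness can be stated concretely: the output probabilities of boson-sampling devices are governed by matrix permanents (hafnians, in the Gaussian variant), and the best known exact algorithms for these quantities scale exponentially. The sketch below, a plain-Python rendering of Ryser’s formula and not taken from the USTC or HPE work, shows the shape of the problem: each additional photon roughly doubles the classical cost.

```python
# Exact matrix permanent via Ryser's formula; this direct form runs in
# O(2^n * n^2) time. Boson-sampling probabilities reduce to such quantities,
# which is why adding photons makes classical simulation explode.
from itertools import combinations

def permanent(a: list[list[float]]) -> float:
    n = len(a)
    total = 0.0
    for k in range(1, n + 1):                  # all non-empty column subsets
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
```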

Challenging that understanding of GBS, a team from HPE’s HPC and AI Business Group, along with Hewlett Packard Labs and researchers at Imperial College London and the University of Bristol, demonstrated competitive simulations of the same experiment using a 100,000-core supercomputer. Contrary to the predictions of the USTC researchers, the HPE group calculated that a GBS simulation would take only 73 days on Japan’s Fugaku supercomputer. With future exascale supercomputers, the researchers predicted, this could drop to about three weeks.

“Today’s research, a result of a strong collaboration between teams at HPE, University of Bristol and Imperial College London, was inspired by the leading edge of quantum computing development to extend the value that supercomputing delivers, when combined with optimized algorithms, to accurately compare computational advantage between classical computers and quantum computers, and set new standards of performance,” said Justin Hotard, senior vice president and general manager, HPC and AI at HPE. “We look forward to furthering this effort by partnering with the quantum computing community and integrating the HPE Cray supercomputer product line with other enabling technologies to advance the journey to developing future quantum computers.”

With this new research, the team at HPE demonstrated that supercomputers can be used to support quantum computing experiments and help bridge the gap between the two technologies. The researchers have shown that even experiments thought to support quantum primacy can be simulated using classical supercomputers, furthering the competition between the two computing strategies and pointing experts toward a future where both platforms will support one another.

A Hybrid AI Future

When comparing computers, many companies are focused on which platform will help them meet their big data goals. This is especially true for Industry 4.0 and the widespread adoption of AI-based data analytics that will support everything from customer relations to manufacturing in the near future.

Although HPC can be used to support the AI-based goals of both industry and academia, many areas of AI stand to gain from quantum computing. For example, a major limitation of current AI models is that they are only as good as the datasets used to train them. Future applications will likely require generative models to expand those datasets and produce more robust AI solutions. Classical computers can already run generative models, but quantum computing could increase the quality and variety of the generated data, such as images, medical scans or simulated molecules. As a result, many HPC centers are looking to expand their hardware offerings by investing in quantum computing over the next five years.
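
As a minimal classical illustration of that augmentation idea, assuming scikit-learn (the model and data here are toys, not anything from the article): fit a simple generative model to a small dataset, then sample synthetic points from it to expand the training set.

```python
# Dataset augmentation with a generative model. A Gaussian mixture stands in
# for the far more capable deep (or, prospectively, quantum) generative models
# discussed above: fit it to real samples, then draw synthetic ones.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
real = np.vstack([rng.normal(0, 1, (100, 2)),    # toy two-cluster dataset
                  rng.normal(4, 1, (100, 2))])

model = GaussianMixture(n_components=2, random_state=0).fit(real)
synthetic, _ = model.sample(1000)                # new points from the model

augmented = np.vstack([real, synthetic])
print(real.shape, "->", augmented.shape)         # (200, 2) -> (1200, 2)
```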

Some experts say that future advancements will likely rely on hybrid approaches to computing, where companies take advantage of the flexibility of HPC in the cloud and the powerful but specialized nature of quantum hardware. Many believe that too much focus is put on comparing classical and quantum computing and speculating over which hardware will dominate the future of computing. The future likely looks more complex, with a combination of both technologies supporting big data projects and AI/ML applications.

With its new research, HPE showcases how HPC can complement and validate quantum computing solutions.