GPU, Reinvented: Graphics, AI and the Astonishing New Era of Simulation

Thanks to AI, simulation as we know it may be dead within five years—and graphics cards are accelerating the revolution.

It all started with video games. And pancakes.

It was the 1990s, and games were undergoing the biggest change since the birth of the medium with 1972’s Pong. That 2D classic, its graphics no more than a pinch of white pixels, made way for works of art with colors and characters, style and story. As video games grew up, so did computer graphics. But there was still one big leap to make, and that was into the third dimension.

At a Denny’s diner in 1993, three electrical engineers took a leap of their own. Recognizing that the computational requirements of 3D video games would only go up—as would their sales numbers—the trio of chip designers saw an opportunity. Jensen Huang, Chris Malachowsky and Curtis Priem decided then and there to start a company that would make graphics processing units, or GPUs. They called it Nvidia.

“Video games was our killer app—a flywheel to reach large markets funding huge R&D to solve massive computational problems,” Huang reflected in a 2017 profile in Fortune, which had declared him its Businessperson of the Year. As Nvidia’s CEO, Huang had taken that bold idea from a dingy roadside diner and turned it into a multi-billion-dollar tech behemoth.

Nvidia’s GPU-based graphics cards were a quick hit with video gamers, but Huang had more in mind than amusement. As the flywheel turned, graphics cards evolved to become a core tool for visualization professionals who rely on 3D modeling, including engineers, architects and other CAD and CAE users. Today’s most powerful engineering desktops include not just one, but two, three or even four graphics cards.

A Dell Precision engineering workstation equipped with three Nvidia graphics cards. (Image: Dell.)

The evolution of graphics cards has been a fascinating journey, and it’s far from over. As important as GPUs have been to engineers, they’re about to become even more crucial. That’s because GPUs are in the midst of another transformation, based on one of the most exciting technologies of our time: artificial intelligence (AI). GPUs are quickly becoming the processor of choice for AI applications, promising massive speedups over general-purpose CPUs. GPUs will prove essential as engineering software increasingly pivots to AI—and nowhere is this more apparent than in simulation, which is in the midst of its own rapid transformation.

Why GPUs are such a good fit for AI

Computational graphics and artificial intelligence have an important common factor: the matrix. Not the Keanu Reeves movie (though come to think of it, that works too) but the mathematical array. Deep learning, the most common type of AI in use today, requires a lot of matrix multiplication. It’s the same math you’d use to rotate a 3D object, which is why GPUs, built from the ground up with a highly parallel architecture for exactly these calculations, handle deep learning so well. A central processing unit (CPU) can do the math too, but much less efficiently.
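To see that shared math concretely, here is a minimal sketch in Python with NumPy (the array sizes are purely illustrative): rotating a 3D point and evaluating one layer of a neural network both reduce to the same matrix multiplication.

```python
import numpy as np

# Rotating a 3D point about the z-axis: a 3x3 matrix times a 3-vector.
theta = np.pi / 4
rotation = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])
point = np.array([1.0, 0.0, 0.0])
rotated_point = rotation @ point  # the kind of product a GPU crunches for graphics

# One dense layer of a neural network: a weight matrix times an input vector.
weights = np.random.randn(128, 64)  # 64 inputs to 128 outputs (illustrative sizes)
inputs = np.random.randn(64)
activations = np.maximum(weights @ inputs, 0.0)  # matrix multiply plus ReLU

print(rotated_point, activations.shape)
```

The two workloads differ mainly in the size and sheer number of these products, which is exactly the kind of work a GPU’s many parallel cores are built to chew through.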

Nvidia claims that its A100 GPU can accelerate AI inference on the BERT large language model by 249x compared to a CPU. (Image: Nvidia.)

2016 was a landmark year for AI. Google DeepMind’s AlphaGo program made headlines when it bested one of the world’s top Go players, Lee Sedol, in a definitive 4-to-1 series. This was even bigger news than when chess grandmaster Garry Kasparov lost to IBM’s Deep Blue in 1997. Chess was hard for computers to solve, but Go was much, much harder. As the DeepMind researchers pointed out at the time, the computational search space for Go is more than a googol (10¹⁰⁰) times larger than that for chess.

It was an exciting time for AI research, and for graphics cards too. In 2017, Nvidia released its first GPU with Tensor cores, a processing element designed to accelerate machine learning applications beyond what its graphics-focused CUDA cores could do (in simple terms: even faster matrix multiplication). Other GPU-makers, including AMD and Intel, followed suit with dedicated AI acceleration in their chips as well. As GPUs evolved to facilitate AI, they also began to benefit from it. Today’s cutting-edge graphics cards use AI to make crisper, clearer images with higher resolutions and framerates, and the field of neural graphics is in full swing.
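As a rough illustration of how software reaches those Tensor cores, here is a hedged sketch in PyTorch (the matrix sizes and setup are illustrative, not a benchmark): running a matrix multiply in reduced precision is typically what lets a recent Nvidia GPU dispatch the work to its Tensor cores.

```python
import torch

# A minimal sketch, assuming PyTorch and an Nvidia GPU are available: deep learning
# frameworks typically route reduced-precision matrix multiplies to Tensor cores.
device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

if device == "cuda":
    # Half precision is what allows the hardware to send this matmul to Tensor cores.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        c = a @ b
else:
    c = a @ b  # plain full-precision fallback on the CPU

print(c.shape, c.dtype)
```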

Simulation software was also adapting to the increasing power of GPUs. In 2018, Ansys unveiled an exciting new application called Discovery Live, which, for the first time, allowed real-time simulation thanks to its GPU-based solvers. But that was just the beginning. The stage was set for the rise of AI and the new era of simulation.

Simulation from the bottom-up

Go players may have been impressed by AI back in 2016, but by 2023 almost everyone has felt a share of AI awe. The recent burst of generative AI tools like ChatGPT for text and Midjourney for images has shown that computers can be creative in a way most didn’t think possible. It won’t be long before we start seeing similarly awe-inspiring AI simulation tools.

“Once it works, it will completely obviate the need for FEA [finite element analysis],” Prith Banerjee, chief technology officer at Ansys, told engineering.com.

Banerjee was explaining an exciting approach to AI-based simulation that his company and others are actively pursuing. He calls it the bottom-up approach: developing physics-based AI models that can generate simulation results while bypassing the numerical methods, such as FEA and computational fluid dynamics (CFD), that underpin current solvers.

Much like Midjourney takes in a user’s prompt and generates an image, future simulation software could take in a CAD model and loading conditions and generate an analysis of its behavior in a fraction of the time a traditional solver would need. That speedup could be on the order of 100x from the AI algorithms alone. Add another 10x to 100x from running on GPUs, and you can see how revolutionary this approach could be.
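To make the idea tangible, here is a deliberately simplified sketch in PyTorch of what such a surrogate might look like. The network shape, parameter counts and field sizes are invented for illustration; this is not Ansys’s method, just the general pattern of mapping a design plus its loading conditions straight to a response field.

```python
import torch
import torch.nn as nn

# Illustrative "bottom-up" surrogate: a network that maps design parameters plus
# loading conditions directly to a coarse response field, skipping the FEA solve.
class SimulationSurrogate(nn.Module):
    def __init__(self, n_design_params=16, n_load_params=4, n_field_points=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_design_params + n_load_params, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, n_field_points),  # e.g. stress at each sampled point
        )

    def forward(self, design, loads):
        return self.net(torch.cat([design, loads], dim=-1))

model = SimulationSurrogate()
design = torch.randn(8, 16)  # a batch of 8 candidate designs
loads = torch.randn(8, 4)    # boundary and loading conditions for each
stress_field = model(design, loads)  # one forward pass instead of a full solve
print(stress_field.shape)  # torch.Size([8, 1024])
```

A forward pass like this takes only milliseconds on modern hardware, which is where the projected speedups come from; the hard part, as Banerjee notes, is training such a model to respect the physics across every class of problem.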

The Nvidia A100 GPU. (Image: Nvidia.)

“The bottom-up method is the real end solution… but it’ll take a long time to make it work across all types of physics solvers, structural solvers, electromagnetic solvers and so on,” Banerjee says. How long is anybody’s guess, but Banerjee thinks bottom-up simulation could arrive in the next five years.

The top-down approach is already here, and engineers are embracing it

In the meantime, engineers are already seeing AI impact their simulations thanks to the opposite approach, called top-down. While bottom-up is an attempt to create a general-purpose simulation AI trained on the underlying physics, top-down AI targets specific problems based on narrow training data.

“Suppose you’ve trained aerodynamics on a Honda Civic,” Banerjee explains. “Next time you do another airflow on a Honda Civic, it will work beautifully. But if you give it a Volvo truck, it will not work right, because it hasn’t seen that structure before.”

This top-down approach can be applied to any type of simulation problem, but as soon as the problem is tweaked, top-down simulation breaks down and the AI must be retrained. While this is clearly much more limited than bottom-up simulation, it’s also easier to develop—and that’s why many simulation companies have already begun commercializing it.
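Here is a hedged sketch of that workflow in Python, with synthetic stand-in data rather than any vendor’s tool: fit a surrogate on past simulation runs of one design family, then predict new variants of that same family almost instantly.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Pretend these are parameterized variants of one vehicle shape and the drag
# coefficients a CFD solver produced for them (synthetic stand-in data).
design_params = rng.uniform(size=(500, 6))
drag_coeff = 0.3 + 0.1 * design_params.sum(axis=1) + rng.normal(0, 0.01, 500)

# Train a small surrogate on the historical runs.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(design_params, drag_coeff)

# Near-instant prediction for a new variant of the SAME design family...
new_variant = rng.uniform(size=(1, 6))
print(surrogate.predict(new_variant))

# ...but a radically different geometry (the "Volvo truck" case) lies outside the
# training distribution, so its prediction can't be trusted without retraining.
```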

For example, take physicsAI, a recent tool from simulation provider Altair. Integrated within the company’s HyperWorks platform, physicsAI is trained on historical simulation data provided by users. The machine learning software can then quickly predict outcomes for similar simulations directly on CAD or mesh data, according to Fatma Koçer, vice president of Altair’s engineering data science team.

“An engineer can just drag-and-drop a new design,” Koçer explained to engineering.com in a demo involving an HVAC design. “And then instead of running the solver, they can click on Predict and it will be 10 – 20 seconds to get the entire pressure contour, as opposed to however long the simulation would have run.”

Depiction of how Altair physicsAI is trained on historical simulation data to quickly predict new outcomes. (Image: Altair.)

That speed is key. Koçer says that physicsAI is meant for design exploration, making it much easier for engineers to quickly check a variety of options and dedicate the bulk of their time to the most promising results. As you may expect, the speedy AI can be sped up even further with our old friend the GPU. In one of her tests, Koçer reported a whopping 14-fold speedup from an 8-core laptop CPU to an Nvidia A100 40GB GPU, one of Nvidia’s brawniest chips. A more modest laptop GPU, the Nvidia RTX A4000, provided an 8-fold speedup for training physicsAI.
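For a sense of where such numbers come from, here is a hedged sketch (in PyTorch, with arbitrary layer and batch sizes, not Altair’s benchmark) that times the same short training loop on the CPU and, if one is available, on an Nvidia GPU.

```python
import time
import torch
import torch.nn as nn

def train_steps(device, steps=50):
    """Run a few training steps of a small illustrative network and time them."""
    model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 512)).to(device)
    optimizer = torch.optim.Adam(model.parameters())
    x = torch.randn(256, 512, device=device)
    y = torch.randn(256, 512, device=device)
    start = time.time()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        optimizer.step()
    if device == "cuda":
        torch.cuda.synchronize()  # wait for GPU work to finish before stopping the clock
    return time.time() - start

cpu_time = train_steps("cpu")
if torch.cuda.is_available():
    gpu_time = train_steps("cuda")
    print(f"CPU: {cpu_time:.2f}s  GPU: {gpu_time:.2f}s  speedup: {cpu_time / gpu_time:.1f}x")
else:
    print(f"CPU only: {cpu_time:.2f}s")
```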

The astonishing new era of simulation

AI is having a real impact on CAE today. In addition to top-down solvers, AI is also automating repetitive tasks like data preparation and even learning how to emulate expert preferences on subjective design criteria. In as little as five years, bottom-up AI solvers may redefine what it means to run a simulation, and drastically speed it up in the process. But even that may just be the tip of the iceberg.

The current wave of generative AI, like ChatGPT, offers a taste of what’s to come. Hesitant to put too fine a label on it, Koçer uses the phrase “Generative X.” While a tool like physicsAI can rapidly evaluate a new CAD model, future AI-based tools might just go ahead and generate the model themselves. It won’t be like the current crop of so-called generative design software, which mostly uses non-AI methods to create 3D geometry. It’ll be something better, faster and frankly unknowable until we get there. Whatever it is and whenever it comes, engineers will be eager to try it out.

“The top question [I’m asked] is how are you bringing AI/ML to simulation,” Banerjee says. “This is the top thing on everybody’s mind, both from researchers and practitioners.”

If you’re tempted to dismiss it as a fad, think again. Banerjee has seen disruptive technologies come and go across more than four decades of his distinguished career, but when it comes to AI, the computer scientist has no doubt.

“It is not hype, this is the real thing,” Banerjee says. “I’m particularly excited about the technology of AI/ML applied to simulation and product innovation and the role of GPUs in supporting that disruption.”

Update (08/23/2023): Added test data on training physicsAI with a laptop GPU (RTX A4000). The article originally included this data only for a data center GPU (A100 40GB).

Written by

Michael Alba

Michael is a senior editor at engineering.com. He covers computer hardware, design software, electronics, and more. Michael holds a degree in Engineering Physics from the University of Alberta.