Despite the attractiveness of GPU and cloud computing, firms that require serious number crunching are sticking with in-house HPC.
Importance of HPC to Simulation
Though cloud and GPU computing are growing in popularity, many organizations, especially those with very large problems to solve, are more comfortable sticking with an in-house high-performance computing (HPC) infrastructure.
HPC technology runs simulations on thousands of CPU cores simultaneously. In that sense it is similar to GPU and cloud computing: a tool to speed up your simulations. But by keeping the number crunching in house, organizations that deal with sensitive information can keep a closer eye on their IP. It is therefore important that simulation companies continue to improve the scalability and performance of their software on HPC systems.
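To get a feel for why scalability at these core counts is hard-won, a back-of-the-envelope strong-scaling estimate helps. The sketch below uses Amdahl's law with a made-up serial fraction (an illustrative assumption, not a figure reported by CD-adapco or HLRS) to show how even a tiny non-parallel portion of a solver caps the speedup that tens of thousands of cores can deliver.

```python
# A minimal strong-scaling sketch using Amdahl's law.
# The serial_fraction below is an illustrative assumption,
# not a figure reported by CD-adapco or HLRS.

def amdahl_speedup(cores, serial_fraction):
    """Maximum speedup on `cores` cores when `serial_fraction`
    of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (1_000, 10_000, 55_000):
    speedup = amdahl_speedup(cores, serial_fraction=0.0001)
    print(f"{cores:>6,} cores -> ~{speedup:,.0f}x speedup")
```

Even with only 0.01 percent of the work running serially, 55,000 cores deliver roughly an 8,500x speedup rather than 55,000x, which is why solver scalability remains a moving target for CFD vendors.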
STAR-CCM+ Reaches Scalability to 55,000 Cores
Recently, CD-adapco announced that its CFD simulation software, STAR-CCM+, maintained scalability beyond 55,000 cores. For years, CD-adapco has been testing and benchmarking the performance of its software on petascale (quadrillion, or 10^15, floating-point operations per second) computing systems.
The simulation software was run on the 1.045-petaflop Hermit cluster. The experiments involved benchmark simulations ranging from 500 million to 2 billion cells. The geometry was not split into blocks, and the mesh was generated automatically using parallel meshing.
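For a rough sense of scale (the figures below are illustrative arithmetic, not part of the published benchmark), spreading meshes of that size evenly across 55,000 cores leaves each core with only about 9,000 to 36,000 cells:

```python
# Back-of-the-envelope only: average cells per core when meshes
# of the benchmark sizes are spread evenly across 55,000 cores.
# Illustrative arithmetic, not published benchmark results.

CORES = 55_000

for cells in (500_000_000, 2_000_000_000):
    per_core = cells / CORES
    print(f"{cells:>13,} cells / {CORES:,} cores ~ {per_core:,.0f} cells per core")
```

At per-core workloads that small, communication overhead rather than raw compute tends to dominate, which is what makes sustained scaling at 55,000 cores notable.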
These tests were conducted with CD-adapco's partners SICOS BW and the High Performance Computing Center Stuttgart (HLRS).
“Performing CFD analysis on petascale systems presents a number of interesting difficulties,” said Uwe Küster, head of Numerical Methods & Libraries at HLRS. “Simply running the simulation is only half the battle and so issues such as data management, mesh generation and results visualization will also be studied. We are interested in how clusters, such as Hermit, may be used to solve real engineering problems.”
“The ability to perform simulations with such massive core counts is a real breakthrough for the CAE industry, and a direct result of our close collaboration with HLRS,” said Jean-Claude Ercolanelli, senior vice president of product management at CD-adapco. “In under a year, this project has allowed STAR-CCM+ to effectively run, and scale, on over 50,000 cores. That is very encouraging, and I am excited to see what other breakthroughs we can make,” he added. “The results allow STAR-CCM+ users to gain maximum utility from all of the computing resources that are available to them.”
CD-adapco isn’t the only simulation company to team up with HLRS. Recently, ANSYS and HLRS offered improved HPC computing power to academics around Europe. In that announcement, ANSYS mentioned that its simulation technology currently scales to over 20,000 processors, less than half the core count demonstrated by STAR-CCM+. Nonetheless, it is clear that the simulation giants are not forgetting about HPC technology. It will be interesting to see who ends up on top of the scalability race, and how many organizations will have the HPC capacity to benefit from those improvements, as cloud and GPU computing become more popular.
What do you think? How big is your HPC cluster? Will you ever reach petascale? Or will you go for the cloud and GPU options? Please use the comments section below to let us know.