MLCommons releases record results from its MLPerf Inference v2.1 open-source benchmark.
MLCommons, an open engineering consortium, announced the latest results from its MLPerf benchmark late last week. MLPerf Inference v2.1 is an industry-standard suite of benchmarks used to measure machine learning inference performance across a variety of models and software stacks on data center and edge systems.
The biggest takeaway from the new results is the growth in the benchmark's use. Compared with the last result release in July, there were 37 percent more performance results and 9 percent more power measurement results, for a total of 5,300 and 2,400 results, respectively.
“We are very excited with the growth in the ML community and welcome new submitters across the globe such as Biren, Moffett AI, Neural Magic, and SAPEON,” said MLCommons Executive Director David Kanter in a news release. “The exciting new architectures all demonstrate the creativity and innovation in the industry designed to create greater AI functionality that will bring new and exciting capability to business and consumers alike.”
Other major industry players using the MLPerf Inference benchmark include Alibaba, ASUSTeK, Azure, Dell, Fujitsu, GIGABYTE, H3C, HPE, Inspur, Intel, Krai, Lenovo, Nettrix, NVIDIA, OctoML, Qualcomm, and Supermicro, according to MLCommons.
To guarantee a level playing field among these competitors and to drive innovation and performance across the industry, the MLPerf benchmark suite is open source and peer reviewed. It covers a variety of machine learning use cases, including medical imaging, natural language processing, speech-to-text, recommendation, object detection, and image classification.
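For a sense of how those workloads are actually measured, MLCommons' reference implementations drive each model through the consortium's LoadGen library, which generates queries under a chosen scenario and records latencies and throughput. The following is a minimal, hypothetical harness sketch using the mlperf_loadgen Python bindings from the mlcommons/inference repository; the dummy predict function and sample counts are placeholders rather than any submitter's actual system, and binding details can differ across LoadGen releases.

```python
# Illustrative LoadGen harness sketch (not a real MLPerf submission).
# Assumes the mlperf_loadgen Python bindings from github.com/mlcommons/inference.
import array
import numpy as np
import mlperf_loadgen as lg

def predict(sample_index):
    # Placeholder "model": a real harness would run inference on the
    # system under test (SUT) here.
    return np.array([sample_index % 1000], dtype=np.int32)

def issue_queries(query_samples):
    # LoadGen hands us a batch of queries; answer each and report back.
    responses, buffers = [], []
    for qs in query_samples:
        result = predict(qs.index)
        buf = array.array("B", result.tobytes())
        buffers.append(buf)  # keep the buffer alive until LoadGen copies it
        addr, _ = buf.buffer_info()
        responses.append(lg.QuerySampleResponse(qs.id, addr, len(buf)))
    lg.QuerySamplesComplete(responses)

def flush_queries():
    pass  # nothing is buffered in this toy SUT

def load_samples(sample_indices):
    pass  # a real harness would stage these samples into memory

def unload_samples(sample_indices):
    pass

settings = lg.TestSettings()
settings.scenario = lg.TestScenario.Offline  # one of the MLPerf scenarios
settings.mode = lg.TestMode.PerformanceOnly

sut = lg.ConstructSUT(issue_queries, flush_queries)
qsl = lg.ConstructQSL(1024, 1024, load_samples, unload_samples)  # total, in-memory
lg.StartTest(sut, qsl, settings)
lg.DestroyQSL(qsl)
lg.DestroySUT(sut)
```

In a real submission, issue_queries would dispatch work to the accelerator under test, and the scenario (SingleStream, MultiStream, Server, or Offline) determines how LoadGen paces and scores the queries.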
NVIDIA, for instance, has been using MLPerf to test its graphics processing unit (GPU) accelerators, chief among them the H100, which the company debuted in March 2022. According to NVIDIA, the H100 delivered up to 4.5 times the performance of the previous-generation A100 as measured with MLPerf Inference.
Few technologies can deliver capabilities as extensive as machine learning's. As more enterprises put those capabilities to work in different ways, engineers increasingly need to understand the performance characteristics of machine learning systems in order to improve them.