The Hardware in Microsoft’s OpenAI Supercomputer Is Insane
Andrew Wheeler | June 02, 2020
The benefit to the AI research lab is not yet clear.
(Image courtesy of Microsoft.)

OpenAI, the San Francisco-based research laboratory co-founded by Elon Musk and Sam Altman, has a stated mission to “ensure that artificial general intelligence benefits all of humanity.” Microsoft invested $1 billion in OpenAI in July 2019 to build a platform of “unprecedented scale.” Recently, Microsoft pulled back the curtain on this project to reveal that its OpenAI supercomputer is up and running, powered by an astonishing 285,000 CPU cores and 10,000 GPUs.

The announcement was made at Microsoft’s Build 2020 developer conference. The OpenAI supercomputer is hosted on Microsoft’s Azure cloud and will be used to train and test massive artificial intelligence (AI) models.

Many AI supercomputing research projects focus on perfecting a single task using deep learning or deep reinforcement learning, as with DeepMind projects such as AlphaGo Zero. A newer wave of AI research explores how these supercomputers can handle multiple tasks at once. At the conference, Microsoft mentioned a few such tasks its AI supercomputer could tackle: examining the huge datasets of code on GitHub (which Microsoft acquired in 2018 for $7.5 billion in stock) in order to generate code of its own, and moderating game-streaming services.

But how will OpenAI benefit from this development? And how would services like these make use of Microsoft’s supercomputer?

Users of Microsoft Teams already benefit from real-time captioning powered by Microsoft’s Turing models for natural language processing and generation, so OpenAI may pursue further natural language projects of its own. For now, the answer is unknown.

(Video courtesy of Microsoft.)

Bottom Line

Large-scale AI implementations from tech giants like Microsoft, which have access to tremendous datasets (the key ingredient for advanced AI, beyond powerful software), could lead to an AI programmer trained on the vast repositories of code on GitHub.

Microsoft’s Turing models for natural language processing use more than 17 billion parameters to decipher language. The number of CPUs and GPUs in Microsoft’s AI supercomputer is almost as staggering as the potential applications the company could build with access to such vast computing power. On that note, Microsoft announced that its Turing models for natural language generation will be open-sourced for developers in the near future, though no exact date has been given.
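To put that 17-billion-parameter figure in perspective, a rough back-of-envelope calculation (not from the article, and assuming common 2-byte and 4-byte numeric precisions) shows why hardware at this scale is needed just to hold such a model in memory:

```python
# Illustrative sketch: approximate memory needed just to store the weights
# of a 17-billion-parameter model, at two common numeric precisions.
PARAMS = 17e9  # parameter count cited for Microsoft's Turing NLG model

for name, bytes_per_param in [("float32", 4), ("float16", 2)]:
    gib = PARAMS * bytes_per_param / 2**30  # bytes -> GiB
    print(f"{name}: ~{gib:.0f} GiB of weights")
```

Even at half precision the weights alone run to roughly 32 GiB, more than a single typical GPU of the era could hold, before counting the activations and optimizer state needed during training.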
