In a special address at the Vancouver graphics conference, NVIDIA leaders presented their exciting vision of the metaverse.
Computer graphics conference SIGGRAPH is ongoing this week in Vancouver, Canada, and NVIDIA executive Rev Lebaredian believes it will go down in history as one of the most important in the conference’s nearly 50-year run. During a media briefing last week, the vice president of Omniverse and simulation technology described the present moment as an inflection point for computer graphics technology, comparable to the explosion of CGI set off by the trailblazing 1993 blockbuster Jurassic Park.
Today, the fuse is being lit by the so-called metaverse, a new type of internet composed of virtual 3D worlds.
Lebaredian was one of several NVIDIA executives who (virtually) took the SIGGRAPH stage today alongside founder and CEO Jensen Huang for a special address outlining how the company is preparing for the metaverse through a combination of artificial intelligence (AI) and computer graphics.
Neural Graphics
Though the NVIDIA brand is synonymous with graphics cards, the company has long been infatuated with AI—and for the past few years, it’s been building a bridge between both worlds. In 2017 NVIDIA’s Volta GPU microarchitecture introduced Tensor Cores, specialized processing units for machine learning. Today, NVIDIA sees such a close link between AI and graphics that it’s coined the term “neural graphics” to describe it.
“Neural graphics intertwines AI and graphics, paving the way for a future graphics pipeline that is amenable to learning from data… Neural graphics will redefine how virtual worlds are created, simulated and experienced by users,” said Sanja Fidler, vice president of AI at NVIDIA, during the special address.
Take NVIDIA Instant NeRF (neural radiance fields), a machine learning algorithm that uses NVIDIA graphics cards to rapidly construct three-dimensional scenes from a series of two-dimensional photographs taken at different angles. The research was presented in “Instant Neural Graphics Primitives with a Multiresolution Hash Encoding,” one of 16 papers NVIDIA submitted to SIGGRAPH. It won one of the conference’s five Best Paper awards.
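The core idea behind NeRF-style rendering can be sketched independently of NVIDIA's implementation: a trained network maps a 3D position to a volume density and color, and each pixel is produced by alpha-compositing samples along a camera ray. Below is a minimal sketch of that compositing step only (this is not NVIDIA's code; the densities and colors are stand-in values rather than network outputs):

```python
import math

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one ray, the standard NeRF
    volume-rendering quadrature.

    densities: per-sample volume density sigma_i (>= 0)
    colors:    per-sample RGB color c_i (tuples of 3 floats in [0, 1])
    deltas:    distance delta_i between adjacent samples
    Returns the accumulated RGB color of the ray.
    """
    rgb = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light that has survived so far
    for sigma, color, delta in zip(densities, colors, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)  # opacity of this sample
        weight = transmittance * alpha
        for k in range(3):
            rgb[k] += weight * color[k]
        transmittance *= 1.0 - alpha
    return rgb

# An effectively opaque red sample in front of a green one:
# the red sample absorbs the ray, so the result is pure red.
print(composite_ray([1e6, 1e6], [(1, 0, 0), (0, 1, 0)], [1.0, 1.0]))
# → [1.0, 0.0, 0.0]
```

Instant NeRF's contribution is making the learned density/color field fast enough to train and query in seconds, via the multiresolution hash encoding named in the paper's title; the compositing math above is unchanged.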
Another Best Paper award went to NVIDIA’s “Image Features Influence Reaction Time: A Learned Probabilistic Perceptual Model for Saccade Latency,” which uses neural graphics to predict and alter eye reaction times in AR/VR applications.
NVIDIA announced two neural graphics SDKs at SIGGRAPH: Kaolin Wisp, which provides tools to quickly create AI models representing 3D objects; and NeuralVDB, an extension of the OpenVDB library which dramatically reduces the memory requirements of sparse volumetric data sets such as simulations of water, fire, smoke and clouds.
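The memory win from sparse volumetric formats like OpenVDB is easy to illustrate: effects such as smoke or clouds occupy a tiny fraction of their bounding grid, so storing only the active voxels avoids paying for empty space. The toy sketch below uses a plain dictionary as a stand-in for OpenVDB's actual tree structure (NeuralVDB goes further still by compressing the data with neural networks):

```python
def dense_size(shape):
    """Number of values needed to store the full bounding grid."""
    nx, ny, nz = shape
    return nx * ny * nz

class SparseVolume:
    """Toy sparse grid: store only voxels whose value differs from a
    background value, keyed by their (i, j, k) coordinates."""
    def __init__(self, background=0.0):
        self.background = background
        self.voxels = {}

    def set(self, i, j, k, value):
        if value != self.background:
            self.voxels[(i, j, k)] = value
        else:
            self.voxels.pop((i, j, k), None)  # reverting to background frees it

    def get(self, i, j, k):
        return self.voxels.get((i, j, k), self.background)

# A 256^3 grid containing a thin "cloud" of 1,000 active voxels:
vol = SparseVolume()
for n in range(1000):
    vol.set(n % 256, (n // 256) % 256, 40, 0.5)

print(dense_size((256, 256, 256)))  # 16,777,216 values stored densely...
print(len(vol.voxels))              # ...versus 1,000 stored sparsely
```

The real library replaces the dictionary with a shallow tree of nodes and bit masks for cache-friendly access, but the principle is the same: memory scales with occupied space, not with the bounding box.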
All About Avatars
Huang strongly believes that the future metaverse will be rife with virtual assistants called avatars (think Siri, but in 3D). He’s even got one of his own nicknamed Tiny Jensen. Today, his company announced Omniverse Avatar Cloud Engine, or ACE, a set of AI models and services that customers will be able to use to build their own avatars. For example, NVIDIA’s Audio2Face tool takes voice input and turns it into facial animation.
“With Omniverse ACE, developers can build, configure and deploy their avatar application across any engine in any public or private cloud. ACE will democratize the ability to build and deploy realistic AI-driven intelligent avatars,” said Simon Yuen, director of graphics and AI at NVIDIA, in the SIGGRAPH address.
ACE will be available early next year, according to an NVIDIA blog post coinciding with the special address.
Omniverse for the Metaverse
And what about the Omniverse of it all? Naturally, the NVIDIA executives were eager to point out that their 3D collaboration platform is just the sort of thing that’s needed to flesh out the metaverse, and it has some major updates.
Omniverse is expanding with new links to third-party software, called Omniverse Connectors. Connectors are now in development for Alias, Blender, Siemens JT, SimScale, Unity, and more, and there are Connectors in beta for Creo, Houdini, and Visual Components.
NVIDIA also enhanced its PhysX simulation engine, which now supports soft-body and particle-cloth simulation, and announced that it is fully open-sourcing its Material Definition Language (MDL), used to create physically accurate materials, so that MDL can support popular graphics APIs including Vulkan and OpenGL. A new Omniverse feature called DeepSearch will use AI to search through databases of 3D assets, and an Omniverse extension for NVIDIA Modulus will use AI to accelerate physics simulations by up to 100,000x.
Omniverse will also support NVIDIA’s neural graphics research, including Instant NeRF and GauGAN360, which generates 360-degree panoramas in 8K resolution that can be loaded into an Omniverse scene—all from a brief sketch or text description.
Finally, NVIDIA is doing all it can to advance Universal Scene Description (USD), the Pixar-created open-source standard underpinning Omniverse that NVIDIA constantly likens to “the HTML of 3D.” NVIDIA plans to release a USD compatibility and certification suite as well as a set of USD assets designed for industrial digital twins and AI training workflows.
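The "HTML of 3D" analogy is apt in one concrete sense: like HTML, a USD stage can be expressed as human-readable text, in the .usda format. The sketch below hand-writes a minimal scene as a string rather than authoring it through Pixar's pxr API, which production pipelines would use; the prim names are illustrative:

```python
# A tiny hand-written .usda scene: an Xform prim containing a Sphere.
# Real pipelines would typically author this via the pxr (OpenUSD) API.
usda_scene = """#usda 1.0
(
    defaultPrim = "World"
)

def Xform "World"
{
    def Sphere "Ball"
    {
        double radius = 2
    }
}
"""

with open("scene.usda", "w") as f:
    f.write(usda_scene)

print(usda_scene.splitlines()[0])  # → #usda 1.0
```

Because the format is an open, inspectable scene description rather than a proprietary binary, tools from different vendors can read, layer, and compose the same file, which is exactly the interoperability role HTML plays on the web.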
“Our next milestones aim to make USD performant for real-time, large-scale virtual worlds and industrial digital twins,” Lebaredian said.
Huang capped off the special address by reiterating his excitement for the next era of the internet.
“The announcements we made today further advance the metaverse, a new computing platform with new programming models, new architectures, and new standards. Our industry is set for the next wave. The combination of AI and computer graphics will power the metaverse, the next evolution of the internet,” Huang said.