Score for AI Graphics: NVIDIA Gets Ready for SIGGRAPH 2023

NVIDIA will bring twenty research papers to the premier graphics conference detailing new techniques in generative AI and neural graphics.

NVIDIA today revealed its plans to showcase twenty research papers on generative AI and neural graphics at SIGGRAPH 2023, the upcoming computer graphics conference taking place from August 6 to 10 in Los Angeles.

With applications for engineers, designers, developers, artists and more, NVIDIA’s latest research focuses on producing higher quality visuals more quickly than ever before.

“The research advancements presented this year at SIGGRAPH will help developers and enterprises rapidly generate synthetic data to populate virtual worlds for robotics and autonomous vehicle training,” reads an NVIDIA blog post outlining the company’s SIGGRAPH plans.

A good hair day

One of NVIDIA’s marquee research projects introduces a new approach to a complicated problem: simulating the roughly 100,000 strands of hair on an average human head, which has long caused a headache for 3D animators. NVIDIA says it’s found a method based on neural physics that can “simulate tens of thousands of hairs in high resolution and in real time.”

Simulating hair with neural physics. (Source: NVIDIA.)

The novel approach is optimized for GPUs and, according to NVIDIA, significantly outperforms CPU-based solvers, reducing hair simulation time from days to hours while boosting quality.

Photorealistic digital twins

In another of the twenty papers, NVIDIA will present a technique for neural texture compression that the company says delivers 16x more texture detail without requiring any extra GPU memory. That enables higher realism in 3D scenes. So does another paper on neural materials research, which uses AI to learn how light reflects from photoreal and multi-layered materials.

“NVIDIA research shows how AI models for textures, materials and volumes can deliver film-quality, photorealistic visuals in real time for video games and digital twins,” reads the NVIDIA blog.

NVIDIA’s neural material model learns how light reflects from photoreal reference materials, as seen in this neural-rendered ceramic teapot with a clear-coat glaze. (Source: NVIDIA.)

Much more to come

Other select NVIDIA research papers heading to SIGGRAPH will describe:

  • An AI model called Perfusion that uses a handful of concept images to personalize text-to-image output
  • A method of rendering a photorealistic 3D head-and-shoulders model from a single 2D portrait, enabling AI-based 3D avatars and video conferencing
  • An AI-enabled technique for data compression that reduces the memory needed for volumetric data by 100x

NVIDIA will present these and other research projects at SIGGRAPH, alongside six courses, four talks and two emerging technology demos.

Written by

Michael Alba

Michael is a senior editor at engineering.com. He covers computer hardware, design software, electronics, and more. Michael holds a degree in Engineering Physics from the University of Alberta.