The Turing GPU Architecture and NVIDIA’s RTX Graphics Cards
Michael Alba posted on March 01, 2019
Die of the new NVIDIA Turing TU102 GPU. (Image courtesy of NVIDIA.)

Last summer, graphics powerhouse NVIDIA announced Turing, its latest GPU architecture. Turing, the company claimed, was a big leap forward in graphics technology. NVIDIA CEO Jensen Huang put it this way: “Turing is NVIDIA’s most important innovation in computer graphics in more than a decade.”

Why all the fuss? The headline feature of Turing is a new type of processing unit called an RT Core, which accelerates a rendering process known as ray tracing. Ray tracing produces incredibly photorealistic images, but it’s a computationally expensive technique. The promise of Turing’s new RT Cores is their ability to perform ray tracing in real time.

“The arrival of real-time ray tracing is the Holy Grail of our industry,” Huang said.

In this article, we’ll explore the new Turing architecture and the promise of real-time ray tracing.

What Else Is New?

Diagram of the Turing TU102 architecture. (Image courtesy of NVIDIA.)

Turing, the successor to the Volta architecture, is more than just the new RT Cores. The new architecture also comes with new shading features, updated memory, improvements to machine learning capabilities, and several other enhancements.

For example, Volta introduced Tensor Cores, a type of processing unit dedicated to the matrix operations necessary for machine learning applications. In Turing, Tensor Cores have been enhanced with new INT8 and INT4 precision modes. Furthermore, NVIDIA has launched a new deep learning framework called NVIDIA NGX that allows developers to access pre-trained neural networks for enhanced graphics.
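To get a feel for what those lower-precision modes buy you, here is a simplified NumPy sketch of INT8 quantization (an illustration of the general idea, not NVIDIA's actual Tensor Core API): each value is squeezed into an 8-bit integer, a quarter the size of a 32-bit float, trading a small amount of accuracy for much higher arithmetic throughput.

```python
import numpy as np

np.random.seed(0)  # deterministic example

def quantize_int8(x):
    """Map a float array onto int8 [-127, 127] with a per-tensor scale."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

# Two small matrices, as might appear in a neural network layer.
a = np.random.randn(4, 8).astype(np.float32)
b = np.random.randn(8, 4).astype(np.float32)

qa, sa = quantize_int8(a)
qb, sb = quantize_int8(b)

# Integer multiply-accumulate (int32 accumulator), then rescale to float.
approx = qa.astype(np.int32) @ qb.astype(np.int32) * (sa * sb)
exact = a @ b
print(np.abs(approx - exact).max())  # small quantization error
```

The hardware does the same trick at scale: the multiply-accumulate runs in cheap integer units, and only the final rescale touches floating point.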

Another benefit of Turing is its memory: it swaps the GDDR5X video random access memory (VRAM) used by the previous generation of GeForce cards for the latest generation, GDDR6, which promises 20 percent better power efficiency than its predecessor. Turing also adds support for the new VirtualLink virtual reality (VR) standard. VirtualLink is an open standard that uses a single USB-C port to connect head-mounted displays (HMDs) for VR. The standard is backed by major VR players including Oculus, HTC Vive, Microsoft and NVIDIA, among others.

Turing also has a redesigned streaming multiprocessor (SM) that NVIDIA claims has 50 percent better shading efficiency per CUDA Core compared to the previous architecture. Let’s break that down: CUDA Cores are the third type of core in Turing, and they perform the main graphics processing calculations for the GPU. The SM is essentially like a factory, and CUDA, Tensor and RT Cores are the three types of workers in that factory. Put a bunch of these factories together, add in some memory and a few other odds and ends, and you’ve got your GPU.

Now, a quick word about the graphics cards powered by Turing GPUs. NVIDIA’s latest generation of graphics cards all use the Turing architecture, but that name applies only to the architecture itself. The cards carry a different name: RTX, short for ray tracing. The RTX name applies to the new GeForce RTX series, the Quadro RTX series and the Titan RTX (one Turing-based card, the data center–oriented Tesla T4, escaped the moniker). Since NVIDIA named nearly an entire generation of graphics cards after ray tracing, let’s give the process a closer look.

Real-Time Ray Tracing

The GPU’s job is to take information about a 3D scene, like the positions of a bunch of polygons, and turn it into a 2D representation suitable for your computer monitor. Traditionally, this is accomplished through a rendering technique called rasterization. Rasterization involves a series of calculations to account for position, lighting, color and texture information for each pixel on the screen. Today’s GPUs employ highly efficient rasterization algorithms that allow complex scenes, such as those in video games, to be refreshed dozens of times per second. Rasterization can produce high quality images, but it struggles with visuals like reflections and shadows.
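At its core, rasterization loops over screen pixels and asks which primitives cover each one. Here is a toy version of that inner coverage test (the function names are hypothetical, and a real GPU performs this in massively parallel fixed-function hardware rather than a Python loop):

```python
def edge(ax, ay, bx, by, px, py):
    """Signed-area test: which side of edge A->B does point P fall on?"""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, width, height):
    """Return the set of pixel coordinates covered by a 2D triangle."""
    covered = set()
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            # Inside if all three edge tests agree (counter-clockwise winding).
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                covered.add((x, y))
    return covered

pixels = rasterize_triangle((1, 1), (8, 2), (3, 7), 10, 10)
print(len(pixels))  # number of pixels the triangle covers
```

Notice that every pixel is computed independently from local information only, which is why rasterization parallelizes so well, and also why global effects like reflections (which depend on distant parts of the scene) are hard to get right.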

The alternative to rasterization is a rendering technique called ray tracing. This technique is so-named because it’s based on the physics of light rays. In an analogy to how our own eyeballs collect information about the 3D world around us, ray tracing uses simulated light rays to collect information about a 3D scene. Typically (and unlike our own eyeballs), these rays are projected from the camera onto the scene, where they bounce around off different objects until they hit a light source (or don’t). Each ray and each bounce give valuable information about how the scene should look.
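The fundamental operation in that process is intersecting a ray with scene geometry. Below is a minimal sketch of ray–sphere intersection, the classic quadratic-equation test from textbook ray tracers (Turing’s RT Cores actually accelerate ray–triangle and bounding-volume-hierarchy tests, so this is an illustration of the math, not the hardware):

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance t along a ray to the nearest sphere hit, or None.

    The ray is origin + t * direction; substituting into the sphere
    equation |p - center|^2 = r^2 yields a quadratic in t.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0 else None

# A ray from the camera at the origin, pointed down the z axis,
# toward a unit sphere centered 5 units away: it hits at t = 4.
print(intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

A full ray tracer runs millions of such tests per frame, one or more per pixel plus extra rays for each bounce, which is exactly the workload the RT Cores are built to accelerate.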

Illustration of ray tracing. (Image courtesy of Wikipedia user Henrik.)

Because of its basis in physical reality, ray tracing produces extremely photorealistic images, with shadows and reflections intact. However, because of the high numbers of simulated light rays required to achieve this level of photorealism, ray tracing requires a lot of computation, and a lot of computation requires a lot of time. For this reason, ray tracing is used only in applications that can spare this amount of time. Big budget blockbusters can use ray tracing to render their visual effects, but video games, which need upwards of 30 renders per second, can’t afford to wait around.

This is why NVIDIA is so excited about the Turing architecture and the new RTX graphics cards. With the RT Cores accelerating these ray tracing calculations, the technique can now be performed in real time. It can be used in video games for better quality graphics; it can be used in CAD or architectural renders for quicker, better results; it can enable more photorealistic VR experiences; it can cut down on rendering time for filmmakers—and the list goes on.

Comparison of a screenshot from EA’s Battlefield V with (bottom) and without (top) RTX ray tracing. Note the photorealistic reflections in the RTX-on frame compared to the absence of reflection in the RTX-off frame. (Image courtesy of NVIDIA.)

It should be noted that real-time ray tracing is still in its early stages. While NVIDIA’s put the hardware in place, it’s still up to developers to make use of ray tracing in their applications. Whether in video games, CAD software, rendering applications, or simulation programs, developers will need to update their software to reflect the new possibilities of this technology. Ultimately, this is what will enable engineers, architects, CAD designers and gamers to see the value of real-time ray tracing.

A Deeper Dive into Turing

In this article, we’ve introduced some of the main highlights of the new Turing architecture, including its machine learning enhancements, updated memory, improved efficiency and, of course, real-time ray tracing. But there’s a lot more to be said. If you’re interested in taking a deeper dive into the Turing architecture and the new RTX graphics cards, engineering.com recently published a research report: The New Turing Architecture and RTX Graphics Cards.

The research report covers:

  • The Turing architecture in more detail
  • Turing’s machine learning capabilities and the NVIDIA NGX deep learning framework
  • Turing’s advantages in the AEC and M&E industries
  • Rendering, virtual reality, artificial intelligence and simulation applications using RTX cards
  • Whether it’s worth upgrading to RTX graphics cards

To learn more about the Turing architecture and RTX cards, you can read the report here.
