The AI rendering revolution: “This makes designers cry”

Generative AI rendering is “almost too disruptive” according to the rendering veteran behind Depix Technologies.

When Philip Lunn co-founded Depix Technologies, he had one goal in mind: one-click rendering.

No more tedious setups, fiddling with virtual cameras and specifying shaders. No more setting the scene and getting the lighting just right. No more ray tracing. Just one click, and seconds later, a gorgeous render.

It wouldn’t work with traditional rendering techniques. But with generative AI, Depix claims to have made one-click rendering a reality.


“It’s so easy it’s almost frightening,” Lunn, CEO of Depix Technologies, told Engineering.com.

Here’s a look at the emerging world of AI rendering.

Generative rendering

Some applications of generative AI are obviously in their early stages. AI can write, but not well. It can create 3D models, but they’re painfully rough. AI can run simulations, but only in narrow cases.

Image generation is different. While AI images vary in quality, the best can be incredibly convincing. And even if you can tell that an image is AI-generated, it often doesn’t matter; it does the job well enough. (For a case in point, go to any blog and look at the thumbnail images.)

Given that rendering is nothing more than creating an image, it should come as no surprise that AI is now doing the job. And it’s doing it so well that Lunn, who’s been in the rendering software industry since 1997, calls it “revolutionary rendering.”

After seeing a demo of the process, it’s hard to disagree.

You don’t even need a CAD model to create a fantastic render: a screenshot of a CAD model will do the trick. So will a pencil sketch. Depix has developed an AI image generator specialized in product renders, with an image of your design as the prompt.

Take a look. The image pair below is from CADviz, one of Depix’s products, which takes a CAD screenshot (Solidworks, in this case, but it could be any CAD program) and outputs a product render:

Left: The original CAD screenshot. Right: The CADviz rendering. (Image: Depix Technologies.)

The user doesn’t have to specify any details about the render—no materials, no scene info, nothing. CADviz takes about 8 seconds to generate a render from a CAD screenshot.

Here are a few more examples using screenshots from Solidworks, SketchUp and Revit, respectively:

Examples of CADviz taking CAD screenshots (left) and rendering them (right). (Images: Depix Technologies.)

You can try CADviz yourself—it’s free, but limited. A paid version with more features will be available soon, according to Lunn, allowing high-resolution renders with control over the output.

That’s just the first turn of the rendering revolution.

How to make designers cry

Another Depix product, SceneShift, is “blowing up” in preview, according to Lunn. “Everybody wants it,” he says.

SceneShift merges text prompting with the crucial constraint of product rendering: shape preservation. Give it a product image, plus a text prompt for a new background, wait a few seconds, and you have a brand new render: new scene, new lighting, same product. Lunn boasts that the results are “better than any CG expert.”

Here’s an example of a SceneShifted car:

Examples of SceneShift creating new versions of the same product render based on different text prompts. (Images: Depix Technologies.)
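Depix hasn't published how SceneShift holds a product's geometry fixed while swapping out the scene. In open-source diffusion tooling, though, the closest analogue is edge-conditioned generation with ControlNet: edges extracted from the product image constrain the shape while a text prompt re-describes the scene and lighting. The sketch below is purely illustrative of that general technique, not Depix's implementation; the model checkpoints, file names and prompt are placeholder assumptions.

```python
# Illustrative shape-preserving re-render with ControlNet (open-source analogue,
# not Depix's SceneShift). Canny edges from the product image constrain geometry
# while the text prompt changes the scene.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

product = load_image("car_render.png")                 # placeholder input image
edges = cv2.Canny(np.array(product), 100, 200)         # keep the silhouette
edge_map = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

new_scene = pipe(
    prompt="the same car parked on a coastal road at sunset, cinematic lighting",
    image=edge_map,            # geometry constraint from the original render
    num_inference_steps=30,
).images[0]
new_scene.save("scene_shifted.png")
```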

Then there’s StyleDrive, another Depix tool that’s struck a chord with early testers. Or, as Lunn puts it: “This makes designers cry.”

In StyleDrive, users upload two images: a base image and a style image. The style image drives a change to the base image. For example:

In StyleDrive, the style image drives a change to the base image. (Image: Depix.)

There are no restrictions on what either image can be. A designer could sketch a car and apply the style of a diamond, as in this example:

StyleDrive takes an input base image (the sketch on the left) and applies a style image (the diamond on bottom) to generate a render (right). (Image: Depix Technologies.)

In the time it takes a designer to scribble on a napkin, plus 8 seconds, StyleDrive generates a completed render. Even if it’s not strictly photorealistic, Lunn views StyleDrive as a conceptualizing superpower.
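Depix hasn't said how StyleDrive blends the two inputs. In open-source terms, the nearest analogue is an image-prompt adapter such as IP-Adapter, which conditions generation on a reference style image alongside a base image. The sketch below shows that general workflow with the diffusers library; it is an assumption-laden illustration, not Depix's code, and the checkpoint names, file names and scale values are placeholders.

```python
# Illustrative base-image + style-image generation using IP-Adapter with diffusers.
# An open-source analogue to the StyleDrive workflow, not Depix's implementation.
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
                     weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.7)          # how strongly the style image drives the result

base = load_image("car_sketch.png")     # the designer's sketch (placeholder)
style = load_image("diamond.jpg")       # the style reference (placeholder)

result = pipe(
    prompt="concept car, studio render",
    image=base,                 # base image to be restyled
    ip_adapter_image=style,     # style image
    strength=0.55,              # how far the output may drift from the base
).images[0]
result.save("styled_render.png")
```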

“It can just spit out infinite variations,” Lunn said. “It’s a little disturbing, but exciting at the same time.”

Some already are. Lunn says Depix has a number of paying customers in the automotive industry (though he declined to name them, out of respect for their privacy).

How to try AI rendering

Though Depix initially focused on the automotive industry, its technology is not limited to vehicles. Depix's AI model is based on Stable Diffusion, an open-source model trained on a dataset of some 5 billion images, so it works with many subjects.
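Because the underlying model is Stable Diffusion, the basic ingredient is image-conditioned generation: an input image (a CAD screenshot, sketch or render) steers the output alongside a text prompt. The following is a minimal sketch of that generic image-to-image technique using the open-source diffusers library; it is not Depix's pipeline, and the model ID, file names and prompt are placeholder assumptions.

```python
# Minimal image-to-image sketch with Stable Diffusion via diffusers.
# Generic technique only; not Depix's CADviz pipeline.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

cad_screenshot = load_image("cad_screenshot.png").resize((768, 512))

render = pipe(
    prompt="studio product render, brushed aluminum, soft lighting",
    image=cad_screenshot,      # the CAD screenshot conditions the output
    strength=0.6,              # how far the model may drift from the input
    guidance_scale=7.5,
).images[0]
render.save("render.png")
```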

Here’s an example of SceneShift applied to a fashion model:

(Image: Depix Technologies.)

Lunn says users will be able to “personalize” the rendering AI with custom training data, up to 1,000 images. These could include sketches, CAD models, renders or any other images that tune the model in the desired direction. Users will also be able to specify the weight (or “influence”) of this training set, and the difference in output will be appreciable, according to Lunn.
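Depix hasn't described the mechanism behind this personalization. In open-source diffusion stacks, the usual approach is to fine-tune a lightweight adapter (a LoRA) on the user's small image set and then dial its influence up or down at generation time. The sketch below illustrates only that loading-and-weighting step with diffusers, under the assumption that such an adapter has already been trained; the directory and file names are placeholders, and none of this is Depix's API.

```python
# Illustrative loading of a custom LoRA adapter and setting its influence,
# a common open-source mechanism for personalization (not Depix's API).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Placeholder path to an adapter fine-tuned on the user's own sketches/renders.
pipe.load_lora_weights("path/to/lora_dir", weight_name="my_design_language.safetensors")

image = pipe(
    "sports car concept, studio lighting",
    cross_attention_kwargs={"scale": 0.8},  # weight of the personalized adapter
).images[0]
image.save("personalized_render.png")
```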

Depix’s AI tools are available as individual APIs and together in an interface called Depix Design Lab. Right now, anybody can test the technology on Depix’s Discord server (warning: it’s addictive).

Someday Lunn hopes to see Depix tools integrated into professional software such as CAD and PLM. He says the company is having discussions with several developers, noting specifically that his team is working on a plug-in for Autodesk VRED.

“We’re taking the rendering world by storm,” Lunn says.

The inhabitants of that world may want to reach for an umbrella—or perhaps a lifeboat.

Written by

Michael Alba

Michael is a senior editor at engineering.com. He covers computer hardware, design software, electronics, and more. Michael holds a degree in Engineering Physics from the University of Alberta.