How will Autodesk use AI for product design?

A conversation with Stephen Hooper

Stephen Hooper, vice president of Software Development, Design & Manufacturing at Autodesk

Earlier this year, we met with Stephen Hooper, vice president of Software Development, Design & Manufacturing at Autodesk, to discuss how Autodesk will use AI for mechanical design. This interview is part of an ongoing project to determine how all the major CAD and CAE companies will employ AI, if at all. So far, everyone has said they will use AI in some form. More than anyone else at Autodesk, Hooper oversees the road map for Autodesk's product design software, such as Fusion and Inventor.

This follows our interview with Mike Haley, Autodesk’s Senior VP of Research, who is laying the groundwork for AI across all of Autodesk’s industries (design and manufacturing, AEC, and entertainment). See part 1 of our conversation with Mike Haley here.

The following article is based mainly on that conversation.


We found Hooper eager to talk about AI and its implementation in future versions of Autodesk's products, which is unusual: most software companies are reluctant to discuss features of products yet to be released. We attribute this openness to a desire to show that Autodesk is answering the loud call for AI after the hysteria over ChatGPT.

Engineering.com: Would you clarify your role at Autodesk and how you work with Mike Haley?

Stephen Hooper: Mike is working on foundational large language models for the whole company. He's not responsible for product functionality in the different industry verticals. My job is to work with Mike to figure out how he might build generalized services: to help him understand market opportunities from a manufacturing perspective and define them so that we can prototype, test, and then build production-ready code in Fusion on top of his generalized services. Fusion is obviously our tool of choice for manufacturing (for reasons I can explain in a moment). So Mike worked on building a generalized learning model, and my team worked on implementing it on the client side so that it shows up as functionality in the product.

Drawing automation in Fusion automatically creates drawings for each part in an assembly, including all views and much of the annotation. This picture is from an Autodesk video on YouTube.

Engineering.com: So, when we see the next release of Fusion, will we see any AI?

Hooper: We’re introducing what we call drawing automation. It’s a drawings automation service. It will take any assembly, break it down into its parts, then take each one of those parts, create the layouts, scales… It does all the view orientations and annotates all the drawings for you. It’s probably easier to show you than it is to describe. I can give you a rapid overview of it. Here is a base plate from a machine. [Refer to the image above.] Fusion will pick out all the individual parts and create drawings. The one part at an angle [part 4 above] is correctly oriented in the drawing. It does all of this on the cloud, not on the client side. You choose a couple of critical dimensions, and AI will make all the rest. It has scaled and annotated the drawing for you. There are still some ways to go with this, such as geometric tolerancing and dimensioning. But already, it will count the parts [for a parts list]. But rather than jump ahead and do all the dimensions, because, as you know, you can interpret a drawing in different ways, it gives you options. On the side, you see the optional ways it can be dimensioned. You can have some basic dimensioning to fill in the rest yourself. Maybe you like a particular dimensioning style, but you want to reduce the dimensions’ density. You can interact with it. It’s not perfect yet. It does an excellent job of producing all the drawings and laying them out. It’ll do balloons, bills of materials, annotations, baseline dimensioning, ordinate dimensioning… It won’t do things like drawing notes or geometric tolerancing and dimensioning. Those are sophisticated areas. We’re working on them.

One of our next steps is to use a large language model for the notes in the drawing rather than train a specific model on things like annotations.
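As a concrete, and entirely hypothetical, illustration of that step, drafting drawing notes with a licensed general-purpose model could look like the sketch below. The client library, model name, and prompt are our assumptions, following Hooper's later remark about licensing OpenAI rather than building a model for notes.

```python
from openai import OpenAI  # licensed general-purpose LLM

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_drawing_notes(material: str, finish: str, standard: str) -> str:
    """Ask a general LLM for boilerplate drawing notes instead of
    training a purpose-built model on annotations."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any licensed chat model would do
        messages=[
            {"role": "system",
             "content": "You write concise engineering drawing notes."},
            {"role": "user",
             "content": f"Draft standard notes for a machined part: "
                        f"material {material}, finish {finish}, "
                        f"dimensioned per {standard}."},
        ],
    )
    return response.choices[0].message.content

print(draft_drawing_notes("6061-T6 aluminum", "anodized", "ASME Y14.5"))
```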

There’s still work to be done with automated drawing. The areas circled are where leader lines would have been broken. The picture is from an Autodesk video: https://youtu.be/3DkUlbgnw-8?feature=shared.

The other thing we've done is instrument the product. We instrumented it because even if you train a large model on vast amounts of data, you still don't get things perfect; you have to refine the model iteratively. In this example, one of the leader lines crosses another, so where it says 2 x 6 x 20 holes [see circled area in the image above], the leader line crosses the text annotation [marked in red in the image above]. The user would manually drag that dimension after we generate it. Because we've instrumented the product, we will see that the user has made that amendment, and we will retrain the model so the user no longer has to drag the dimension. That will happen over time. This way, the product will get more and more refined as more and more people use it.
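The feedback loop Hooper describes can be reduced to a simple pattern: record each manual correction as an event, then feed the accumulated events back into training. Autodesk has not published how its instrumentation works, so everything in the sketch below (the JSONL log, the field names, the helper function) is our own illustration of the idea.

```python
import json
import time

def record_correction(log_path: str, annotation_id: str,
                      generated_pos: tuple, corrected_pos: tuple) -> None:
    """Append one 'user dragged this annotation' event to a JSONL log.
    Aggregated events become retraining data for the placement model."""
    event = {
        "ts": time.time(),
        "annotation": annotation_id,
        "generated_position": generated_pos,  # where the model put it
        "corrected_position": corrected_pos,  # where the user moved it
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(event) + "\n")

# e.g. the user drags a note whose leader line crossed another one:
record_correction("corrections.jsonl", "note-2x6x20-holes",
                  generated_pos=(120.0, 45.0), corrected_pos=(150.0, 60.0))
```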

Engineering.com: Can you talk more about using customer data? Is that not proprietary?

Hooper: That’s an important aspect to consider. First of all, you’ve got to have access to a lot of unique data. Otherwise, you might as well license something. In the case of drawing notes, there’s no point in building a large language model. We might as well license OpenAI and build on top of it with specific training on technical documentation. But if there’s something specific to your industry, like the actual drawing layouts themselves, then Autodesk is in a perfect position. We have vast internal data on drawing specifications, standard styles, layouts, etc. Plus, we can instrument our product to learn from users’ interactions. That is not proprietary data we are generating. We’re not learning from somebody’s design, which I call horizontal AI. This is vertical AI that helps augment a user’s design, creativity, and innovation. We are very sensitive about data protection.

Let’s use Grammarly as an example. That’s what I call horizontal functionality. Grammarly is not AI-based but updates its algorithms based on your use. No one would care that Grammarly learns from your grammar, spelling, or accuracy. You don’t care about that. Especially as a writer, you would care if Grammarly started producing articles for other people based on your IP and the creative content you’ve created. The same thing is valid here. No one cares if we train a model based on drawing standards because everyone works on drawing standards. It just has productivity benefits without IP issues. This is an excellent area for us to apply machine learning.

Continued in Part 2.