Emerging AI features in electronic design automation tools are freeing engineers from tedious tasks and enabling greater creativity and productivity.
As artificial intelligence (AI) becomes more powerful and mainstream, many professionals are concerned that they’ll be replaced by bots. Engineers are not in danger of such a fate, at least for now. Instead, many are embracing the new technology for what it is: a tool to enhance human capability.
According to Amit Gupta, VP and GM of Siemens Digital Industries Software’s custom IC verification division, integrating AI into electronic design automation (EDA) tools can free engineers from tedious tasks such as layout, optimization and simulation, and allow them to apply their ingenuity toward solving more advanced problems. Here’s a look at some of these groundbreaking tools and their features—and whether engineers can trust them.
AI in custom IC design
The Solido Design Environment, Siemens’ EDA tool for custom ICs, uses AI to perform “number crunching on steroids” that reduces the number of simulations needed to verify a design and optimize the chip’s power, performance, layout and yield. Solido can run simulations, view waveforms and perform regression analysis on analog and digital custom IC designs.

The sheer volume of variables—temperatures, voltages, input signals, clock speeds, load conditions, aging, etc.—turns testing and verification into a nightmare of permutations and combinations. A brute-force approach could require tens of billions of simulations, especially when shooting for six-sigma standards. Solido's adaptive AI uses machine learning (ML) to learn from past runs, allowing it to perform the same analysis with only thousands of simulations and drastically cutting computation time. To do so, however, it must ensure that its information is valid.
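Siemens does not publish Solido's algorithms, but the general idea behind surrogate-assisted verification can be sketched in a few lines of Python. In this toy example, a stand-in run_spice() function represents an expensive circuit simulation, and a generic scikit-learn model screens a large candidate set so that only the samples predicted to sit near a spec limit are actually simulated. Every name and number here (the function, the sample counts, the spec limit) is illustrative, not Solido's implementation.

```python
# Illustrative sketch only: a generic surrogate-assisted sampling loop, not Solido's
# proprietary algorithm. run_spice() is a placeholder for a real circuit simulation.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def run_spice(x):
    """Placeholder for an expensive simulation that returns a performance metric."""
    return 1.0 + 0.3 * x[0] - 0.2 * x[1] ** 2 + 0.05 * rng.normal()

# Candidate process/voltage/temperature samples we would like to verify.
candidates = rng.normal(size=(100_000, 4))

# 1) Simulate only a small seed batch.
seed_idx = rng.choice(len(candidates), size=200, replace=False)
X_seed = candidates[seed_idx]
y_seed = np.array([run_spice(x) for x in X_seed])

# 2) Train a surrogate model on the seed results.
surrogate = GradientBoostingRegressor().fit(X_seed, y_seed)

# 3) Let the surrogate screen the full candidate set and keep only the samples
#    predicted to lie closest to a (hypothetical) spec limit.
pred = surrogate.predict(candidates)
spec_limit = 0.2
risky = candidates[np.argsort(np.abs(pred - spec_limit))[:500]]

# 4) Spend the real simulation budget on the risky tail instead of all 100,000 points.
y_risky = np.array([run_spice(x) for x in risky])
print(f"Ran {len(X_seed) + len(risky)} simulations instead of {len(candidates)}")
```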
If you’ve ever used ChatGPT, you may have noticed that its output is reminiscent of a mediocre college freshman English paper. Clearly, it’s not a creative entity, but a glorified mimic. It’s great at sifting through tons of information and summarizing what it finds, but it’s not intelligent enough to evaluate the data or produce original thoughts. ChatGPT’s “database” is the World Wide Web, which contains information with varying degrees of veracity. The tool cannot distinguish between truth and falsehoods, so it simply goes by popularity. In the case of AI-based EDA, however, the database is limited to verified data, so its AI models are more trustworthy.
Solido takes the same component models and input data that an engineer would enter into a typical simulation tool. From that data, it builds a custom machine learning model that determines the optimal sampling and testing conditions for evaluating the robustness of the chip design. The tool's regression analysis feature helps identify the most pertinent variables, narrowing down the number of samples and tests. These custom AI models, which may include proprietary design information, are stored inside the tool on the customer's side, so confidential information remains secure and the model can be incorporated into the same company's future designs. Siemens says this could reduce future verification times by another one to two orders of magnitude.
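As a rough illustration of that kind of variable screening (again, a generic pattern rather than Solido's actual code), one could rank input variables by how strongly they drive a simulated metric using an ordinary regression fit. The variable names and synthetic data below are invented for the example.

```python
# Generic regression-based variable screening: rank inputs by how strongly they
# drive the simulated metric. Synthetic data for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
names = ["temperature", "vdd", "clock_skew", "load_cap"]

X = rng.normal(size=(300, 4))                                          # sampled conditions
y = 2.0 * X[:, 0] - 0.1 * X[:, 2] + rng.normal(scale=0.05, size=300)   # synthetic metric

model = LinearRegression().fit(X, y)
ranking = sorted(zip(names, np.abs(model.coef_)), key=lambda t: -t[1])
for name, weight in ranking:
    print(f"{name:12s} sensitivity = {weight:.2f}")
```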
Some of the data may even be related to the manufacturing foundry that produces the chips. Since new ICs take years to design, early simulation models would normally be based on the foundry’s existing manufacturing methods. With Solido, a foundry can send the designer a pre-release copy of its process design kit (PDK), which models the manufacturing process. As the process evolves, the updated PDK is sent to the customer to incorporate into its design and verification model. Each iteration takes less time than the previous, because the tool is learning from prior runs.
Synopsys also offers an AI-enhanced EDA suite for IC design. The company says that Samsung was the first to use Synopsys.ai to design a new mobile phone chip using an advanced process node. SK Hynix, a maker of computer memory, employed the design suite's optimizing tool to reduce the footprint of a flash memory chip by 5%. That may seem like a small improvement, but the larger the IC's area, the more likely that a defect in the wafer can render the chip unusable. Packing 5% more chips onto a single wafer increases the yield. Given the number of custom chips manufactured every year, even a slight reduction in footprint can result in significant cost savings.
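The arithmetic behind that claim is easy to sanity-check with the classic Poisson defect-density yield model, yield = exp(-defect density x die area). The defect density, die size and wafer size below are made-up illustrative numbers, not figures from Samsung or SK Hynix.

```python
# Back-of-the-envelope illustration (assumed numbers, not vendor data): how a 5%
# die-area reduction affects good dies per wafer under the simple Poisson model
# yield = exp(-defect_density * die_area).
import math

defect_density = 0.1                     # defects per cm^2 (assumed)
die_area = 1.0                           # cm^2 (assumed)
wafer_area = math.pi * (30.0 / 2) ** 2   # 300 mm wafer, ignoring edge losses

for area in (die_area, 0.95 * die_area):
    dies = wafer_area / area
    yield_fraction = math.exp(-defect_density * area)
    good_dies = dies * yield_fraction
    print(f"area {area:.2f} cm^2 -> {dies:.0f} dies, {yield_fraction:.1%} yield, "
          f"{good_dies:.0f} good dies per wafer")
```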
Synopsys trains its AI with reinforcement learning, in which the AI performs a trial and error process and receives positive or negative feedback based on how well it meets engineering specifications such as power, performance and footprint. One benefit of reinforcement learning is its ability to train the bot with a limited amount of data. This allows the client company to keep its proprietary information in house rather than sharing it with Synopsys. But if a company is willing to share its design information for training purposes, Synopsys.ai, like Solido, can be trained using prior chip designs, allowing it to draw on the past experience of the company's engineers.
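Synopsys does not publish its reward function, but a heavily simplified sketch shows the shape of the feedback signal such an agent might be trained on: positive credit for each spec that is met and a penalty that grows with the size of any violation. The metric names and limits below are invented for the example.

```python
# Simplified sketch of the kind of reward signal a reinforcement-learning EDA agent
# might receive (illustrative only; not Synopsys' actual reward function).
def reward(power_mw, delay_ns, area_mm2, spec):
    """Positive feedback for meeting specs, scaled penalty for missing them."""
    score = 0.0
    for value, limit in ((power_mw, spec["max_power_mw"]),
                         (delay_ns, spec["max_delay_ns"]),
                         (area_mm2, spec["max_area_mm2"])):
        if value <= limit:
            score += 1.0                      # spec met
        else:
            score -= (value - limit) / limit  # penalty grows with the violation
    return score

spec = {"max_power_mw": 120.0, "max_delay_ns": 2.0, "max_area_mm2": 4.0}
print(reward(110.0, 1.8, 4.3, spec))  # two specs met, small area violation
```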
AnsysGPT is another tool that enhances an IC designer's capabilities by handling repetitive tasks and shortening the design and verification process. AnsysGPT allows an engineer to bounce design ideas off of an "AI copilot" and set up a simulation using an AI chatbot, both through conversational language. The company says that its AI-augmented simulation process reduces simulation time by a factor of 100 through data-driven training and physics-informed ML, in which trainers provide targeted data based on a physical model. Doing so reduces the amount of training data, speeding up both the training process and the actual simulations.
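Ansys has not detailed its implementation, but the general physics-informed ML pattern is to combine a data-fitting loss with a penalty for violating the governing equation, so the model needs far fewer training samples. The sketch below assumes PyTorch and an RC-discharge circuit with made-up component values; it shows the pattern in miniature, not Ansys' code.

```python
# Minimal illustration of a physics-informed loss (generic pattern, not Ansys's
# implementation): fit a network to sparse data while penalizing violations of the
# governing equation dv/dt = -v / (R*C) for an RC discharge. Values are assumed.
import torch

R, C = 1e3, 1e-6   # assumed circuit values (1 kOhm, 1 uF)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

t_data = torch.tensor([[0.0], [2e-3]])    # two measured time points (seconds)
v_data = torch.tensor([[1.0], [0.135]])   # measured voltages (volts)

for step in range(2000):
    # Data term: match the few measurements we have.
    loss_data = torch.mean((net(t_data) - v_data) ** 2)

    # Physics term: the network should also satisfy the ODE at random collocation points.
    t_col = (torch.rand(64, 1) * 5e-3).requires_grad_(True)
    v = net(t_col)
    dv_dt = torch.autograd.grad(v.sum(), t_col, create_graph=True)[0]
    loss_phys = torch.mean((dv_dt + v / (R * C)) ** 2)

    opt.zero_grad()
    (loss_data + 1e-6 * loss_phys).backward()   # weighting factor is an assumption
    opt.step()

print(float(loss_data))
```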
In the same way that ChatGPT scours the internet for examples, AnsysGPT trains by poring over the Ansys knowledge base to tap into the experience of multiple engineers. The “lessons” come from a variety of Ansys sources, including its courses for engineers, technical documents, blog posts and training videos. It does not use customer inputs, in order to protect the clients’ proprietary data.
From chips to systems
Moving away from IC design and into system-level design, engineers could soon be using SnapMagic Copilot to specify a design using natural language, much like typing a query into ChatGPT. For example, an engineer might type, “Design a first-order active high-pass filter with a 10 kHz cutoff frequency and a roll-off of 20 dB/decade.” The tool will then draw a schematic and provide a bill of materials (BOM).
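The math behind that example prompt is the textbook first-order cutoff relation f_c = 1/(2πRC), which an engineer could use to sanity-check whatever schematic the tool produces. The component values below are illustrative picks, not SnapMagic Copilot output.

```python
# Quick sanity check of the example prompt using the textbook first-order relation
# f_c = 1 / (2 * pi * R * C). Component values are illustrative only.
import math

f_c = 10e3   # desired cutoff frequency, Hz
C = 10e-9    # pick a standard capacitor value: 10 nF

R = 1 / (2 * math.pi * f_c * C)
print(f"R = {R:.0f} ohms")   # about 1.59 kOhm; nearest standard value is 1.6 kOhm

# Verify the cutoff with the chosen values.
print(f"f_c = {1 / (2 * math.pi * R * C):.0f} Hz")
```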

SnapMagic Copilot can also optimize the BOM and even determine whether a component will be difficult to obtain due to supply chain issues. If so, it may recommend alternative components. As exciting as that sounds, there’s no word yet on when SnapMagic Copilot will be available.
Making the models
AI-based EDA not only uses component models in its calculations, but now it’s even creating those models. Ultra Librarian, a CAD library of component symbols, footprints and 3D models, is developing a way to automatically build component models from data sheets. In the past, this was done manually, creating about 35 components a day. By employing AI, the company now builds over a hundred parts each day. Engineering.com spoke with Frank Frank, Chief Architect for Ultra Librarian, who explained how the process works.
With a library of more than 16 million parts, Frank says Ultra Librarian is the largest CAD library available. That library serves as training data for the AI. Part of the training involves having the AI build a part that’s already in the catalog and comparing the two results.
The component model builder can extract information directly from a PDF data sheet, including the pin layout, footprint and electrical properties. Initial testing shows that the tool can build a complete and accurate model about 70% of the time. In some cases, it can’t figure out the footprint, so it simply provides the pin information. In other cases, it fails in more than one way. Ultra Librarian expects that beta testing—which is scheduled for early 2024—will reveal some unanticipated failure modes for the company to address.
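Ultra Librarian has not published how its builder parses data sheets, but a toy version of the general approach (pull tables out of the PDF and look for one that resembles a pin table) might look like the following, using the pdfplumber library and a hypothetical file name.

```python
# Toy sketch of datasheet table extraction. This only shows the general idea of
# pulling a pin table from a PDF; it is not Ultra Librarian's method. The file
# name is hypothetical.
import pdfplumber

pins = {}
with pdfplumber.open("example_datasheet.pdf") as pdf:
    for page in pdf.pages:
        for table in page.extract_tables():
            header = [str(cell).strip().lower() for cell in table[0]]
            # Heuristic: a pin table usually has "pin" and "name"/"function" columns.
            if "pin" in header and any(h in header for h in ("name", "function")):
                for row in table[1:]:
                    if len(row) >= 2 and row[0]:
                        pins[row[0]] = row[1]

print(f"Extracted {len(pins)} pins")
```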

Frank said that the ultimate goal is for the end user to bypass the library and database and use the component builder directly, via the client’s own data sheets. Upon completion, the component would become part of the Ultra Librarian catalog.
Can engineers trust AI?
As the old saying goes, “Garbage in equals garbage out.” So how can engineers be confident that the component models are accurate?
Heather Gorr, senior product marketing manager for MATLAB at MathWorks, says the company's models are built and checked by engineers, data scientists and other relevant subject-matter experts. One way they verify a model is to track how the AI made its decisions during the training process, using a classification tree. It's the equivalent of asking a student to show their work in order to determine whether the process was valid and, if not, where the learner—student or AI—went off course. MathWorks also employs a variety of established explainability methods, including gradient-weighted class activation mapping (Grad-CAM), occlusion sensitivity and local interpretable model-agnostic explanations (LIME). Three different methods of verification provide triangulation, which increases confidence in the result. After training, AI experts look at which variables were most influential in the decision. This technique is often used in image recognition to see which parts of the image were the deciding factors.
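Occlusion sensitivity, one of the methods Gorr names, is easy to illustrate: mask one patch of the input image at a time and record how much the model's confidence in its prediction drops. The bare-bones sketch below uses a placeholder predict_prob() function in place of a real trained classifier and is a generic illustration, not MathWorks' code.

```python
# Bare-bones occlusion-sensitivity sketch (generic technique): slide a grey patch
# over the image and record how much the model's confidence in the predicted class
# drops. predict_prob() is a stand-in for a real classifier.
import numpy as np

def predict_prob(image):
    """Placeholder for a trained classifier's probability for the target class."""
    return float(image.mean())   # placeholder only

def occlusion_map(image, patch=8, fill=0.5):
    h, w = image.shape
    baseline = predict_prob(image)
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill
            # A large drop in confidence means this region mattered to the decision.
            heatmap[i // patch, j // patch] = baseline - predict_prob(occluded)
    return heatmap

image = np.random.rand(64, 64)
print(occlusion_map(image).shape)   # (8, 8) grid of importance scores
```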
Documentation: Every engineer knows it’s necessary, but very few want to do it (in fairness to engineers, getting product out the door is usually prioritized by management, leaving limited time to document one’s work). But when training an AI, detailed and continuous documentation—testing procedures, independent and dependent variables, conditions, training data, etc.—is essential.
Finally, there’s not always an abundance of real data on failure modes, so a lot of simulation is used to teach the model about failure conditions.
Don’t fear the AI
Rather than fearing the new technology, engineers would be wise to view it as a partner, not a competitor. As tools become more powerful, they inevitably eliminate some jobs while generating opportunities in other professions. AI-based EDA offers electrical engineers the opportunity to work on high-level designs without being bogged down by minutiae, in much the same way that high-level programming languages allowed software engineers to work on larger projects without having to write in assembly language.
Besides, when AI achieves sentience and inevitably becomes power-hungry, only its creators—i.e., engineers—will have the knowledge and skills to take it down. Just kidding. (I hope.)