NAFEMS’ top trainer gives advice for all CAE users.
Tony Abbey remembers the struggles of learning simulation analysis back when he started in 1976. That’s one reason why he’s been the training manager at NAFEMS since 2007: it enables him to ease the learning curve for the next generation of simulation experts. But Abbey’s experience doesn’t end with NAFEMS’ in-person and online courses. He’s also worked in the aircraft industry and for simulation software companies such as MSC and NEi Software.
Along the way, Abbey has seen many simulations and has learned a lot about what can go wrong and how to avoid it. Engineering.com sat down with him to learn the top five dos and don’ts he teaches his students. Here is what we learned.
Do understand the fundamental physics of the problem.
Abbey’s first, and perhaps most important, suggestion is to understand the underlying physics of any problem you need to simulate. He says, “Simulation is all about trying to [model] the real world, so you have to have some idea of what’s involved in that real world application. [With] physics, if you don’t have a grasp, it can lead you in the wrong direction.”
For instance, linearity is a simplification often added to simulations to make them easier to run and less computationally expensive. Abbey notes that the world is very non-linear, and assuming otherwise can lead to nonsensical results. If a model is limited to assessing only linear behavior, then it will not correlate with the real world once the system moves past the linear region. When this happens, it’s best to choose a new model.
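To make the point concrete, here is a minimal Python sketch (not Abbey’s, and with assumed material values) showing how a purely linear material model drifts away from a simple bilinear elastic-plastic model once the yield point is passed:

```python
# Illustrative only: compare a linear material model with a simple
# bilinear elastic-plastic one. Modulus, yield and tangent values are
# assumed for a generic steel-like material, not taken from the article.

def linear_stress(strain, modulus_gpa=200.0):
    """Linear model: stress keeps growing in proportion to strain."""
    return modulus_gpa * 1000.0 * strain  # MPa

def bilinear_stress(strain, modulus_gpa=200.0, yield_mpa=250.0, tangent_gpa=2.0):
    """Elastic-plastic model: stiffness drops sharply once yield is passed."""
    elastic_limit = yield_mpa / (modulus_gpa * 1000.0)
    if strain <= elastic_limit:
        return modulus_gpa * 1000.0 * strain
    return yield_mpa + tangent_gpa * 1000.0 * (strain - elastic_limit)

for strain in (0.001, 0.002, 0.005):
    print(f"strain {strain}: linear {linear_stress(strain):.0f} MPa, "
          f"elastic-plastic {bilinear_stress(strain):.0f} MPa")
```

Below yield the two models agree; beyond it, the linear prediction keeps climbing while the elastic-plastic response flattens out, which is exactly the kind of divergence Abbey warns about.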
Abbey isn’t against simplifying models. In fact, it’s often a requirement to make a simulation. “You try to simplify the real world, [because] you can’t take it all on board,” said Abbey. “The level you simplify … due to the scope of the analysis and computational capabilities, will affect [your results]. [You] need to understand the limitations of the squeezed-down view of the world and [the] big problems [that can arise] if you assume the wrong things.”
For example, Abbey once worked with a consultant to model the heat flow of a cooling fan. One individual tried to connect two simulation models into a co-simulation without taking into consideration how the system would work physically or attempting to baseline the results. “It was scary,” Abbey said. “Lots of pretty pictures, but it wasn’t based in reality.”
Do what you can to keep computations to a minimum.
With development cycles shrinking, engineers need to quickly iterate their designs to optimize products in time for launch. This entails simplifying simulations for speed. But Abbey notes that you must simplify knowingly. “You can’t just cut off a chunk of an analysis,” he says. “You break it down to the basic physics. Then you can start to explore [how it can be simplified].”
At the same time, “you don’t want to be the analyst at the meeting saying your model needs two more days to compute when you are asked for answers now,” says Abbey. “If they want answers within two to three days, you can have a [simplified] model to build up information and then run the big, complex model in parallel to get a general sense when asked at the meeting.”
It’s a balancing act. You need to maximize the quality of the results based on the computational resources available and the turnaround time needed for the simulation. In other words, according to Abbey, “if you commit to an overcomplicated model, you can risk the project.”
Abbey suggests that one method of simplifying a simulation is to assess the mesh. To optimize here, guess a mesh density (regionally or globally) based on the available computational resources. Then mesh the model and see how those resources hold up when running the simulation. Refine the mesh density accordingly until you find “your threshold of pain,” he says. “Maybe you want results in a minute for a sanity check and [then] go on to the next design. For analysts, maybe it’s 30 minutes. Determine what was the design you were able to squeeze through that computational resource. Then you can use this as a benchmark based on if you want to get an answer for a given timeframe.”
Abbey clarifies that if you “need a quick assessment you do a quick and dirty model, and then for plasticity and fatigue you will look at a really refined mesh. Do the simplification with the benefit of why you care about the results.”
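A rough way to put this into practice is a small convergence script. The sketch below is illustrative only: run_simulation is a hypothetical stand-in for a call into whatever FEA tool you use, and the element sizes, tolerance and time budget are assumptions to be replaced with your own “threshold of pain.”

```python
import time

# Hypothetical solver call -- swap in your FEA tool's API. Here it just
# fakes a result that converges (and takes longer) as the mesh is refined.
def run_simulation(element_size):
    time.sleep(0.05 / element_size)      # stand-in for solve time
    return 1.0 - 0.5 * element_size      # stand-in for peak deflection

def mesh_convergence_study(element_sizes, time_budget_s, rel_tol=0.02):
    """Refine the mesh until the result stabilizes or runtime exceeds the budget."""
    previous = None
    for size in element_sizes:           # ordered coarsest to finest
        start = time.time()
        result = run_simulation(size)
        elapsed = time.time() - start
        print(f"element size {size:.3f}: result {result:.4f}, solve {elapsed:.2f}s")
        if elapsed > time_budget_s:
            print("Runtime budget exceeded -- this mesh is the 'threshold of pain'.")
            break
        if previous is not None and abs(result - previous) <= rel_tol * abs(previous):
            print("Result changed less than the tolerance -- mesh is fine enough.")
            break
        previous = result
    return size, result

mesh_convergence_study(element_sizes=[0.4, 0.2, 0.1, 0.05], time_budget_s=60.0)
```

In practice each call would launch a real solve, and the printed table becomes the benchmark Abbey describes for trading accuracy against turnaround time.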
Do research the orders of magnitude to expect from inputs and outputs.
When setting up a physics problem it’s important to know and understand the inputs, such as the orientation of planes, levels of damping, forces in play and more. These values should all be based on your understanding of the physics within the problem. For the output, engineers also need ways to verify that the results they get make sense.
“You need to do sanity checks,” says Abbey. “If you [make] an FEA [simulation] you need to check the deflection based on real world [numbers] and engineering judgement. A sense of scale is needed.”
He continues with an example: if a linear stress analysis gives a maximum stress three times the yield, then you either made a wrong input or there is something funny going on. “Drill down and see [if] the numbers look crazy,” he says; that kind of verification must always be performed.
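A check like this can even be scripted. The snippet below is a generic illustration rather than a feature of any particular solver; the three-times-yield threshold follows Abbey’s example, while the other numbers are made up.

```python
# Illustrative sanity check: compare the peak stress from a linear-static
# run against the material's yield strength and flag implausible results.
def sanity_check_stress(max_stress_mpa, yield_strength_mpa, alarm_ratio=3.0):
    ratio = max_stress_mpa / yield_strength_mpa
    if ratio >= alarm_ratio:
        return f"ALARM: peak stress is {ratio:.1f}x yield -- check loads, units and boundary conditions."
    if ratio >= 1.0:
        return f"WARNING: peak stress is {ratio:.1f}x yield -- linear assumptions no longer hold."
    return f"OK: peak stress is {ratio:.2f}x yield."

# Example: a 900 MPa peak against a 300 MPa yield strength (values made up).
print(sanity_check_stress(900.0, 300.0))
```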
To get a sense for the numbers, Abbey notes that it’s important for engineers to understand test and field data. This may require getting to know the test, quality and operations teams to see what numbers they get in the field. This way, “anything that is unusual looking is a red flag,” says Abbey. “You won’t be doing a PhD thesis on everything you simulate. So, [typically] you are not going to be exploring new phenomena. [Coworkers] offer the best experience to head off these issues. So have a coffee or beer with the testers and designers.”
Don’t take results at face value.
Even after understanding the orders of magnitude to expect and getting a sense of the kind of numbers you will see, it’s still not a good idea to take results at face value. The results must still be validated against physical tests to see whether the simulation behaves as expected.
“Analysts should always be skeptical,” says Abbey. “Once you get a warm and fuzzy feeling on the results you get comfortable, but you [should] never get complacent. The results are guilty until proven innocent.”
Abbey notes that he’s seen many instances where the results from the simulations seem fine but then physical tests go badly. “There is always some forensics to find where the assumptions were wrong and that might bite you.”
He explains that we’re all human and we all make mistakes, so lay bare all your calculations until each step is clear and apparent. “Document carefully what you did and your assumptions, then as you lay out that framework go back to [assess] where you went wrong. Keep a logbook of [your steps] and the assumptions you made as memories are fallible. You need to be able to backtrack so some level of note taking is important, [otherwise] it’s hard to debug on the fly.”
It’s also a good idea to run many similar simulations to build confidence in the inputs and the results. For instance, small changes in parameters can have big effects. By running a sensitivity analysis across different variables, you can predict what will go wrong during physical and digital testing.
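As a sketch of what a simple one-at-a-time sensitivity study might look like, the following Python snippet perturbs each input about a baseline and reports how much the output swings; the response function and its parameters are hypothetical placeholders for a real solver call.

```python
# Hypothetical response function -- in practice this would call your solver.
# The toy formula just gives a beam-like response for illustration.
def peak_deflection(load_kn, thickness_mm, modulus_gpa):
    return load_kn / (thickness_mm ** 3 * modulus_gpa) * 1e4

def one_at_a_time_sensitivity(baseline, perturbation=0.10):
    """Perturb each input +/-10% about the baseline and report the output swing."""
    base_out = peak_deflection(**baseline)
    for name, value in baseline.items():
        outs = [peak_deflection(**dict(baseline, **{name: value * f}))
                for f in (1 - perturbation, 1 + perturbation)]
        swing = (max(outs) - min(outs)) / base_out * 100
        print(f"{name}: +/-{perturbation:.0%} input -> {swing:.1f}% output swing")

one_at_a_time_sensitivity({"load_kn": 10.0, "thickness_mm": 5.0, "modulus_gpa": 70.0})
```

Inputs whose small perturbations produce large output swings are the ones to pin down with test or field data before trusting the model.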
Abbey says to verify that the right numbers have been input into the right models with the right materials. Then validate it with data from the real world. “Build up knowledge of the simulation and test perspective together to get a final article.”
Don’t expect others to believe the results without explanation.
Once you’re confident in your results, don’t expect others to immediately follow suit. “They are being sensible,” says Abbey. “They will have a skeptical view if the simulation report is flimsy and only pretty pictures. The report is how you convince yourself and others. If it’s flimsy and has no indication of [the] methodology, verification and validation techniques, then it can be problematic.”
It’s also important to know who the report is aimed at. For instance, an executive summary might go to management, and Abbey notes that this will likely be a bare-bones document indicating whether something passes or fails. Any explanations in that document will need to be packaged in layman’s terms so they make sense to management. However, if the report is going to a certification authority, then you need to thoroughly walk the certifier down the path you took throughout the scope of the analysis.
“Help get them inside your way of thinking,” says Abbey. “A full certification should have appendices and data that you don’t expect everyone to look at, but they can. You want to get the certifier on your side. So, walk them through step by step and explain your justification or where it came from. Lay it out in baby steps.”
For more tips on learning simulation, read: 5 Traps to Avoid When Using Simulation for Product Development.