Hexagon has submitted this post.
Written by: Kambiz Kayvantash, Sr. Director of ML/AI Solutions for Design and Engineering, Manufacturing Intelligence Division, Hexagon
The field of computational mechanics is undergoing a quiet revolution with the arrival of various machine learning (ML) and reduced order modeling (ROM) technologies. Combining data-based and partial differential equation (PDE)-based physics has become possible thanks to the availability of various sensing and imaging technologies as well as computing resources, enabled by data mining and machine learning techniques that have been available since the early 1950s.
These achievements are accompanied by the capabilities of reduced order modeling, which originated in the 1980s and enables data compression and the encapsulation of large discretized PDE models, like finite element analysis (FEA). Finally, the arrival of various domain decomposition and sub-structuring techniques within the FEA community, and the associated multi-processing technology, also contributed greatly to both fields, paving the way for the recent unified approaches to ML that exploit both available data and solver solutions.
This apparent revolution will likely peak in the near future as combined cloud services enable the creation of huge databases of model-based physics, together with numerically performant and computationally efficient solutions that combine ML and ROM.
From the computational point of view, three topics are of major importance:
- How to generate sufficient data to establish models that represent the underlying physics.
- How to identify sub-structures, or more generally how to efficiently decompose models into their components or modes, so that models can be disassembled for efficient learning and their responses later re-assembled with sufficient efficiency and precision.
- How to devise efficient and economical sampling techniques that provide not only the best learning data set, but also the smallest (a minimal sampling sketch follows this list).
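To make the sampling point concrete, here is a minimal sketch of a space-filling design using a Latin hypercube, one common choice for this kind of DOE. It uses scipy.stats.qmc; the parameter count, bounds and run budget are illustrative assumptions, not values from any of the studies below.

```python
# Minimal Latin hypercube DOE sketch; sizes and bounds are illustrative.
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=0)   # 3 hypothetical design parameters
unit_sample = sampler.random(n=20)          # 20 runs in the unit cube [0, 1]^3

# Scale to physical bounds, e.g. panel thickness, material density, impact speed.
l_bounds = [1.0, 900.0, 10.0]
u_bounds = [3.0, 1200.0, 16.0]
doe = qmc.scale(unit_sample, l_bounds, u_bounds)  # one row per FEM run
```

Each row of the resulting design then drives one solver run, so the budget of expensive simulations is fixed up front.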
In this article, we explore the above topics through a study of recent advances reported in the industrial FEA community, covering linear and nonlinear (implicit and explicit) mechanics, and demonstrate how various ML techniques (supervised, unsupervised and reinforcement learning) and ROM techniques (POD, FFT, clustering) can contribute to more efficient computational technology.
How Does ML/ROM Work?
In principle, ML and ROM techniques are interpolation methods that exploit data sets derived from existing virtual or experimental setups. They are essential to the concept of digital twins, since they provide the missing link for both rapid re-design and real-time operational evaluations. While the common starting point is a design of experiments (DOE) with sufficient space-filling properties, ML/ROM techniques differ from response surface methods (RSM).
In RSM, the approximation functions (the surrogate model) typically represent the effects of variations of the variables on a single static scalar function. This approximation also depends on the nature of the fitting function (or surface), or on a prescribed equation, which affects the number of runs required: every assumption about the form of the approximation imposes its own requirements on the size of the sampling data, as the sketch below illustrates.
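As a minimal sketch of the RSM idea, a prescribed quadratic surface can be least-squares fitted to DOE runs; the chosen functional form fixes the number of coefficients, and hence a lower bound on the number of runs. The data here is synthetic and purely illustrative.

```python
# Classic RSM sketch: a fixed-form quadratic surface fitted by least squares.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(15, 2))       # DOE points (x1, x2)
y = 1.0 + 2.0 * X[:, 0] - X[:, 1] ** 2        # stand-in scalar response

# Prescribed quadratic basis: the chosen form dictates how many runs we need.
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # 6 coefficients to identify
```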
In contrast, ROM techniques exploit the known physical behavior represented by the “modal” content, or the discretized PDEs expressed as combinations of “decomposed” series of responses or clusters of responses. These may be combined with various interpolation techniques such as kriging or radial basis functions, or even simple regression, as well as ML prediction algorithms such as deep learning and multilayer perceptrons (MLP). The latter are not necessarily aware of the nature of the problem, relying instead on trial-and-error and reward-distribution strategies.
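On the interpolation side, kriging can be expressed as Gaussian process regression; the snippet below is a minimal sketch assuming scikit-learn is available, with a synthetic stand-in for a solver response.

```python
# Minimal kriging-style (Gaussian process) interpolation over DOE samples.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(10, 2))       # DOE points in 2 parameters
y = np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2      # stand-in solver response

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(X, y)

# Predict an unseen design point, with an uncertainty estimate for free.
mean, std = gp.predict(np.array([[0.4, 0.7]]), return_std=True)
```

Unlike the prescribed quadratic above, no functional form is fixed in advance; the interpolant adapts to the sampled data, which is the practical difference from RSM noted earlier.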
Whatever the nature and formulation of the method being used, ML and ROM provide a series of alternative solutions to costly finite element models (FEM) and their variants. These techniques open completely new frontiers for cost-effective and accelerated design, as well as optimization of products and processes.
Applications
Let’s consider some cases where FEM solutions remain expensive and particularly prohibitive for parametric studies, optimization and any other iterative process. Nearly every single simulation type or PDE of a physical problem may be considered. Among the most common cases we will discuss below are structural non-linear analysis (implicit or explicit) and CFD analysis of the optimal layout of wind turbines, simply because these two simulations are particularly costly to conduct.
FEMs of crash-safety situations involving dummies or human bodies have been available commercially for over two decades, and are now beginning to be employed in practical design applications. Nearly every large displacement or large deformation simulation may be considered, and they provide valuable insight both in terms of responses of human bodies subject to external mechanical loading as well as visualization of the kinematics and contacts during the loading.
Wishing to improve the predictive accuracy of these models, and encouraged by advances in data acquisition solutions, developers are creating finer and finer meshes. This works against robustness and computing performance, and inflates the storage and CPU requirements of these models.
Because of this trend, while highly non-linear analysis models are available for the analysis and design of singled-out reconstructions of simple scenarios, they remain impractical for clustered or population-level statistical studies, which require stochastic models and loading scenarios to be run thousands to millions of times. This is a major constraint shared by studies concerning optimization, robustness and sensitivity, and indeed by any “on-board” modeling, since they all require many repetitions of the simulation configurations or real-time performance. In this respect, reduced order modeling is a clear challenger, and even the winner, for all iterative analyses.
The principle is simple: First, use an FEM sampling technique to create and run a few parametric versions of the initial model. Second, use decomposition techniques to reduce the complexity of the output. Third, use interpolation techniques (ML, RBF, kriging, etc.) to predict the response for new parameter settings, instead of computing it with FEM. Finally, re-compose the “modal” responses to obtain the complete prediction of the original FEM.
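The following is a minimal end-to-end sketch of those four steps, with POD (a truncated SVD of the snapshot matrix) as the decomposition and a radial basis function interpolator for the modal coefficients. The run_fem function is a hypothetical stand-in for a real solver call, and all sizes are illustrative.

```python
# Minimal POD + RBF reduced order model, following the four steps above.
import numpy as np
from scipy.interpolate import RBFInterpolator

def run_fem(p):
    # Hypothetical stand-in for a real FEM run: returns a response field.
    x = np.linspace(0.0, 1.0, 200)
    return np.sin(p[0] * x) * np.exp(-p[1] * x)

# Step 1: sample and run a few parametric versions of the model (DOE).
rng = np.random.default_rng(1)
params = rng.uniform([1.0, 0.1], [5.0, 1.0], size=(20, 2))
snapshots = np.stack([run_fem(p) for p in params])      # (n_runs, n_dof)

# Step 2: decompose the output with POD (truncated SVD of the snapshots).
mean = snapshots.mean(axis=0)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
modes = Vt[:5]                                          # keep 5 modes
coeffs = (snapshots - mean) @ modes.T                   # modal coordinates

# Step 3: interpolate the modal coefficients over the parameter space.
interp = RBFInterpolator(params, coeffs)

# Step 4: re-compose the field for a new parameter set, with no FEM run.
p_new = np.array([[3.2, 0.4]])
field_pred = mean + interp(p_new) @ modes               # predicted field
```

Once built, evaluating such a prediction for a new parameter set takes a fraction of a second, which is what makes the thousands-to-millions of repetitions mentioned above affordable.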
The following examples demonstrate the full advantages of the combined ML/ROM approach for parametric and shape or layout optimization of crash or non-linear analyses.
In the first example, different wheel shapes with different numbers of spokes are tested against a curb impact scenario. Combining images of the wheels and the crash impact curves, we can predict the optimal shape and parameters with very high precision.
The second application concerns a multi-physics application requiring communication of models solved at different spatial and temporal scales via FEM or FVM. A fluid structure interaction problem may help us to describe the advantages.
Reduced models of large, time-dependent and CPU-consuming applications may involve multi-physics, such as the Arbitrary Lagrangian Eulerian (ALE) method for fluid-structure interactions. Such ROMs have the advantage of being re-employed in a “solver independent” environment as sub-parts or components of a system, without the need to be recomputed at every cycle; they can simply be reconstructed from previous results. Computation time is improved, and off-the-shelf models may be optimally exploited in a wide variety of scenarios, including on-board computing. A major advantage of this approach is that a reduced model generated by one finite element code can easily be used with any other commercial code, since the ROMs are interchangeable among solvers and PDEs.
From a mathematical point of view, a reduced model approximates the initial governing equations based on decomposition and sub-structuring techniques. In application, these resemble response surface techniques in that they act as surrogates of the original model. However, contrary to surrogate or response surface models, the results of a reduced model are not only based on an initial “static” grid but are “reconstructed” from time-dependent functions obtained from full resolutions of the PDEs. Simply speaking, instead of solving the original PDE, we solve a reduced or simplified version of it, based on the knowledge obtained from existing previous solutions.
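In symbols, a generic projection-based reduction can be sketched as follows. This is the standard Galerkin form for a structural dynamics system, given as an illustration rather than the specific formulation behind any of the products mentioned here:

```latex
% Approximate the full-order state u(t) \in \mathbb{R}^n in a reduced basis
% \Phi = [\phi_1, \dots, \phi_k] with k \ll n:
u(t) \approx \Phi \, a(t), \qquad \Phi \in \mathbb{R}^{n \times k}
% Galerkin projection of the full system M \ddot{u} + K u = f then yields
\Phi^{\top} M \Phi \, \ddot{a}(t) + \Phi^{\top} K \Phi \, a(t) = \Phi^{\top} f(t)
% which is only k-by-k; the full field is recovered as u = \Phi a.
```

Solving the small reduced system in place of the full one, and reusing the basis across parameter settings, is where the speed-up comes from.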
In the following example, this allows a single ROM to be created and then used repeatedly, sequentially or in parallel. Any arrangement of parametric studies can easily be set up and optimized with respect to the wind turbine design parameters or environmental layouts.
For example, a single wind turbine is sampled below for different parameters, allowing for the creation of its ROM which may subsequently be used for optimal design.
Conclusion
The above examples demonstrate clearly that design and optimization engineers today can employ a new set of tools, halfway between FEM models and traditional tables, allowing real-time or embedded analysis and decision making. Optimization and stochastic analysis of any complex model becomes a reality and may be used systematically to improve designs and to minimize costs and environmental impacts. Finally, ML/ROM models are the core of the “digital twin” concept and its solutions, since they allow for the integration of machine learning and predictive analysis based on both observed and simulated data.
To learn more, visit MSC Software’s website.