SmartUQ: Uncertainty Quantification for more realistic engineering and systems analysis

Bruce Jenkins, Ora Research

SmartUQ is a software tool for uncertainty quantification (UQ) and engineering analytics that heightens the fidelity of engineering and systems analysis by accounting for real-world variability and probabilistic behavior.

UQ is the science of quantifying, characterizing, tracing and managing uncertainty in both computational and real-world systems. UQ seeks to address the problems associated with incorporating real-world variability and probabilistic behavior into engineering and systems analysis. Nominal—that is, idealized—as opposed to real-world simulations and tests answer the question: What will happen when the system is subjected to a single set of inputs? UQ moves this question into the real world by asking: What is likely to happen when the system is subjected to a range of uncertain and variable inputs?

UQ got its start at the intersection of mathematics, statistics and engineering. Drawing together knowledge from each of those fields has yielded a family of system-agnostic capabilities that require no knowledge of the inner workings of a system under study to make predictions about its likely behavior. A key strength of UQ methods is that they require information only about the system’s input/output response behavior. Thus, a method that works on an engineering system may be equally applicable to a financial problem that exhibits similar behavior. This makes it possible for many different industries to benefit from advances in UQ.

Why UQ?

Uncertainty is part of every system. It can arise from variations in measurement accuracies, material properties, use scenarios, modeling approximations and unknown future events. Uncertainty in model boundary conditions, initial conditions and parameters adds to the challenge of determining whether a design meets all its requirements and whether it is optimal.

Sources of uncertainty. Source: SmartUQ

Most simulations are deterministic: a given set of model inputs produces a single set of simulation responses. Often, the engineering design effort attempts to account for uncertainties indirectly, by using extreme model initial or boundary conditions and/or material properties. Simulation results obtained from these input conditions are then compared with criteria derived from a legacy of physical test data.

However, using extreme model conditions in this way may fail to model reality with fidelity, and can easily overlook various sources of uncertainty. Moreover, when simulation uncertainties are not accounted for, the appropriate next steps can be difficult to determine, as there may be numerous reasons for a lack of agreement between simulation results and legacy test-based criteria.

UQ: Probabilistic, not deterministic

In contrast to that deterministic approach, UQ is a probabilistic approach that systematically accounts for sources of simulation uncertainties. That approach makes it possible to devise corrective actions when simulation results don’t agree with physical test data.

UQ methodology for statistical calibration. Source: SmartUQ

UQ methods are rapidly being adopted by engineers and modeling professionals across a wide range of industries because they can solve previously unanswerable questions. UQ methods make it possible to:

  • Understand the uncertainties inherent in almost all systems.
  • Predict system responses across uncertain inputs and quantify the confidence in the predictions.
  • Find optimized design solutions that are stable across a wide range of inputs.
  • Reduce development schedules, physical prototyping costs and unexpected product failures in use.
  • Implement probabilistic design processes.

Why now?

As computational resources have become dramatically more available and affordable, and simulation and testing have grown increasingly sophisticated and revealing, it has become feasible to accurately predict the behavior of more and more real-world system designs. Today, the frontier of engineering design has advanced to rapidly predicting the behaviors of systems subjected to uncertain inputs. Traditional UQ methods such as Monte Carlo require generating and evaluating large numbers of system variations, making them computationally too expensive for large-scale problems. More recent methods, such as those incorporated in SmartUQ, have made UQ easier to apply to small system designs, and more feasible and affordable to use on large ones. “There’s never been a better time to start including uncertainty in your engineering process,” the company observes.
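The Monte Carlo approach mentioned above can be sketched in a few lines: draw random samples of the uncertain inputs, run the model on each, and summarize the spread of the outputs. The cantilever-deflection model, load statistics, and material values below are purely illustrative assumptions, not taken from the article.

```python
import random
import statistics

def deflection(force, modulus):
    # Hypothetical cantilever tip deflection: F*L^3 / (3*E*I),
    # with length and second moment of area held fixed.
    length, inertia = 2.0, 8.0e-6  # m, m^4 (illustrative values)
    return force * length**3 / (3 * modulus * inertia)

random.seed(0)
N = 10_000  # Monte Carlo sample size
samples = []
for _ in range(N):
    force = random.gauss(1000.0, 50.0)   # load: mean 1 kN, 5% std dev
    modulus = random.gauss(200e9, 10e9)  # modulus: mean 200 GPa, 5% std dev
    samples.append(deflection(force, modulus))

mean = statistics.mean(samples)
std = statistics.stdev(samples)
print(f"mean deflection: {mean*1000:.2f} mm, std dev: {std*1000:.2f} mm")
```

Each model evaluation here is trivially cheap; the computational burden the article describes arises when each sample requires a full finite-element or CFD run.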

Sources and types of uncertainty

Uncertainty is an inherent part of the real world, SmartUQ notes. No two physical experiments ever produce the exact same output values, and many relevant inputs may be unknown or unmeasurable. Uncertainty affects almost all aspects of engineering modeling and design. Engineers have long dealt with measurement errors, uncertain material properties and unknown design demand profiles by including safety factors and extensively testing design prototypes. But deeper understanding and quantification of the sources of uncertainty will yield step-function gains in fidelity and quantified confidence of decision-making.

Uncertainties are broadly classified into two categories: aleatoric and epistemic.

  • Aleatoric uncertainty is uncertainty that cannot be reduced by collecting more information. It may thus be considered inherent in a system, and parameters with aleatoric uncertainty are best represented using probability distributions. Examples are the results of rolling dice or radioactive decay.
  • Epistemic uncertainty is uncertainty resulting from lack of information that could theoretically become known, but that is not currently accessible. Thus, epistemic uncertainty could conceivably be reduced by gathering the right information, but often is not because of the expense or difficulty of doing so. Examples include batch material properties, manufactured dimensions and load profiles.
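The distinction can be illustrated numerically: repeated measurements of a fixed but unknown quantity carry irreducible aleatoric scatter, while the epistemic uncertainty about the true value shrinks as more measurements are collected. All numbers below are illustrative assumptions.

```python
import random
import statistics

random.seed(1)
true_length = 50.0   # epistemic: fixed, but unknown to the experimenter (mm)
noise_sd = 0.5       # aleatoric: irreducible measurement scatter (mm)

def measure(n):
    """Take n noisy measurements; return (sample mean, standard error)."""
    data = [random.gauss(true_length, noise_sd) for _ in range(n)]
    mean = statistics.mean(data)
    sem = statistics.stdev(data) / n**0.5  # epistemic uncertainty of the mean
    return mean, sem

for n in (10, 1000):
    mean, sem = measure(n)
    print(f"n={n:4d}: estimate {mean:.3f} mm, epistemic +/- {sem:.3f} mm; "
          f"aleatoric scatter per measurement stays ~{noise_sd} mm")
```

Gathering more data narrows the uncertainty about the mean, but no amount of data removes the scatter of an individual measurement.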

Common uncertainty sources in simulation and testing


Uncertainties in simulation and testing appear in boundary conditions, initial conditions, system parameters, and in the system’s models and calculations themselves. They fall into four categories:

  • Uncertain inputs.
  • Model form and parameter uncertainty.
  • Computational and numerical uncertainty.
  • Physical testing uncertainty.

Uncertain inputs—Any system input including initial conditions, boundary conditions, and transient forcing functions may be subject to uncertainty. These inputs may vary in large, recordable, but unknown ways. This is often the case with operating conditions, design geometries and configurations, loading profiles, weather, and human operator inputs. Uncertain inputs may also be theoretically constant or follow known relationships but have some inherent uncertainty. This is often the case with measured inputs, manufacturing tolerances and material property variations.

Model form and parameter uncertainty—Every model is an approximation of reality. Modeling uncertainty is the result of assumptions, approximations and errors made when creating the model. This can be further broken down into model form uncertainty—uncertainty about the model’s ability to capture the relevant system behaviors—and uncertainty about parameters within the model.

Using gravity as an example, the Newtonian model of gravity had errors in the model form that were corrected by general relativity. Thus, there is model form uncertainty in the predictions made using the Newtonian model of gravity. In addition, the parameters of both these models, such as gravitational acceleration, are subject to uncertainty and error. This uncertainty is often the result of errors in measurements or estimations of physical properties and can be reduced by using calibration to adjust the relevant parameters as more information becomes available.
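Calibration of the kind described here can be sketched with a toy example: recovering gravitational acceleration from noisy drop-time data by minimizing the squared error between model predictions and measurements. The synthetic data, noise level, and brute-force search are illustrative choices, not SmartUQ's method.

```python
import random
import math

random.seed(2)
g_true = 9.81  # the "unknown" parameter we will calibrate

# Synthetic test data: drop times from several heights, with timing noise.
heights = [1.0, 2.0, 5.0, 10.0, 20.0]  # m
times = [math.sqrt(2*h/g_true) + random.gauss(0, 0.01) for h in heights]

def sse(g):
    """Sum of squared errors between modeled and observed drop times."""
    return sum((t - math.sqrt(2*h/g))**2 for h, t in zip(heights, times))

# Calibrate by brute-force 1-D search over candidate g values.
candidates = [9.0 + 0.001*i for i in range(2001)]  # 9.000 .. 11.000
g_cal = min(candidates, key=sse)
print(f"calibrated g = {g_cal:.3f} m/s^2 (true value {g_true})")
```

As more (or less noisy) data become available, the calibrated estimate tightens around the true value, reducing the parameter uncertainty.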

Computational and numerical uncertainty—To run simulations and solve many mathematical models, it is necessary to simplify or approximate the underlying equations, and this introduces computational errors such as truncation and convergence error. For the same system and model, these errors can vary among different numerical solvers, and are dependent on the approximations and settings used for each solver. Further numerical errors are introduced by the limitations of machine precision and rounding errors inherent in digital systems.
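A small example makes the trade-off concrete: in a forward-difference derivative, truncation error shrinks as the step size h shrinks, until machine rounding error takes over and the approximation degrades again. (A minimal sketch; the function and step sizes are arbitrary choices.)

```python
import math

x = 1.0
exact = math.cos(x)  # true derivative of sin at x

def fwd_diff(h):
    """Forward-difference approximation of d/dx sin(x)."""
    return (math.sin(x + h) - math.sin(x)) / h

for h in (1e-1, 1e-5, 1e-8, 1e-13):
    err = abs(fwd_diff(h) - exact)
    print(f"h={h:8.0e}: error {err:.2e}")
```

For large h the error is dominated by truncation of the Taylor series; for very small h, the subtraction of two nearly equal floating-point numbers amplifies rounding error.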

Uncertainty in physical testing—In physical testing, uncertainty arises from uncontrolled or unknown inputs, measurement errors, aleatoric phenomena, and limitations in the design and implementation of tests such as maximum resolution and spatial averaging. These uncertainties result in noisy experimental data, and can necessitate replication and reproduction of scientific experiments to attempt to reduce the uncertainties in desired measurements.

 

UQ Inverse Analysis solutions. Source: SmartUQ


UQ puts “error bars” on simulation results

One of the primary objectives of running simulations is to resolve critical programmatic issues in complex systems. This requires a high degree of confidence in the relevance of simulation results to the real world. The bottom line is that you don’t know how good the simulation results are without quantifying their uncertainty. Uncertainty quantification effectively provides “error bars” on the simulation results.
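Such “error bars” can be produced, for instance, by propagating an uncertain input through the simulation many times and reporting percentile bounds on the output. The toy response function and drag-coefficient statistics below are hypothetical stand-ins for a real simulation.

```python
import random
import statistics

random.seed(3)

def simulate(drag):
    # Toy simulation response; the drag coefficient is the uncertain input.
    return 100.0 / (1.0 + drag)

# Propagate an uncertain drag coefficient and report a 95% interval.
outputs = sorted(simulate(random.gauss(0.30, 0.03)) for _ in range(10_000))
lo, hi = outputs[249], outputs[9749]  # ~2.5th and ~97.5th percentiles
median = statistics.median(outputs)
print(f"prediction: {median:.1f} (95% interval {lo:.1f} to {hi:.1f})")
```

Reporting the interval alongside the point prediction is what turns a single deterministic answer into a statement with quantified confidence.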

For engineers, the benefit of UQ is being better informed about the uncertainties present in simulation results when using them to make critical design decisions. More informed decision-making leads to better product development outcomes.

Special thanks to SmartUQ for providing information for the article.

SmartUQ LLC
www.smartuq.com