How Low-Code Platforms Power Insight-Based Decisions Across Digital Twins

From watching Mendix World 2020

Digital twins help design, simulate, operate, maintain and optimize their physical counterparts. Understanding the decision-making processes and feedback loops between the twins is essential to mining and combining the relevant data and transforming it into actionable insights. Multiple digital twins relating to a given product line or asset will inevitably aggregate common data across multiple models, formats and sources. When it comes to digital twins, low-code platforms must therefore be open, powerful and flexible in retrieving and interpreting structured, semi-structured and often unstructured data. This is especially relevant when developing a portfolio of products and services and focusing on effective continuous improvement.
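
As a concrete illustration, the sketch below (plain Python, rather than any particular low-code product) shows the kind of merge a platform connector performs under the hood when combining structured, semi-structured and unstructured views of the same asset. Every source, field and asset name is invented for illustration.

    # A minimal sketch of multi-source data aggregation for one asset.
    # All schemas, identifiers and sources below are hypothetical.
    import csv
    import io
    import json

    # Structured data, e.g. an ERP export (made-up columns).
    erp_csv = "asset_id,cost_center,status\nPUMP-001,CC-42,in_service\n"

    # Semi-structured data, e.g. an IoT telemetry payload (made-up schema).
    iot_json = '{"asset_id": "PUMP-001", "vibration_mm_s": 4.2, "temp_c": 61.5}'

    # Unstructured data, e.g. a maintenance note to be mined for keywords.
    note = "Operator reported intermittent bearing noise on PUMP-001."

    def merge_asset_views(erp_text: str, iot_text: str, free_text: str) -> dict:
        """Combine three representations of one asset into a single record."""
        erp_row = next(csv.DictReader(io.StringIO(erp_text)))
        iot = json.loads(iot_text)
        assert erp_row["asset_id"] == iot["asset_id"]  # same physical twin
        return {
            **erp_row,
            **iot,
            # Naive keyword matching stands in for real text analytics.
            "flags": [w for w in ("noise", "leak", "crack") if w in free_text],
        }

    print(merge_asset_views(erp_csv, iot_json, note))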

Low-code development platforms have low entry barriers, promoting effective creativity and collaboration, from digital twin integration to flexible process customization (much as children learn logic, problem-solving and idea sharing using the no-code platform Scratch). (Image courtesy of Scratch Team/MIT Media Lab; sample project from https://scratch.mit.edu/.)

What Data Constitutes Digital Twins?

You might have heard this before: “Not all digital twins are born equal.”

This is true. Though virtual and simplified representations of the real, digital twins are often complex, intelligent models: built on internal and external data constructs, fed with multiformat data inputs and outputs (often large amounts of semi-structured or fully unstructured data), conglomerating various technical properties and business attributes, and driven by analytical models. As such, the data they aggregate often resides in different systems and data sources.

Digital twin models have evolved significantly since the early 1960s; under the hood, they have exponentially increased in accuracy and speed to delivery, but also in complexity and scope—still requiring specialist expertise from qualified engineers and technical staff. Do we really understand enough about how digital twins are iteratively created, used and maintained? Can such knowledge be further democratized by low-code platforms? (Image credit: Sketchpad demo with Ivan Sutherland in 1963, extract from https://bimaplus.org/news/the-very-beginning-of-the-digital-representation-ivan-sutherland-sketchpad/.)

Beyond the hype, digital twins can be categorized into three core groups:

1. Engineering and simulation models (digital twins supporting product creation, leveraging CAD, CAE, PLM and ERP data): these represent an asset, product or process that does not yet exist physically, one that is being designed and engineered from scratch or modified from carryover data, so that its viability and performance can be virtually tested and validated ahead of, or in parallel with, its actual physical validation.

These models contain 3D, 2D and 1D representations, with or without CAD data, other geometrical and nongeometrical technical input, material properties, simulation conditions, or associated results—from assembly and clash simulation to kinematics and finite element analysis, but also other mechanical, electrical, fluid, thermal, tooling, machining, forging, composite manufacturing and ergonomics simulation. Such digital twins aim to reduce physical prototyping and improve the design of an asset as early as possible, ahead of its physical validation, industrialization and operations. 

Example: a virtual model to simulate crash tests according to regulatory requirements, ahead of validation with a physical prototype of a new vehicle.
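
To make the notion of a simulation "data construct" more tangible, here is a minimal, hypothetical sketch of how such a twin's record might be shaped: a pointer to geometry in PLM, material properties and regulatory load cases. The names and values are illustrative, not any vendor's schema.

    # A hypothetical engineering/simulation twin record (Python 3.10+).
    from dataclasses import dataclass, field

    @dataclass
    class LoadCase:
        name: str                    # e.g. a regulatory crash scenario
        impact_speed_kmh: float
        passed: bool | None = None   # None until the solver has run

    @dataclass
    class SimulationTwin:
        cad_reference: str           # pointer into PDM/PLM, not the geometry itself
        material: str
        load_cases: list[LoadCase] = field(default_factory=list)

        def unvalidated(self) -> list[str]:
            """Load cases still to simulate before physical prototyping."""
            return [lc.name for lc in self.load_cases if lc.passed is None]

    twin = SimulationTwin(
        cad_reference="plm://vehicle/body-in-white/rev-C",
        material="HSLA steel",
        load_cases=[LoadCase("frontal offset", 64.0), LoadCase("side pole", 32.0)],
    )
    print(twin.unvalidated())  # ['frontal offset', 'side pole']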

2. Industrial and manufacturing models (digital twins enabling Industry 4.0, leveraging industrial IoT shop-floor and machine sensors alongside CAD, PLM, MES and ERP data): these represent a physical asset, product or process that is being produced, commissioned or executed, consuming and integrating data from diverse assets and upstream systems.

These models reuse and complement digital information from product development, helping to optimize execution delivery and drive zero-defect product and process optimization, such as improving assembly line or machine performance. The physical assets drive the outcomes, and feedback loops continue to drive improvements through their digital twin counterparts.

Example: the virtual optimization of a manufacturing or assembly process on the shop floor, and the continuous improvements in production quality and execution performance—creating new insights from the assembly line that can be fed back to both product creation and the actual assembly process itself.
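
The sketch below illustrates this feedback loop with made-up stations, readings and thresholds: shop-floor measurements feed the manufacturing twin, which flags where the physical process is drifting from target.

    # A minimal sketch of a shop-floor feedback loop; all values invented.
    from statistics import mean

    # Hypothetical torque readings per assembly station (Nm).
    readings = {
        "station_A": [12.1, 12.0, 12.3, 11.9],
        "station_B": [12.2, 13.8, 14.1, 13.9],  # drifting upward
    }
    TARGET_NM, TOLERANCE_NM = 12.0, 0.5

    def drift_report(samples: dict[str, list[float]]) -> dict[str, float]:
        """Return, per station, the deviation of the mean torque from target."""
        return {name: round(mean(vals) - TARGET_NM, 2) for name, vals in samples.items()}

    for station, drift in drift_report(readings).items():
        if abs(drift) > TOLERANCE_NM:
            # In a real loop this insight flows back both to process
            # engineering (digital twin) and to the line itself (physical twin).
            print(f"{station}: recalibrate, mean drift {drift:+} Nm")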

3. Field operations models (digital twins providing insights into how assets are used “in the field,” leveraging IoT and asset sensors alongside CAD, PLM, ERP and CRM data): these represent a physical asset that is in use and being maintained, consuming data from its usage and feeding it back to its engineering and industrial digital twins.

These models help operate physical assets, gather analytics for the current and next generations of assets, and drive predictive and operational maintenance while minimizing disruption to operators. The physical assets drive data creation; in turn, feedback loops help improve all the other digital twins with return on experience from the field, which also feeds back to the assets themselves in the form of upgrades and other improvements with the potential to extend their life span or simply their usability.

Example: a real-time, predictive maintenance model that helps monitor, adjust and reduce operational costs and risks by simulating how physical assets perform based on fleet data for critical components, such as an airplane turbine or a power station.
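
A minimal sketch of the fleet-based idea behind this example, with invented sensor values: compare one asset's reading against the statistics of its sister assets to decide whether to pull maintenance forward.

    # A naive fleet-comparison rule; real predictive models are far richer.
    from statistics import mean, stdev

    # Hypothetical exhaust-gas temperatures (deg C) from sister engines.
    fleet_egt = [612.0, 608.5, 615.2, 610.8, 609.4, 613.1]
    this_engine_egt = 636.7

    def needs_inspection(value: float, fleet: list[float], k: float = 2.0) -> bool:
        """Flag an asset sitting more than k standard deviations off the fleet mean."""
        mu, sigma = mean(fleet), stdev(fleet)
        return abs(value - mu) > k * sigma

    if needs_inspection(this_engine_egt, fleet_egt):
        print("Schedule borescope inspection ahead of plan.")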

Low-Code Meets Insights from Digital Twins

When using low-code development platforms, the objectives are always simplification, acceleration and flexibility. In the context of digital twin models, low-code contributions can be twofold:

  • Drive digital twin improvements, which in turn improve their physical twins (feedback loops).
  • Drive convergence toward a “single version of truth” across the digital thread (everyone looking at the same data and speaking the same business language), improving accessibility and the sharing and reuse of knowledge across multiple digital twins (integration and collaboration); in turn, this reduces data duplication and misalignment, increases accuracy and improves iterations.

By making digital twin data and its integration more accessible, low-code platforms can create new opportunities for us to understand more about the physical assets that the twins represent and how to make them better, faster and cheaper. This premise goes beyond the ability to create workflows, integrate multiple IT systems or migrate data between digital platforms. It looks at the business outcome, and at the insight-based decisions required to reach that outcome.
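
As an illustration of what such integration might look like beneath a low-code workflow, the sketch below joins a hypothetical PLM view and an IoT view of the same part into one business decision. The connectors are stubs; in a real platform each would be a generated REST or OData call into PLM, MES or an IoT hub.

    # A minimal sketch of a cross-system "join" in one workflow step.
    def plm_connector(part_number: str) -> dict:
        # Stand-in for e.g. GET /plm/parts/{part_number} (hypothetical endpoint).
        return {"part_number": part_number, "design_revision": "C", "mass_kg": 3.4}

    def iot_connector(part_number: str) -> dict:
        # Stand-in for a telemetry query keyed on the same identifier.
        return {"part_number": part_number, "failures_last_90d": 7}

    def decision_step(part_number: str) -> str:
        """Combine both twins' views of one part into a business decision."""
        design = plm_connector(part_number)
        field_data = iot_connector(part_number)
        if field_data["failures_last_90d"] > 5:
            return f"Raise an engineering change against revision {design['design_revision']}"
        return "No action"

    print(decision_step("BRKT-0042"))  # made-up part number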

The following table illustrates basic examples of how low-code platforms could help discover insights from both virtual and physical assets (from both “twins”) and make better informed business decisions.

One can argue that the above insights are already available today. The question is one of simplification, making insights more accessible and finding new ones. For this, low-code platforms will need to rely on their openness, but also the openness of the other enterprise platforms that they interface with. There are still many questions to address, including:

  • Are the required APIs open and fit for purpose in order to access, extract, and slice and dice such complex datasets using low-code platforms? (See the sketch after this list.)
  • How can low-code languages bridge data across digital twins and their respective data management platforms? 
  • Are we on a path toward mandatory core system modernization? 
  • Will low-code contribute to platform rationalization or will it instead contribute to the duplication of systems and data repositories?
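
On the first question, “open and fit for purpose” often comes down to whether an API supports server-side filtering and projection, so that a low-code app can request exactly the slice it needs. The OData-style query below is purely illustrative; the endpoint and parameters are assumptions, not a documented interface.

    # Building a hypothetical, OData-style query against a twin data API.
    from urllib.parse import urlencode

    base = "https://twin.example.com/api/v1/measurements"  # invented endpoint
    query = urlencode({
        "$filter": "assetId eq 'PUMP-001' and timestamp ge 2020-01-01",
        "$select": "timestamp,vibration",
        "$top": 500,
    })
    print(f"{base}?{query}")
    # Without such server-side operators, a low-code app must pull whole
    # datasets client-side, which rarely scales to digital twin volumes.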
Written by

Lionel Grealou

Lionel Grealou, a.k.a. Lio, helps original equipment manufacturers transform, develop and implement their digital transformation strategies—driving organizational change, data continuity and process improvement, and managing the lifecycle of things across enterprise platforms, from PDM to PLM, ERP, MES, PIM, CRM and BIM. Beyond consulting roles, Lio has held leadership positions across industries, with both established OEMs and start-ups, covering the extended innovation lifecycle, from research and development to engineering, discrete and process manufacturing, procurement, finance, supply chain, operations, program management, quality, compliance, marketing, etc.

Lio is an author of the virtual+digital blog (www.virtual-digital.com), sharing insights about the lifecycle of things and all things digital since 2015.