A look at the many forms of AI, their differentiators and how to integrate them.
A never-ending challenge for every organization, regardless of why it exists, is keeping track of what’s going on in and around it. This includes identifying new and emerging players and business models, of course, but also three sets of drivers of fundamental change: new technologies to be mastered and incorporated into new products, irresistible trends in the marketplace where one competes, and the processes and functional capabilities (i.e., elements) necessary for successful digital transformation (i.e., CIMdata’s Critical Dozen).
As I described in previous articles, CIMdata’s Critical Dozen is an evolving set of elements that an organization must master in its digital transformation journey. This means that yesterday’s dozen isn’t necessarily tomorrow’s. This said, two elements (digital twins and digital threads) of the current Critical Dozen have hit their inflection point, forcing their convergence, and a new element—Augmented Intelligence—has emerged.
Digital Twin-Digital Thread Convergence
As I have often stressed, a digital twin without a digital thread is an orphan. A digital twin, as I described in previous articles, is a virtual representation of a product, a system, a piece of software, a network, or an asset. One or more digital threads keep that representation up to date from the initial concept through its complete lifecycle.
Digital twins and digital threads are co-dependent, always have been, and always will be. Without a digital thread, a digital twin is little more than database clutter with no guarantee of being up to date. And unconnected to a digital twin, a digital thread is a string of data running from nowhere to nowhere. What has changed is the perception of digital thread and digital twin codependence: it is now clearly and unambiguously understood.
Artificial Intelligence (AI) powerfully enhances this convergence by extending the breadth, depth, and reach of digital threads while uncovering hidden drivers and causes of changes in digital twins.
Product Lifecycle Management (PLM) is an innovation engine that helps orchestrate the creation, maintenance, and reuse of assets—digital twins and digital threads, in particular—that represent the enterprise’s products, systems, and networks. Thanks to easier and deeper access to the Internet of Things (IoT), digital threads foster steady enhancements in end-to-end connectivity throughout an asset’s lifecycle.
The Emergence of AI and its place amongst CIMdata’s Critical Dozen
The IoT is often seen as a major cause of the unprecedented explosion of data, structured and unstructured—”Big Data”—generated by billions of connected devices ranging from automobiles and HVAC systems to medical equipment, smartphones, and even smart doorbells.
As part of what we understand to be the Fourth Industrial Revolution, these oceans of data have grown beyond human comprehension and even everyday computational capability. In PLM, this data fills our digital twins, races up and down our digital threads (and everywhere else within PLM), and often surges nonstop between PLM and its adjacent enterprise solutions.
PLM-related AI-enabled applications are now leveraging this, and not a moment too soon. CIMdata wholeheartedly supports the exploration and adoption of AI as the next step in PLM’s central role in digitally transforming the enterprise, and in realizing AI’s potential to augment human intelligence. This Augmented Intelligence enables human decision-makers and domain experts to use the enormous inflows of data that threaten to overwhelm our digital systems.
As its name implies, Augmented Intelligence, sometimes labeled Intelligence Augmentation, amplifies human and machine intelligence by merging them (i.e., Augmented Intelligence is the implementation of AI-enabled capabilities that add to and aid human insight and decision-making). In contrast, Artificial Intelligence attempts to supersede human intelligence. This distinction aside, product innovation platforms (i.e., PLM platforms) managing specific processes and data will be key recipients and implementers across organizations.
The challenges that accompany any new technology are at work amid AI’s potential productivity gains. The regulatory and legal frameworks around AI technologies, including those that support Augmented Intelligence, already lag their rapid development. Ethical and moral considerations are already emerging, and implementers must develop long-term strategies for using and sharing data. And with everyone’s expectations changing, workforce skills must be upgraded and customers must be educated.
The Many Forms of AI and How They Differ
Decisive action and implementation require understanding the elements of AI, which enables algorithms to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
As with any emerging tech trend and process enabler, many different forms quickly appear in the marketplace as soon as the software and hardware are up to the task.
PLM solution providers are busily integrating AI tools and methods, including:
Generative AI (GenAI) is an iterative form of AI that can solve design problems in three steps, making it particularly useful for updating digital models, especially those related to PLM’s digital twins. (1) The design challenge is first formulated so that the software can generate a wide range of candidate solutions. (2) Each design is then evaluated by simulating its performance, dimensional fits, surfaces, and so on. (3) Algorithmic sorting then identifies the highest-performing designs; these designs are often shown as oddly lumpy structures. Integrated with PLM, GenAI promises faster digital threads and better updates to digital twins, among many other potential applications.
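The three-step loop above can be sketched in a few lines of Python. Everything here is illustrative: the beam dimensions stand in for a real design space, and the scoring formula stands in for a real simulation (e.g., finite element analysis):

```python
import random

def generate_candidates(n):
    # Step 1: generate candidate solutions by sampling the design
    # space (here, width/height pairs for a simple beam).
    return [(random.uniform(0.01, 0.1), random.uniform(0.01, 0.1))
            for _ in range(n)]

def evaluate(design):
    # Step 2: evaluate each design by "simulating" its performance.
    # A real tool would run a physics solver here.
    width, height = design
    stiffness = width * height ** 3   # proportional to bending stiffness
    mass = width * height             # proportional to material used
    return stiffness / mass           # higher is better

def best_designs(n=1000, keep=5):
    # Step 3: sort algorithmically and keep the top performers.
    candidates = generate_candidates(n)
    return sorted(candidates, key=evaluate, reverse=True)[:keep]

top = best_designs()
print(top)
```

A production generative-design tool adds manufacturing constraints and far richer simulation, but the generate–evaluate–sort skeleton is the same.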
Large Language Models (LLMs) generate specific text and images on demand by combining words and untangling the ambiguities in everyday language with vectors (and other tools) that rely on neural network architectures called transformers. LLMs, such as the wildly popular ChatGPT from OpenAI, are “trained” on enormous amounts of data, including text and images. ChatGPT and many other “chatbots” are put to work—”prompted”—with instructions in everyday, natural language; usually, no coding skills are required. By the way, “GPT” stands for Generative Pre-trained Transformer.
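The idea that vectors untangle ambiguity can be shown with a toy example. Real LLMs learn embeddings with thousands of dimensions from training data; the three-dimensional vectors and word senses below are made up for illustration:

```python
import math

# Toy word vectors; a real model learns these from enormous corpora.
vectors = {
    "bank_river":   [0.9, 0.1, 0.0],
    "bank_finance": [0.1, 0.9, 0.2],
    "money":        [0.0, 0.8, 0.3],
}

def cosine(a, b):
    # Cosine similarity: how closely two vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a)) *
            math.sqrt(sum(y * y for y in b)))
    return dot / norm

# Given the context word "money", the financial sense of "bank"
# scores far higher than the river sense.
scores = {w: cosine(v, vectors["money"])
          for w, v in vectors.items() if w.startswith("bank")}
print(max(scores, key=scores.get))
```

This nearest-vector comparison, scaled up and wired through transformer attention layers, is how LLMs resolve which meaning of a word fits a given prompt.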
Built on machine learning (see below), LLMs make data problems obvious, cutting through the AI hype and taking us back to AI’s roots. As with GenAI above, LLM integration with PLM promises enhanced and faster digital threads and better updates to digital twins, e.g., from the IoT.
AI in Analytics
For many, the historical beating heart of AI is Analytics, explained quite well in the accompanying Gartner chart, “Source Planning Guide for Data and Analytics.” The graph on the left side of the chart shows the four stages of Analytics—Descriptive, Diagnostic, Predictive, and Prescriptive; the axis labels them from a PLM user’s point of view.
AI-driven analytics can determine the value, usefulness, and relevance of an organization’s data. This is the source of AI’s new data fields that reveal previously unexpected insights, unknown connections, and unseen trends.
Many AI-enabled analytics approaches may prove beneficial to PLM, but two already stand out:
Machine Learning (ML) makes predictions and recommendations from patterns in data—whether structured or not. Over time, data and information can be added, and ML’s predictive algorithms can be enhanced without explicitly programmed instructions.
Deep Learning (DL) is a form of machine learning in which a cascade of processing layers and units extract increasingly complex features from previous outputs; the result is the determination of new data in a hierarchy of concepts using dozens of dimensions of analysis. DL is the basis of our ubiquitous “digital assistants” Siri, Alexa, and Cortana, as well as many complex tasks (such as image processing in autonomous vehicles) that are done rapidly by combining images, audio, and video.
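The “cascade of processing layers” can be sketched without any ML framework. This toy forward pass uses made-up weights; in a trained network those weights are learned, and there are many more layers and units:

```python
def relu(values):
    # A common activation: negative signals are zeroed out.
    return [max(0.0, v) for v in values]

def layer(inputs, weights, biases):
    # One processing layer: every unit combines ALL of the previous
    # layer's outputs, producing a higher-level feature.
    return relu([
        sum(w * i for w, i in zip(row, inputs)) + b
        for row, b in zip(weights, biases)
    ])

# A cascade of two layers: each extracts features from the previous output.
x = [0.5, -1.2, 3.0]                                   # raw inputs
h1 = layer(x, [[0.2, -0.1, 0.4], [0.7, 0.3, -0.2]], [0.1, 0.0])
h2 = layer(h1, [[1.0, -0.5]], [0.05])                  # final feature
print(h2)
```

Stacking dozens of such layers, each feeding the next, is what lets DL build the hierarchy of concepts described above—edges become shapes, shapes become objects.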
For PLM users, AI-enabled analytics benefits are found in critical but everyday tasks, including reducing time spent on routine and mundane tasks, avoiding mistakes, achieving more consistent results, and improving response times in:
- Material selection, manufacturing process definition, testing and quality control (QC), safety, and accident prevention.
- System and software modeling with ChatGPT and other LLM tools.
- Examining digital threads for novel insights and quickly identifying patterns in digital twins.
- Supporting the management of changes to products and their designs in PLM.
- Generation of training data for ChatGPT and other LLM tools.
AI in Enterprise Search
In preparing for my presentation for a recent virtual conference sponsored by Sinequa, a developer of an AI-enabled enterprise search platform, I recognized that many organizations seem to have forgotten that data is the core of digital transformation and that their digital initiatives are doomed unless data is better identified, understood, appreciated, and managed.
Many organizations overlook their PLM users’ constant, time-consuming searches for the information they need in their jobs; that data and information should be readily available in the digital twins and threads and in the digital webs, processes, and systems where the organization’s data is generated and managed.
Data generation itself is a problem. The processes, systems, and smart connected products in today’s digital world generate huge amounts of data, threatening to lurch out of control. This makes finding the right data for actionable insights and sound decisions imperative. Big Data has already rendered most ordinary searching ineffective, so the latest capabilities must be provided, and not just to PLM users.
Leading solution providers like Sinequa also offer and leverage an ever-expanding array of AI-enabled tools and techniques to minimize GenAI’s mistakes and what users label its hallucinations.
A particularly successful approach that Sinequa leverages is Retrieval Augmented Generation (RAG). By expanding LLMs’ information bases, improving context, and eliminating outdated information, RAG addresses the LLMs’ biggest difficulty: being crammed to overflowing with data. RAG’s back-end information retrieval enables smaller, specialized LLMs, which can revolutionize enterprise search and overcome traditional information boundaries.
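A minimal RAG flow has two parts: retrieve the most relevant passages, then hand them to the model as grounding context. The sketch below uses naive word-overlap retrieval and invented PLM-flavored documents purely for illustration; real systems (Sinequa’s included) use far more sophisticated ranking:

```python
def retrieve(query, corpus, k=2):
    # Back-end retrieval: rank documents by word overlap with the query.
    # Production systems use vector embeddings and learned rankers instead.
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, corpus):
    # Ground the model in retrieved passages instead of relying
    # only on what was baked in at training time.
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Part 104-A was revised in ECO 2291 to use a titanium bracket.",
    "The cafeteria menu changes weekly.",
    "ECO 2291 affects digital twin models of assembly line 3.",
]
print(build_prompt("What did ECO 2291 change?", corpus))
```

Because the answer is constrained to retrieved, current enterprise content, the model is far less likely to hallucinate or cite outdated information.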
With AI discussions prominent in boardrooms and on Wall Street, we can expect that forthcoming PLM solutions and every industrial product and service will likely have some form of embedded AI. Some recent announcements:
- A ChatGPT-based “Copilot” key has been added to the keyboards of two new Microsoft Surface laptops between the keyboard’s arrows and the Alt key. Copilots are an advance over digital assistants in that they are built on LLMs to automate tasks and not just assist with them.
- Google’s LLM generative AI tool Gemini may be added to Apple iPhones later this year; Gemini is already in Android phones made by Google and others.
- To help build its version of the industrial metaverse, Siemens Digital Industries Software and NVIDIA Corp., Santa Clara, Calif., are collaborating to add immersive visualization to the Siemens Xcelerator and its Teamcenter X PLM platforms; the goal is to use AI-driven digital twin technology to enhance the visualization of complex data, with “ultra-intuitive” photorealism. NVIDIA has similar links to Ansys, Lenovo, and Rockwell Automation.
- To improve decisions based on PLM data and to boost supply chain productivity, data analysis, and sustainability workflows, more than 50 generative AI features have been added to the Oracle Fusion Cloud.
- Predictive maintenance aims to reduce equipment failures by continuously monitoring condition and performance via text, video, imaging, etc., using ML, DL, and access to the IIoT.
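A first building block of the predictive maintenance described in the last bullet is anomaly detection on a sensor stream. This sketch flags readings that deviate sharply from the recent trend; the vibration values and thresholds are invented for illustration, and real systems layer ML models on top of such signals:

```python
import statistics

def flag_anomalies(readings, window=5, threshold=3.0):
    # Flag readings more than `threshold` standard deviations away
    # from the mean of the preceding `window` readings.
    flags = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.stdev(recent) or 1e-9  # avoid divide-by-zero
        if abs(readings[i] - mean) > threshold * stdev:
            flags.append(i)
    return flags

# Vibration data from an IIoT sensor: steady, then a sudden spike.
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 1.02, 0.98, 5.0, 1.01]
print(flag_anomalies(vibration))  # → [7], the index of the spike
```

Flagged indices would then trigger a maintenance work order in PLM or an adjacent asset-management system before the equipment actually fails.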
The future of AI and PLM
As the preceding examples show, opportunities unlocked by AI-enabled solutions will be transformative and even revolutionary. Bringing PLM and AI together will empower human creativity with enhanced abilities to turn ideas into realities. The resulting new products, services, and business models will ensure the entire enterprise’s long-term sustainability.
CIMdata foresees that AI in digital transformation will further strengthen the PLM innovation engine and extend its use, widening the use of AI and broadening demand for it. AI in PLM will prove invaluable in helping us to quickly discover “what we know,” i.e., what we have and can find in our data, along with revealing what we still need (or can’t find), comprehending the real value of our existing content, and finding new ways to generate what we need—thereby augmenting human intelligence in new and improved ways.
If businesses now run on data, what has changed?
Decisions are now based on examining data and poring through what appears to be endless streams of data, data that is assumed to be good. “Good” in this context means accurate and complete, which we all know data never is. Gone is the time-honored basing of decisions on direct observation of designs, production, sales, field service, and customer feedback. The few experienced and long-serving employees and managers still on the job are rapidly being displaced by data processed in many new (digital) ways and governed by poorly understood “standards.”
In other words, running businesses based on “experience” and direct observation is being augmented with new, powerful AI-enabled tools. Thanks to the Fourth Industrial Revolution, digital transformation, and so many retirements, business decisions are now based almost entirely on data. The overriding concern is whether AI can improve data quality before it is too late.
In an April 2024 Scientific American article titled A Truly Intelligent Machine, author George Musser explains why AI is so compelling: “We now realize that tasks we find easy, such as visual recognition, are computationally demanding, and the things we (humans) find hard, such as math and chess, are really the easy ones.”
The promises and benefits of bringing AI-enabled augmented intelligence into PLM are so huge that the consequences of not doing so are intimidating. But, incorporating Large Language Models into PLM is still in the initial stages while, at the same time, many changes are coming in AI. The PLM transformation will likely be long and bumpy.