How to Deploy AI Responsibly

How engineers can weigh privacy, safety and model limitations to harness the power of AI-powered tools responsibly.

With the recent widespread accessibility of generative AI (GenAI), engineers and business leaders are enamored with the potential of AI to revolutionize our lives. But governing bodies, researchers and experts in the field are calling for consensus on how to deploy AI safely and effectively.

Michael Connell, Ph.D., is the Chief Operating Officer at Enthought, a company focused on providing AI/ML solutions that accelerate research and development at companies in industries like pharmaceuticals and materials science.

Drawing from his industry experience and advanced degrees in electrical engineering, AI/ML and education, Connell says there are several important points for engineers to consider while using AI-enabled tools and solutions.

AI Disrupted the Globe, Seemingly Overnight

Large language models (LLMs) such as ChatGPT transformed the world seemingly overnight when they became broadly available to the public in late 2022. Many industries were stunned by the potential of GenAI, and news outlets decried the coming end of the workforce as we know it.

“Where the digital computer was a transformative technology, [AI] is even more profound,” says Connell. For generations, people have warned of the dangers of the next new technology. But this time, the advancement no longer feels incremental. Experts and the public alike are still trying to wrap their heads around everything possible with these types of tools, both now and in the near future.

Indeed, it’s already too late to stop the integration of AI into society. Instead, Connell and other experts want to focus on being thoughtful and considering the bigger picture. What questions will this new technology raise? How will it impact society? How can we maximize these tools without sacrificing safety or privacy?

“AI has very disruptive potential in a remarkably short time frame,” says Connell. “[As one example] 100 percent of college essays will now be drafted by these types of technologies, and so [ChatGPT] transformed the admissions process overnight.”

So, how do we manage this rapid transition? What will this mean for engineers hoping to responsibly engage with these tools and solutions?

Concern Is Growing over Unregulated AI and LLMs

Connell specifically highlighted his concern about the opaqueness of LLMs. Many of these solutions (ChatGPT included) are data-hungry, which raises intellectual property and privacy issues. For example, a note-taking app might be using your notes to train itself; if so, who owns those notes, and how are their contents kept private? For now, this remains a legal grey area that may not worry a casual notetaker but can pose considerable security and privacy risks in technical use cases.
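One practical mitigation engineers do control is what leaves the machine in the first place. The sketch below is illustrative only: the regex patterns and placeholder labels are assumptions, and a production system would use a dedicated PII-detection library. The idea is simply to redact obvious identifiers before a note is ever sent to a hosted model:

```python
import re

# Regexes for a few common identifiers. These patterns are illustrative;
# a real deployment would use a dedicated PII-detection library and
# handle many more cases (names, addresses, account numbers, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace likely personal identifiers with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Call Dana at 512-555-0147 or email dana@example.com about the assay."
print(redact(note))
# -> "Call Dana at [PHONE] or email [EMAIL] about the assay."
```

Only the redacted text would then cross the network boundary to whatever hosted model is in use, which narrows the exposure regardless of how the vendor handles retention.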

Another ongoing issue is that LLMs are trained on human-generated data, so they absorb human biases. Many users, especially non-experts, assume that AI output is computer-generated and therefore free of bias. But a model trained on human-produced content inherits the biases and assumptions of its authors, which makes critical analysis of its responses all the more important. Such a tool can never be fully objective, and its output should be treated accordingly.
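Teams can at least probe for such bias empirically, for instance by swapping a demographic term in otherwise identical prompts and comparing scored outputs. Here is a toy harness along those lines; the canned responses and word-list sentiment score are stand-ins for a real model call and a proper sentiment or toxicity metric:

```python
# Toy counterfactual bias probe: swap one demographic term in an otherwise
# identical prompt and compare a simple sentiment score of each response.
# `fake_model` returns canned text; a real harness would call an actual LLM.
POSITIVE = {"granted", "praised", "promoted"}
NEGATIVE = {"denied", "dismissed", "ignored"}

CANNED = {
    "young": "The request was granted and the engineer was promoted.",
    "older": "The request was denied and largely ignored.",
}

def fake_model(prompt: str, group: str) -> str:
    return CANNED[group]  # placeholder for a hosted or local model call

def sentiment(text: str) -> int:
    words = set(text.lower().replace(".", "").split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

template = "The {group} engineer asked for a raise. Describe the outcome."
scores = {g: sentiment(fake_model(template.format(group=g), g)) for g in CANNED}
print(scores)  # a large spread flags prompts worth auditing by hand
```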

One example of this reached the news recently when a lawyer used ChatGPT to draft a motion. ChatGPT cited fictional cases and documents, leaving the lawyer unable to substantiate the claims made in the motion.

In response to these types of issues, Connell raised an important question: “How do we get this system to recognize its own limitations?” While the natural language processing of LLMs is incredibly advanced, these systems still need to integrate better with existing tools, recognize their limitations, and then address them using appropriate sources. In the legal example, instead of querying a legal database, ChatGPT simply generated text that looked like other motions, inserting fictional cases and references. Connell explains that tools like ChatGPT often produce language patterns that look like citations because they cannot tell the difference between what is and isn’t real.
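One common engineering response to this failure mode is retrieval grounding: let the system answer only from sources it can actually retrieve, and have it decline otherwise. Below is a minimal sketch of the idea, with a hard-coded two-entry corpus and crude keyword overlap standing in for a real legal database and search index:

```python
# Toy retrieval-grounding sketch: answer only from retrievable sources,
# refuse otherwise. The corpus and keyword scoring stand in for a real
# document database and search index.
CORPUS = {
    "Smith v. Jones (1998)": "Holding on contract damages ...",
    "Doe v. Acme Corp (2011)": "Holding on product liability ...",
}

def retrieve(query: str, min_overlap: int = 2):
    """Return (source, text) pairs that share enough words with the query."""
    words = set(query.lower().split())
    hits = []
    for source, text in CORPUS.items():
        overlap = len(words & set((source + " " + text).lower().split()))
        if overlap >= min_overlap:
            hits.append((source, text))
    return hits

def grounded_answer(query: str) -> str:
    hits = retrieve(query)
    if not hits:
        # Recognize the limitation instead of inventing a citation.
        return "No supporting source found; declining to answer."
    cited = "; ".join(source for source, _ in hits)
    return f"Answer drafted from: {cited}"

print(grounded_answer("product liability holding in Acme"))  # cites Doe v. Acme
print(grounded_answer("maritime salvage precedent"))         # declines
```

The key design choice is the refusal branch: a system that can say “no source found” fails loudly instead of fabricating a plausible-looking citation.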

Beyond law, technical fields also struggle to get value from LLMs. Many of these issues arise from the metadata, or “language,” of a given field. For LLMs to be useful, they need to be able to “read” that language and generate appropriate outputs, and sectors such as engineering, chemistry and biology often lack the information structure and quantity of data needed to train a robust LLM. Where an AI-generated blog post can draw on an internet’s worth of examples, there may be no comparable body of quality data on, say, structure-target engagement in drug discovery.

But the potential is there. With generative AI, a scientist can pose an unstructured query against unstructured data and get back a structured answer. As scientists and engineers look toward integrating these tools into their workflows, it is crucial to weigh the potential biases and limitations of a solution before significant mistakes can arise.
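In practice, that often means asking the model for a fixed schema and validating the response before it enters a pipeline. Here is a minimal sketch in which `call_model` is a hypothetical stand-in for a real LLM client (it returns a canned response so the example runs) and the compound/target schema is invented for illustration:

```python
import json

SCHEMA_PROMPT = """Extract every compound/target pair from the text below.
Respond with JSON only: a list of objects with keys "compound", "target",
and "affinity_nM" (number or null).

Text: {text}"""

def call_model(prompt: str) -> str:
    # Stand-in for a real LLM client; returns a canned response so the
    # sketch is runnable. Swap in the hosted or local model of your choice.
    return '[{"compound": "EX-101", "target": "EGFR", "affinity_nM": 4.2}]'

def extract_pairs(text: str) -> list[dict]:
    raw = call_model(SCHEMA_PROMPT.format(text=text))
    records = json.loads(raw)  # raises if the model strays from JSON
    for r in records:
        if not {"compound", "target", "affinity_nM"} <= r.keys():
            raise ValueError(f"Malformed record: {r}")
    return records

notes = "EX-101 bound EGFR with ~4.2 nM affinity in the May assay."
print(extract_pairs(notes))
```

The validation step is the responsible-deployment part: a schema violation raises an error at the boundary rather than letting a hallucinated field propagate into downstream analysis.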

AI: Intersecting Ethics, Philosophy, and Technology

Although literature, film and television have long explored the grey area between advanced robotics and humans, as a society we are now truly facing tools that can transform our everyday lives using our own language and mannerisms. Even if GenAI tools like ChatGPT don’t look or sound like robots, in many ways they are being treated as personal, or even technical, assistants.

However, while many concerns have been raised about GenAI replacing a large portion of the workforce, Connell was much more optimistic. He sees these tools eliminating some jobs while creating entirely new occupations and fields. Although we might not yet know exactly what those roles will entail, he is confident the technology will ultimately transform the workforce for the better.

Beyond the workforce, the use of GenAI raises other fascinating questions at the intersection of philosophy and technology. For example, what is art? If many artists take inspiration from others in their field, how does that differ from an LLM copying, or taking inspiration from, other artists? In the future, who will own AI-generated art or literature? How will this fundamentally alter copyright law and the experience of art, reading, film and television? Connell did not claim to have answers to these questions, but he emphasized that they will be essential to consider as more and more companies use these tools for content creation and marketing.

Another important thought experiment concerns a possible plateau in the utility of these tools. Paul Graham has predicted that within two years, about 40 percent of all content could be generated or drafted by an LLM. Over time, then, these solutions could atrophy without a continuous supply of new, human-produced content. In the short term, people can use these tools to maximize their productivity; in the long term, we could lose human talent that is no longer being cultivated, leaving little to differentiate one piece of content from another.
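The atrophy concern can be illustrated with a deliberately simplified simulation: if each “generation” of a model is fit only to samples drawn from the previous generation rather than fresh human data, diversity tends to decay. Here a Gaussian stands in for the distribution of human content, an assumption made purely for illustration:

```python
import random
import statistics

# Toy "model collapse" sketch: each generation refits a mean and standard
# deviation to samples from the previous generation's model instead of to
# fresh human data. The maximum-likelihood std estimate is biased low and
# estimation noise compounds, so diversity (std) tends to drift downward.
random.seed(0)
mu, sigma, n = 0.0, 1.0, 50
for generation in range(30):
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)  # ML estimate, biased low
print(f"std after 30 generations: {sigma:.3f} (started at 1.0)")
```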

What Can Experts Do?

Connell and other AI experts are calling for action to integrate responsibility directly into LLMs and other AI-powered tools, especially those accessible to the public.

One key limitation of LLMs and traditional AI is the datasets used to train them. A solution is only as good as its training data, meaning that most tools are at the mercy of the resources, time, money and engineers available for their development. Connell argues that governments should invest in the curation and consolidation of datasets for training LLMs, to reduce bias, improve accessibility, and regulate the inputs and outputs of these solutions.

Overall, collaboration among AI experts, governing bodies and technical stakeholders will likely be necessary to ensure the responsible deployment of AI-based tools. Digital literacy must also be emphasized in schools as a new generation reckons with the implications of an AI-powered future.

Steps are already being taken in this direction: Connell recently participated in the Office of Science and Technology Policy’s call for public input as the United States government builds its National AI Strategy. The final strategy will aim both to maximize the benefits and to mitigate the risks of AI-powered solutions, specifically GenAI.

A reckoning is already underway: a core demand of the ongoing SAG-AFTRA and WGA strikes is increased transparency and regulation of GenAI in the film and television industry. There is currently no clear consensus on how AI could be kept out of the industry. Still, actors, writers and other creatives are gravely concerned about their jobs, the art form, and a potential future in which these tools are responsible for generating creative content.

Connell laughed as he added, “The complexity of [these issues] is so high, we will need AI to help us solve all the problems AI has created.”