Services like ChatGPT can be a huge time saver, but they’re also risky. Here’s how to develop clear company guidelines for engineers to follow.
ChatGPT has taken the world by storm. The chatbot and its growing list of competitors, including Google Bard, Microsoft Bing Chat, DALL-E and Midjourney, are examples of generative artificial intelligence (AI), a technology that has been adopted widely and with astonishing speed.
ChatGPT went viral primarily because it is easy to interact with and improves productivity in many settings, saving vast amounts of time and money. Generative AI has already had a significant impact on many industries, including engineering.
Given the rapid and often ad hoc adoption of generative AI software services, it’s time for engineering management to establish policies to ensure responsible use and draw attention to risks. Whether, when and how to adopt generative AI should not be left to individual engineers’ judgment.
A short, well-crafted policy on generative AI can enable companies to achieve the benefits and minimize the risks of using this widely applicable technology. Not sure where to start? Here’s a list of what to consider—and what to avoid—in order to craft an effective generative AI policy.
The importance of a generative AI policy for engineers
Generative AI offers many potential benefits to organizations. However, there are several concerns with its use. Many of these arise because generative AI does not possess intelligence, despite superficially compelling examples to the contrary. The concerns include the following:
Errors: Generative AI often creates content that blends accurate and erroneous statements. These errors, commonly called hallucinations, occur partly because the training data contains a similar blend of fact and error, and partly because the model generates statistically plausible text rather than verified facts. Generative AI output is also highly sensitive to small changes in the prompt text, a sensitivity that can produce broad and contradictory variations in output and undermine confidence (see the sketch after this list).
Safety: A frequent use of generative AI is to develop software. If that software operates devices such as autonomous vehicles, robots or manufacturing plants, defects introduced by generative AI can create unsafe or dangerous situations.
Intellectual property: Generative AI can be used to create seemingly new content that infringes on the intellectual property rights of others. The infringement arises when copyrighted material was included in the model’s training data, and it can expose the company to lawsuits and damages.
Misuse: Generative AI can create realistic but fake content, such as fake news stories, misleading analytical reports and fabricated images or videos known as deepfakes. Malicious actors can use such content to spread misinformation and disinformation that misleads and manipulates public opinion.
Bias: Generative AI models are often biased because the data used to train them is biased. Examples include age, racial, ethnic, gender and political bias. Discovered bias sometimes leads to discrimination lawsuits and damages, and skewed or biased training data can also produce faulty engineering analysis and dubious recommendations.
Privacy: Generative AI can be used to create realistic fake content about real individuals, such as fabricated text, images or videos that violate individuals’ or customers’ privacy or defame them. Privacy violations can lead to lawsuits or to fines under Europe’s General Data Protection Regulation (GDPR).
Ethics: Generative AI use raises broader ethical concerns about a company’s culture, reputation and alignment with its stated mission, values and goals. Companies have been accused of unethical behavior based on how they employ generative AI results, and in art and literature the use of generative AI raises questions about the authenticity and originality of the work.
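The prompt-sensitivity concern above is easy to demonstrate. Below is a minimal sketch that sends two nearly identical prompts to a model and prints the answers side by side; it assumes the official openai Python client (v1.x) with an OPENAI_API_KEY environment variable, and the model name is illustrative. Run it a few times and the answers will often diverge in substance, not just wording.

```python
# Minimal sketch of prompt sensitivity. Assumes the openai Python client (v1.x)
# and an OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Two prompts that differ by a single qualifier.
prompts = [
    "List the main causes of fatigue failure in steel beams.",
    "List the primary causes of fatigue failure in steel beams.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; substitute your company's approved model
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    print(f"PROMPT: {prompt}")
    print(f"ANSWER: {response.choices[0].message.content}\n")
```

Because the model samples its output, even identical prompts can yield different answers on repeated runs, which is one more reason the review processes discussed below remain essential.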
Elements of an effective generative AI policy
In many ways, a generative AI policy extends and clarifies existing policies. An effective generative AI policy should:
- Educate engineers about generative AI opportunities and risks.
- Raise awareness of company policies.
- Avoid constraining innovation.
- Ensure the responsible use of generative AI.
- Reduce the risks associated with this technology.
Engineering management should consider many elements when developing a generative AI policy for their organization. These include responsibility, peer review, intellectual property, transparency, terms of use and compliance.
Responsibility for work products
Engineers are always responsible for their work products’ quality, accuracy and completeness. The emergence of generative AI as a new, additional tool that supports engineering work does not change this fundamental responsibility.
Generative AI is an additional tool. It does not replace research or analysis as previously conducted.
Generative AI sometimes creates erroneous and irrelevant statements. Engineers must be vigilant in removing these from their work products.
Generative AI routinely incorporates copyrighted material owned by others in its results. Engineers must remove this material from work products or take steps to license it.
“The dog ate my homework” has never been an acceptable excuse. Using generative AI does not introduce a new excuse for inadequate work.
Peer review
Generative AI should not replace a company’s established engineering review and governance processes.
The introduction of generative AI, given its limitations, adds the following topics to the company’s review processes to ensure recommendations are defensible:
- Appropriateness of using generative AI for the problem space.
- Explainability and verifiability of results.
- Relevance and adequacy of training data.
- Sufficiency of model sophistication.
- Inadvertent use of copyrighted material.
Using generative AI is not a reason to avoid or circumvent the company’s review processes.
Intellectual property and trade secrets
Engineers are always responsible for safeguarding the company’s intellectual property and trade secrets. Generative AI introduces a new risk of revealing, and effectively losing control of, these assets: most generative AI services log all prompts and conversations while accepting no responsibility for the confidentiality of those inputs.
A company’s intellectual property and trade secrets policy should not change with the emergence of generative AI.
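One practical safeguard consistent with this policy is to screen prompts for obvious secrets before they leave the company network. The sketch below is illustrative rather than exhaustive: the check_prompt helper and its regular expressions are hypothetical examples, and a real deployment would rely on a maintained secret-scanning tool with a vetted pattern list.

```python
# Illustrative pre-submission gate: block prompts that appear to contain
# secrets before they are sent to an external generative AI service.
# The patterns below are hypothetical examples, not a complete list.
import re

SECRET_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic API key": re.compile(r"\b(?:api[_-]?key|token)\s*[:=]\s*\S+", re.IGNORECASE),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any secret patterns found in the prompt."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarize this config: api_key = sk-123456789"
findings = check_prompt(prompt)
if findings:
    raise ValueError(f"Prompt blocked; possible secrets detected: {findings}")
```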
Transparency about the use of generative AI
Engineers should be transparent with their peers and supervisors about how their work products are developed, including their use of generative AI.
Acceptable terms of use
Generative AI use should be limited to business-related purposes and must be aligned with the company’s ethical standards.
Compliance and legal considerations
The use of generative AI should maintain compliance with relevant industry regulations and legal obligations.
What should be avoided in a generative AI policy?
Engineering management should avoid the following potential issues when formulating their generative AI policy, because they will discourage use and constrain innovation.
Restricting access to generative AI
Companies should encourage, rather than restrict, access to generative AI services, subject to the company’s policy.
If engineers want to use highly proprietary corporate data with generative AI, operating a private, self-hosted generative AI environment is advisable. Using the generative AI environment of a cloud service provider (CSP) is unlikely to provide sufficient intellectual property protection.
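For teams that take the self-hosted route, open-weights models can run entirely on company hardware, so prompts and proprietary data never leave the network. Here is a minimal sketch assuming the Hugging Face transformers library; the model shown is a small illustrative placeholder, and a production setup would add an approved model, access controls and logging.

```python
# Minimal sketch of a self-hosted text-generation model, assuming the
# Hugging Face transformers library. The model name is an illustrative
# placeholder; a real deployment would use a company-approved model.
from transformers import pipeline

# Weights are downloaded once and then run locally; prompts never leave the machine.
generator = pipeline("text-generation", model="gpt2")

prompt = "Draft a checklist for reviewing a pump installation:"
result = generator(prompt, max_new_tokens=100, do_sample=True)
print(result[0]["generated_text"])
```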
Onerous review processes
Companies should avoid introducing onerous review and audit processes when using generative AI. Such processes slow work and stifle innovation.
Expecting a static generative AI policy
Generative AI is a rapidly evolving technology, given the enormous amount of attention and investment it is receiving.
As a result, the generative AI policy should be modified and improved as the landscape changes.
—
With a clear generative AI policy, engineers can reap the benefits of these exciting new technologies while minimizing their risk. How’s that for a prompt?