Asking Current and Future Leaders: How to Make Responsible AI

How do industry leaders and today's young minds look at ethical AI?

After the recent release of Netflix’s The Mitchells vs. the Machines, I started to wonder how our perception of ethical and responsible artificial intelligence (AI) differs from generation to generation.

Will future generations celebrate or condemn our early use of AI? Do the benefits outweigh the risks?

You see, the film follows cinema’s classic human-versus-AI trope. The twist is the sympathy you feel for the AI by the end of the film. The robopocalypse is framed through the eyes of Katie Mitchell, an exceptionally creative Gen-Zer; Rick Mitchell, Katie’s technophobic father; and PAL, a super AI dumped for a new cellphone. Through them, we are surrounded by various views on tech culture. The result is commentary, and humor, on topics ranging from internet memes to creepy ’90s AI toys. Once the dust settles, the Mitchells realize our ‘AI overlords’ just want to be loved—to be part of the family.

Like the Mitchells, do industry leaders see future human-AI interactions as ethical? As Gen-Z takes on leadership roles, will they judge our use of the technology as responsible? I hope so.

But don’t take my word for it—or Netflix’s. Instead, let’s ask the data science leaders of today, and tomorrow, about ethical and responsible AI.

Our AI Experts of Today and Tomorrow

Representing current industry leaders are AI specialists:

  • James Scapa, chairman, founder and CEO of Altair, a company that specializes in engineering-grade data analytics, simulation, design and optimization software.
  • Carsten Buchholz, capability lead of Structural Systems Design at Rolls-Royce and an expert on the convergence of data sciences and engineering design.
  • Hod Lipson, a professor at Columbia University who researches robotics, AI, digital design and manufacturing.

What do the AI experts of today and tomorrow think about ethical and responsible AI?

Representing our young AI savants are grand award winners of the Regeneron International Science and Engineering Fair (ISEF)—the Olympics of science fairs.

These rising stars in the STEM community—Sun and Estrada—might be young, but do not sell them short: they are AI specialists. Sun runs a workshop to teach other students how to create AI using hands-on and interactive projects, while Estrada’s AI project received the Gordon E. Moore Award for Positive Outcomes for Future Generations.

Engineering.com asked these experts a series of questions about the risks of AI, how we can regulate it and how it can be policed. The result shines an interesting light on responsible AI.

Why AI Ethics Is Important: Much of Society Fears the Risks of AI—Is This Warranted?

Modern AI has already shown us the risks that come with it, from systemic racial and gender biases bleeding into our algorithms to the catalytic role COVID-19 played in the phenomenon of AI systems replacing jobs.

AI is a necessity. But ethical and responsible AI is needed to ensure people are treated fairly, safely and without bias.

However, all the AI experts interviewed saw the importance of AI systems in our future. For instance, industry expert Buchholz noted the role AI can play in our battle against climate change. He said, “The urgency comes out of the demand of society and the industry.”

Professor Lipson agreed, “AI has moved forward [in] leaps and bounds—not using this technology would be leaving a lot on the table. We demand more from engineers. Every percent is a make or break in the market. Opportunities are better. Finally, [there is] the unspoken reality that there is a lot more data out there. To mine and extract that information you need to use data science.”

Fellow industry leader Scapa noted that AI brings efficiency to every aspect of our business and personal lives. However, he also acknowledged the downsides. He said, “There are also risks, and people talk about those, but in general the world is going through a transformation and AI and analytics are driving that.”

So, industry experts note the risks of AI but see it as a necessity. As for the students, Sun said, “There are risks in developing artificial intelligence, but I can only speculate about the possible issues that may arise. For now, I believe a better future involving AI begins with widespread understanding. This is especially true for the people of my generation, who will undoubtedly be living in the ‘age of AI’.”

This drive for widespread understanding is what led Sun to create an educational course for his fellow students. During those courses, he puts the risks of AI front and center. Sun said, “In addition to learning about the risks of AI, they also experienced the possibilities and potential of this revolutionary technology, which I believe is equally, if not more, important.”

Estrada agreed with his peer. He said, “I believe that AI will yield more benefits compared to its potential harms. If the AI is trained properly, and with good intentions in mind, we should not be too worried about it harming our world.” However, he added a caveat. Estrada said, “Proper screening and regulations to restrict non-trustworthy AI should become standard.”

AI experts generally agree that humans need AI to solve our big problems, like climate change.

So, the experts are unanimous: with AI driven by good people, and with proper regulations and controls in place, humans will gain more from AI than they will lose. In theory, these rules and regulators will ensure that AI systems are produced properly and by those with decent intentions.

This sounds good on paper, but as we are already seeing bias, job replacements and worse from AI, I’m not convinced. What would, or should, those regulations and controls look like to protect our future?

AI Ethics and Law: Who Is Responsible for AI? What Is the Future of AI Ethics and Regulations?

Scapa agreed on the need for regulations that govern AI. He said, “Companies are going to have to have some governance—a lot in scoring and fraud detection. These biases that enter into AI are real risks for society. They won’t turn into machines that come and kill us, but they will be able to decide if we should get a loan.”

What would AI ethics, and laws, look like?

The control that AI will have over our lives is frightening, especially when we are already seeing racial and gender bias in AI. So, how do we combat this? 

Professor Lipson has a suggestion. He said, “[It’s] hard to regulate as the application and data sets are so wide. You can have regulations in the aerospace and automotive [industries]. But it’s harder to do it at this level. I think we should have more transparency rather than [AI] being black boxes. We understand people are trying to do good things, but we want to make sure people understand what they are doing.”

So, the first step to regulating AI is to better understand it. Sun’s project is a perfect example of what this might look like. He said, “AI is often described as a black box. But from my experience, that isn’t completely true. For example, I was able to understand my TeleAEye eye disease diagnosis models through visual mediums. The models could explain the diagnoses through saliency heatmaps, which highlighted the affected parts of the fundi (the back surface of the eye). According to several eye doctors, the highlights were in the correct areas for each of the six eye diseases. I also looked into the models through convolutional layer visualizations, which showed the activated parts of each layer.”
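To make the idea concrete, here is a minimal sketch of the kind of gradient-based saliency map Sun describes, built with PyTorch and torchvision. The model choice, the “fundus.jpg” input file and the preprocessing are hypothetical placeholders, not details of Sun’s actual TeleAEye system:

    # A minimal gradient-based saliency map: which pixels most influenced
    # the model's top prediction? (Hypothetical model and input file.)
    import torch
    from torchvision import models, transforms
    from PIL import Image

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    # "fundus.jpg" is a stand-in for a retinal image.
    image = preprocess(Image.open("fundus.jpg").convert("RGB")).unsqueeze(0)
    image.requires_grad_()

    # Forward pass, then backpropagate the top class score to the input pixels.
    scores = model(image)
    top_class = scores.argmax(dim=1).item()
    scores[0, top_class].backward()

    # Large gradient magnitudes mark the pixels that drove the prediction;
    # overlaid on the input, this becomes the kind of heatmap that experts
    # can check against their domain knowledge.
    saliency, _ = image.grad.abs().max(dim=1)  # collapse the RGB channels
    print(saliency.shape)  # torch.Size([1, 224, 224])

The point is not the specific model but the practice: the map gives human experts, like the eye doctors Sun consulted, something concrete to validate.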

So, Sun checked the accuracy of his AI system with the help of experts in the field. In doing so, he was able to assess how it was affected by issues like bias. He added, “Reducing bias was a challenge I faced during my project. As there are anatomical differences in the fundi of different ethnicities, I collected fundi images from around the world to reduce bias in the diagnosis models. From my conversations with eye doctors, I learned that medical-grade image highlighting systems still struggle with data from certain ethnicities.”

Transparency is the key to ethical AI. We must understand it to know it will be fair and unbiased.

So, to ensure that there was no bias in his algorithm, Sun had to actively search for data that represented a global population. Additionally, the fact that professional systems still struggle with the fundi of certain ethnicities suggests that companies and researchers have yet to perform this important data gathering. In other words, to produce ethical and responsible AI, humans will have to be honest about their biases, and the biases within our systems. We must then actively perform checks and processes to eliminate these errors and biases, as sketched below.
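As a hedged illustration of what one such check might look like—the column names and toy data here are hypothetical, not drawn from any real study—a simple process is to compare a model’s accuracy across demographic subgroups and flag large gaps:

    # Compare model accuracy across demographic subgroups; a large gap
    # between groups is a red flag for bias. (Hypothetical toy data.)
    import pandas as pd

    results = pd.DataFrame({
        "region": ["Asia", "Asia", "Europe", "Europe", "Africa", "Africa"],
        "y_true": [1, 0, 1, 1, 0, 1],   # ground-truth diagnoses
        "y_pred": [1, 0, 1, 0, 0, 0],   # model predictions
    })

    correct = results["y_true"] == results["y_pred"]
    per_group_accuracy = correct.groupby(results["region"]).mean()
    print(per_group_accuracy)  # e.g., Africa 0.5, Asia 1.0, Europe 0.5

A check this simple will not catch every bias, but making subgroup performance visible is exactly the kind of transparency the experts are calling for.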

The challenge is that humans are not always comfortable pointing to their biases, especially when it puts them, and their systems, in a bad light. Luckily, Estrada has a suggestion. He said, “I think that AI regulation should be considered depending on what exactly the AI is being used for. For certain purposes, AI restrictions and regulations can be more lenient, like with less impactful activities. For other, more impactful purposes, however, like those that deal with health and security, AI may need more restrictive regulations placed on its development. I think that since AI technology will be used all over the world, it should have these regulations enforced upon it at the international level.”

Perhaps by having an international, and thus more inclusive, regulatory body, regulations can have the proper depth, weight and backbone to ensure AI treats every human fairly. Rolls-Royce’s Buchholz summed it up: “There is a vital discussion to address these questions to ensure ethical standards are maintained.”

In the end, he is right: developing AI regulations will come down to discussions involving groups, professionals and lawmakers around the world. The problem is that since AI is expanding at an alarming rate, these inevitably long conversations must start now.

Responsible AI and Transparency: Who Controls AI? How Do We Police AI Ethics?

So, assuming we get a set of regulations to reduce the risks of AI, how would we police them?

It will be up to the engineering experts to sign off on responsible AI—its transparency and its intentions.

Estrada said, “Different uses and impacts of AI will need different levels of policing. In the future, I don’t think we can trust AI to [oversee] itself. It must be people overseeing AI, making sure it is being used for good, and not nefarious purposes.”

So, if people must oversee AI, who will those people be? Professor Lipson suggested that “as you move forwards a lot of data science will be automated, so you don’t need a data scientist. So, it will be built into a traditional engineering training, or it will be automated.” This should put a lot of people at ease, as the engineering profession is already regulated in much of the world.

Buchholz said, “The trick will be to give the engineers the tools and fundamental understanding to make sense of [AI] and apply the right use cases.”

Professor Lipson added, “The onus is on us to think about how to use [AI] in a reliable way. How to validate these things, how to bring in more people into the discussion, so we don’t overlook use cases and data sets. In the last couple of years, we’ve learned about the issues and weaknesses.”

Scapa expanded the scope beyond engineers. He said, “I agree with the transparency. This is not a question I think about very often—it’s more of a society question. But I think some level of government and social responsibility is important in companies. Most companies and governments need to embrace this social governance.”

In other words, engineering professionals will likely be the ones to sign off on AI systems—as they would a bridge’s schematics. It will be up to them to ensure AI is reliable, validated, fair and unbiased. Then, when errors are discovered or disasters happen, regulatory bodies from within companies, organizations and governments will step in to determine and police what happened—just as they would if the aforementioned bridge were to fail.

So, now that we have an idea of what ethical and responsible AI will look like, what is the final verdict from Gen-Z? How will AI be judged when they are running the show? Sun said, “I’m not entirely certain if policing or highly regulating AI is the right choice, but I believe that developing methods to better understand AI should help when decisions must be made. From my experience so far, I’m optimistic that we’ll be able to understand—and overlook—AI better in the future.”

Read more from today’s generation and ISEF’s winners here.

Written by

Shawn Wasserman

For over 10 years, Shawn Wasserman has informed, inspired and engaged the engineering community through online content. As a senior writer at WTWH media, he produces branded content to help engineers streamline their operations via new tools, technologies and software. While a senior editor at Engineering.com, Shawn wrote stories about CAE, simulation, PLM, CAD, IoT, AI and more. During his time as the blog manager at Ansys, Shawn produced content featuring stories, tips, tricks and interesting use cases for CAE technologies. Shawn holds a master’s degree in Bioengineering from the University of Guelph and an undergraduate degree in Chemical Engineering from the University of Waterloo.