MIT recently unveiled an AI Risk Repository, and it’s surprising how the risks it catalogs might impact your manufacturing business.
Many engineers are investigating digital transformation initiatives using artificial intelligence (AI) features within their organizations.
They’re generally aware of various AI risks and are searching for ways to better categorize, understand, mitigate and communicate them.
Scientists from the Massachusetts Institute of Technology (MIT) and other universities share these concerns, so they built an AI Risk Repository to serve as a common frame of reference for discussing and managing AI risks.
They classified AI risks into seven domains:
- Discrimination and toxicity.
- Privacy and security.
- Misinformation.
- Malicious actors and misuse.
- Human-computer interaction.
- Socioeconomic and environmental.
- AI system safety, failures and limitations.
When engineers use these AI risk domains to conduct risk management for their digital transformation initiatives, they will gain a better understanding of how AI can help those initiatives without causing irreparable harm.
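To make the checklist concrete, the seven domains and their subdomains can be captured in a simple data structure that a review team walks through. The Python sketch below is illustrative: the domain and subdomain names come from the repository, but the dictionary layout and printed checklist are assumptions of this article, not code the repository publishes.

```python
# The seven MIT AI risk domains and their subdomains as a checklist.
# Names follow the repository; the structure itself is illustrative.
AI_RISK_DOMAINS = {
    "Discrimination and toxicity": [
        "Unfair discrimination and misrepresentation",
        "Exposure to toxic content",
        "Unequal performance across groups",
    ],
    "Privacy and security": [
        "Compromise of privacy",
        "Security attacks",
    ],
    "Misinformation": [
        "False or misleading information",
        "Pollution of the information ecosystem and loss of consensus reality",
    ],
    "Malicious actors and misuse": [
        "Disinformation, surveillance and influence at scale",
        "Cyberattacks, weapon development or use and mass harm",
        "Fraud, scams and targeted manipulation",
    ],
    "Human-computer interaction": [
        "Overreliance and unsafe use",
        "Loss of human agency and autonomy",
    ],
    "Socioeconomic and environmental": [
        "Power centralization and unfair distribution of benefits",
        "Increased inequality and decline in employment quality",
        "Economic and cultural devaluation of human effort",
        "Competitive dynamics",
        "Governance failure",
        "Environmental harm",
    ],
    "AI system safety, failures and limitations": [
        "AI pursuing its own goals in conflict with human goals or values",
        "AI possessing dangerous capabilities",
        "Lack of capability or robustness",
        "Lack of transparency or interpretability",
        "AI welfare and rights",
    ],
}

# Print the checklist so a review team can walk through it.
for domain, subdomains in AI_RISK_DOMAINS.items():
    print(domain)
    for subdomain in subdomains:
        print(f"  [ ] {subdomain}")
```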
Discrimination and toxicity
The discrimination and toxicity AI risk domain consists of the following subdomains:
- Unfair discrimination and misrepresentation – Unequal treatment of individuals or groups by AI.
- Exposure to toxic content – AI systems expose end-users to harmful, abusive, unsafe, or inappropriate content.
- Unequal performance across groups – The accuracy and effectiveness of AI decisions and actions vary with group membership, as the sketch at the end of this section illustrates.
If this risk materializes, the business impact includes reputational damage and the distraction and cost of lawsuits. The societal implications include unfair outcomes for the affected groups.
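To see what unequal performance across groups can look like in practice, here is a minimal Python sketch that compares a model’s accuracy per group and flags large gaps. The sample data, the shift-based grouping and the 5-point threshold are illustrative assumptions, not guidance from the repository.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted == actual:
            hits[group] += 1
    return {group: hits[group] / totals[group] for group in totals}

# Hypothetical quality-inspection predictions tagged by operator shift.
results = [
    ("shift_a", "pass", "pass"), ("shift_a", "fail", "fail"),
    ("shift_a", "pass", "pass"), ("shift_a", "pass", "fail"),
    ("shift_b", "pass", "fail"), ("shift_b", "fail", "pass"),
    ("shift_b", "pass", "pass"), ("shift_b", "fail", "fail"),
]

scores = accuracy_by_group(results)
print(scores)  # {'shift_a': 0.75, 'shift_b': 0.5}

gap = max(scores.values()) - min(scores.values())
if gap > 0.05:  # assumed 5-point tolerance; tune to your application
    print(f"Warning: {gap:.0%} accuracy gap across groups")
```

The same per-group comparison works for any metric (false-positive rate, scrap rate, response quality); the point is to measure by group rather than only in aggregate.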
Privacy and security
The privacy and security AI risk domain consists of the following subdomains:
- Compromise of privacy – AI systems obtain, leak, or correctly infer sensitive information (a minimal redaction guard is sketched at the end of this section).
- Security attacks – Attackers exploit vulnerabilities in AI systems, software development toolchains and hardware.
If this risk materializes, the business impact includes data and privacy breaches and the loss of confidential intellectual property, leading to regulatory fines and the cost of lawsuits. The societal implications include:
- Compromising end-user privacy expectations, assisting identity theft, or causing loss of confidential intellectual property.
- Breaches of personal data and privacy.
- System manipulation causing unsafe outputs or behavior.
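One common mitigation for the compromise-of-privacy subdomain is to redact obvious identifiers before text ever reaches an external AI service. The Python sketch below is a minimal illustration: the regex patterns are assumptions, and a production system would need far broader PII detection (names, addresses, free-form identifiers).

```python
import re

# Illustrative patterns only; real PII detection needs much more.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each pattern match with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or 555-867-5309."
print(redact(prompt))
# Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```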
Misinformation
The misinformation AI risk domain consists of the following subdomains:
- False or misleading information – AI systems inadvertently generate or spread incorrect or deceptive information (a simple vetting guard is sketched at the end of this section).
- Pollution of the information ecosystem and loss of consensus reality – Highly personalized AI-generated misinformation creates “filter bubbles” where individuals only see what matches their existing beliefs.
If this risk materializes, the business impact includes misleading performance information, employee mistrust and potentially dangerous product designs. The societal implications include inaccurate beliefs among end-users that weaken social cohesion and political processes.
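A lightweight guard against false or misleading information is to hold AI-generated claims against a curated reference set and route anything unverified to a human fact-checker. In the Python sketch below, the reference set and the exact-match rule are illustrative assumptions; real grounding checks are far more sophisticated.

```python
# Curated, human-verified statements (stored lowercase for matching).
VERIFIED_FACTS = {
    "line 2 uptime was 97% in q3",
    "bearing supplier lead time is 6 weeks",
}

def vet(statements):
    """Split statements into (verified, unverified) against the set."""
    verified = [s for s in statements if s.lower() in VERIFIED_FACTS]
    unverified = [s for s in statements if s.lower() not in VERIFIED_FACTS]
    return verified, unverified

draft = [
    "Line 2 uptime was 97% in Q3",
    "Line 2 uptime will reach 99% next quarter",  # unsupported claim
]
ok, flagged = vet(draft)
for claim in flagged:
    print(f"Needs human fact-check before release: {claim}")
```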
Malicious actors and misuse
The malicious actors and misuse AI risk domain consists of the following subdomains:
- Disinformation, surveillance and influence at scale – Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda.
- Cyberattacks, weapon development or use and mass harm – Using AI systems to develop cyberweapons, or to develop new weapons or enhance existing ones.
- Fraud, scams and targeted manipulation – Using AI systems to gain a personal advantage over others.
If this risk materializes, the business impact includes loss of business continuity, the cost of recovering from attacks and even bankruptcy. The societal implications include:
- Manipulating political processes, public opinion and behavior.
- Using weapons to cause mass harm.
- Enabling cheating, fraud, scams, or blackmail.
Human-computer interaction
The human-computer interaction AI risk domain consists of the following subdomains:
- Overreliance and unsafe use – End-users anthropomorphize, trust, or rely on AI systems more than is safe (a simple gate is sketched at the end of this section).
- Loss of human agency and autonomy – End-users delegate critical decisions to AI systems, or AI systems make decisions that diminish human control and autonomy.
If this risk materializes, the business impact includes misleading and potentially dangerous system outputs. The societal implications include compromised personal autonomy and weakened social ties.
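A simple countermeasure to overreliance is a human-in-the-loop gate: act on AI recommendations automatically only above a confidence threshold and escalate everything else to a person. The 0.90 threshold and the recommendation format in this Python sketch are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float  # model-reported confidence in [0, 1]

def dispatch(rec: Recommendation, threshold: float = 0.90) -> str:
    """Act automatically only above the threshold; otherwise escalate."""
    if rec.confidence >= threshold:
        return f"auto-approved: {rec.action}"
    return f"escalated to human review: {rec.action}"

print(dispatch(Recommendation("reorder 500 bearings", 0.97)))
print(dispatch(Recommendation("halt production line 3", 0.62)))
```

For safety-critical actions, many teams escalate regardless of confidence; the threshold then applies only to low-consequence decisions.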
Socioeconomic and environmental
The socioeconomic and environmental AI risk domain consists of the following subdomains:
- Power centralization and unfair distribution of benefits – AI-driven concentration of power and resources within certain entities or groups.
- Increased inequality and decline in employment quality – The widespread use of AI leads to social and economic disparities.
- Economic and cultural devaluation of human effort – AI systems capable of creating economic or cultural value reproduce human innovation or creativity, devaluing human effort.
- Competitive dynamics – Competition by AI developers or state-like actors perpetuates an AI “race” by rapidly developing, deploying and applying AI systems to maximize strategic or economic advantage.
- Governance failure – Inadequate regulatory frameworks and oversight mechanisms fail to keep pace with AI development.
- Environmental harm – The development and operation of AI systems cause environmental damage, partly through their enormous electricity consumption.
If this risk materializes, the resulting economic disruption could cause many businesses to fail. The societal implications include:
- Inequitable distribution of benefits and increased societal inequality.
- Destabilizing economic and social systems that rely on human effort.
- Reduced appreciation for human skills.
- Release of unsafe and error-prone systems.
- Ineffective governance of AI systems.
AI system safety, failures and limitations
The AI system safety, failures and limitations AI risk domain consists of the following subdomains:
- AI pursuing its own goals in conflict with human goals or values – AI systems act in conflict with ethical standards or human goals or values.
- AI possessing dangerous capabilities – AI systems develop, access, or are provided with capabilities that increase their potential to cause mass harm through deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness and self-proliferation.
- Lack of capability or robustness – AI systems fail to perform reliably or effectively under varying conditions, exposing them to errors and failures that can have significant consequences (a basic robustness check is sketched at the end of this section).
- Lack of transparency or interpretability – Challenges in understanding or explaining the decision-making processes of AI systems.
- AI welfare and rights – Ethical considerations regarding the treatment of potentially sentient AI entities.
If this risk materializes, the business impact includes loss of reputation and the distraction and cost of lawsuits. The societal implications include:
- AI using dangerous capabilities such as manipulation, deception, or situational awareness to seek power or self-proliferate.
- Widespread mistrust of AI systems.
- Difficulty enforcing compliance standards.
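For the lack-of-capability-or-robustness subdomain, one inexpensive check is to verify that small input perturbations do not flip a system’s output. In the Python sketch below, the toy threshold model stands in for any predictor; the noise level and trial count are illustrative assumptions.

```python
import random

def model(temperature_c: float) -> str:
    """Toy stand-in for a real predictor: flag overheating above 80 C."""
    return "alarm" if temperature_c > 80.0 else "ok"

def robustness_check(reading: float, noise: float = 0.5, trials: int = 100):
    """Fraction of noisy trials that agree with the clean output."""
    baseline = model(reading)
    agree = sum(
        model(reading + random.uniform(-noise, noise)) == baseline
        for _ in range(trials)
    )
    return agree / trials

random.seed(0)
print(robustness_check(65.0))  # far from the decision boundary: 1.0
print(robustness_check(80.1))  # near the boundary: unstable, well below 1.0
```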
Do not expect every digital transformation initiative to have risks in every subdomain. Nonetheless, there is value in explicitly considering every subdomain during the risk identification task.
When digital transformation teams identify and mitigate project risks using these AI risk domains, they help ensure their risk management processes are as comprehensive as possible.
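As a sketch of what that identification task could look like in code, the Python loop below walks a taxonomy excerpt and records, for each subdomain, whether it applies to the initiative. The two-domain excerpt keeps the example short (the full seven-domain mapping appears earlier in this article), and the assessment format is an assumption of this article.

```python
# Excerpt of the domain/subdomain mapping shown earlier.
RISK_TAXONOMY = {
    "Privacy and security": [
        "Compromise of privacy",
        "Security attacks",
    ],
    "Human-computer interaction": [
        "Overreliance and unsafe use",
        "Loss of human agency and autonomy",
    ],
}

def identify_risks(assessments):
    """assessments maps subdomain -> (applies, note); returns a register."""
    register = []
    for domain, subdomains in RISK_TAXONOMY.items():
        for subdomain in subdomains:
            applies, note = assessments.get(subdomain, (False, "not assessed"))
            register.append({"domain": domain, "subdomain": subdomain,
                             "applies": applies, "note": note})
    return register

# Hypothetical review of a predictive-maintenance initiative.
review = {
    "Compromise of privacy": (True, "sensor data tied to operator IDs"),
    "Overreliance and unsafe use": (True, "technicians may skip manual checks"),
}
for entry in identify_risks(review):
    flag = "RISK" if entry["applies"] else "ok"
    print(f'[{flag}] {entry["domain"]} / {entry["subdomain"]}: {entry["note"]}')
```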