Managing AI risks in digital transformation

MIT recently unveiled an AI Risk Repository, and it may surprise you how the risks it catalogues could impact your manufacturing business.

Many engineers are investigating digital transformation initiatives that use artificial intelligence (AI) features within their organizations.

They’re generally aware of various AI risks and are searching for ways to better categorize, understand, mitigate and communicate them.

Scientists from the Massachusetts Institute of Technology (MIT) and other universities share this concern, so they built the AI Risk Repository to serve as a common frame of reference for discussing and managing AI risks.


They classified AI risks into seven domains:

  1. Discrimination and toxicity.
  2. Privacy and security.
  3. Misinformation.
  4. Malicious actors and misuse.
  5. Human-computer interaction.
  6. Socioeconomic and environmental.
  7. AI system safety, failures and limitations.

When engineers use these AI risk domains to conduct risk management for their digital transformation initiatives, they will gain a better understanding of how AI can help those initiatives without causing irreparable harm.
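
One practical way to put these domains to work is to treat them as a standing checklist during risk identification. The Python sketch below is a minimal illustration of that idea; the domain names come from the repository as summarized above, while the function name and prompts are hypothetical.

```python
# A minimal sketch: encode the seven MIT AI risk domains as a checklist
# and generate one risk-identification prompt per domain. The domain
# names come from the repository; everything else is illustrative.

AI_RISK_DOMAINS = [
    "Discrimination and toxicity",
    "Privacy and security",
    "Misinformation",
    "Malicious actors and misuse",
    "Human-computer interaction",
    "Socioeconomic and environmental",
    "AI system safety, failures and limitations",
]

def risk_identification_prompts(initiative: str) -> list[str]:
    """Return one workshop prompt per AI risk domain."""
    return [
        f"{initiative}: what could go wrong in the '{domain}' domain?"
        for domain in AI_RISK_DOMAINS
    ]

# Example: walk a hypothetical initiative through all seven domains.
for prompt in risk_identification_prompts("Predictive-maintenance rollout"):
    print(prompt)
```

Teams can extend the same structure with the subdomains described in the sections that follow.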

Discrimination and toxicity

The discrimination and toxicity AI risk domain consists of the following subdomains:

  1. Unfair discrimination and misrepresentation – Unequal treatment of individuals or groups by AI.
  2. Exposure to toxic content – AI systems expose end-users to harmful, abusive, unsafe, or inappropriate content.
  3. Unequal performance across groups – The accuracy and effectiveness of AI decisions and actions vary depending on group membership.

If this risk becomes a reality, the business impact includes loss of reputation and the distraction and cost of lawsuits. The societal implications include unfair outcomes for the groups discriminated against.

Privacy and security

The privacy and security AI risk domain consists of the following subdomains:

  1. Compromise of privacy – AI systems obtain, leak, or correctly infer sensitive information.
  2. Security attacks – AI systems exploit vulnerabilities in systems, software development toolchains and hardware.

If this risk becomes a reality, the business impact includes data and privacy breaches and loss of confidential intellectual property, leading to regulatory fines and the cost of lawsuits. The societal implications include:

  • Compromising end-user privacy expectations, assisting identity theft, or causing loss of confidential intellectual property.
  • Breaches of personal data and privacy.
  • System manipulation causing unsafe outputs or behavior.

Misinformation

The misinformation AI risk domain consists of the following subdomains:

  1. False or misleading information – AI systems inadvertently generate or spread incorrect or deceptive information.
  2. Pollution of the information ecosystem and loss of consensus reality – Highly personalized AI-generated misinformation creates “filter bubbles” where individuals only see what matches their existing beliefs.

If this risk turns into reality, the business impact includes misleading performance information, employee mistrust and potentially dangerous product designs. The societal implications include inaccurate beliefs in end-users that weaken social cohesion and political processes.

Malicious actors and misuse

The malicious actors and misuse AI risk domain consists of the following subdomains:

  1. Disinformation, surveillance and influence at scale – Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda.
  2. Cyberattacks, weapon development or use and mass harm – Using AI systems to develop cyberweapons and to create new weapons or enhance existing ones.
  3. Fraud, scams and targeted manipulation – Using AI systems to gain a personal advantage over others.

If this risk becomes a reality, the business impact includes loss of business continuity, the cost of recovering from attacks and, in extreme cases, bankruptcy. The societal implications include:

  • Manipulating political processes, public opinion and behavior.
  • Using weapons to cause mass harm.
  • Enabling cheating, fraud, scams, or blackmail.

Human-computer interaction

The human-computer interaction AI risk domain consists of the following subdomains:

  1. Overreliance and unsafe use – End-users anthropomorphize, trust, or rely on AI systems to an unsafe degree.
  2. Loss of human agency and autonomy – End-users delegate critical decisions to AI systems, or AI systems make decisions that diminish human control and autonomy.

If this risk turns into reality, the business impact includes misleading and potentially dangerous system outputs. The societal implications include compromising personal autonomy and weakening social ties.

Socioeconomic and environmental

The socioeconomic and environmental AI risk domain consists of the following subdomains:

  1. Power centralization and unfair distribution of benefits – AI-driven concentration of power and resources within certain entities or groups.
  2. Increased inequality and decline in employment quality – The widespread use of AI leads to social and economic disparities.
  3. Economic and cultural devaluation of human effort – AI systems capable of creating economic or cultural value reproduce human innovation or creativity, devaluing human effort.
  4. Competitive dynamics – AI developers or state-like actors perpetuate an AI “race” by rapidly developing, deploying and applying AI systems to maximize strategic or economic advantage.
  5. Governance failure – Inadequate regulatory frameworks and oversight mechanisms fail to keep pace with AI development.
  6. Environmental harm – The development and operation of AI systems cause environmental damage, partially through their enormous electricity consumption.

If this risk becomes a reality, the business impact is stark: many businesses will likely fail as the broader economy falters. The societal implications include:

  • Inequitable distribution of benefits and increased societal inequality.
  • Destabilizing economic and social systems that rely on human effort.
  • Reduced appreciation for human skills.
  • Release of unsafe and error-prone systems.
  • Ineffective governance of AI systems.

AI system safety, failures and limitations

The AI system safety, failures and limitations risk domain consists of the following subdomains:

  1. AI pursuing its own goals in conflict with human goals or values – AI systems act in conflict with ethical standards or human goals or values.
  2. AI possessing dangerous capabilities – AI systems develop, access, or are provided with capabilities that increase their potential to cause mass harm through deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness and self-proliferation.
  3. Lack of capability or robustness – AI systems fail to perform reliably or effectively under varying conditions, exposing them to errors and failures that can have significant consequences.
  4. Lack of transparency or interpretability – Challenges in understanding or explaining the decision-making processes of AI systems.
  5. AI welfare and rights – Ethical considerations regarding the treatment of potentially sentient AI entities.

If this risk becomes a reality, the business impact includes loss of reputation and the distraction and cost of lawsuits. The societal implications include:

  • AI systems using dangerous capabilities such as manipulation, deception, or situational awareness to seek power or self-proliferate.
  • Mistrust of AI systems.
  • Difficulty enforcing compliance standards.

Do not expect every digital transformation initiative to have risks in every subdomain. Nonetheless, there is value in explicitly considering every subdomain during the risk identification task.
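
One lightweight way to make that subdomain-by-subdomain consideration explicit is to record each identified risk as a structured register entry. The following Python sketch assumes one entry per applicable subdomain; the field names and the likelihood-times-impact scoring scale are illustrative assumptions, not part of the MIT repository.

```python
# A minimal sketch of a risk-register entry, assuming the team records
# one entry per applicable subdomain. Field names and the 1-5 scoring
# scale are illustrative assumptions, not part of the MIT repository.

from dataclasses import dataclass

@dataclass
class AIRiskEntry:
    domain: str        # one of the seven AI risk domains
    subdomain: str     # e.g., "Compromise of privacy"
    description: str   # how the risk could materialize in this initiative
    likelihood: int    # 1 (rare) to 5 (almost certain)
    impact: int        # 1 (negligible) to 5 (severe)
    mitigation: str    # planned response

    @property
    def score(self) -> int:
        """Simple likelihood-times-impact product for triage."""
        return self.likelihood * self.impact

# Hypothetical example entry for a shop-floor chatbot initiative.
entry = AIRiskEntry(
    domain="Privacy and security",
    subdomain="Compromise of privacy",
    description="Chatbot may infer and log sensitive supplier pricing.",
    likelihood=3,
    impact=4,
    mitigation="Mask sensitive fields before prompts reach the model.",
)
print(entry.score)  # 12 -> escalate for review
```

Ranking entries by score helps teams decide which subdomain risks deserve mitigation effort first.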

When digital transformation teams identify and mitigate project risks using these AI risk domains, they will ensure their risk management processes are as comprehensive as possible.

Written by

Yogi Schulz

Yogi Schulz has over 40 years of Information Technology experience in various industries. He writes for ITWorldCanada and other trade publications. Yogi works extensively in the petroleum industry to select and implement financial, production revenue accounting, land & contracts, and geotechnical systems. He manages projects that arise from changes in business requirements, from the need to leverage technology opportunities and from mergers. His specialties include IT strategy, web strategy, and systems project management.