Exploring five more of the most common AI risks and how to mitigate them.
AI functionality is increasingly a component of digital transformation projects because it adds business value. However, engineers will encounter multiple AI risks in these projects, and they can use the risk topics below as a helpful starter list for their project risk register.
Let’s explore the last five of the ten most common AI risks and how to mitigate them. To read about the first five, click here.
Inadequate AI algorithm
The AI algorithms available for building AI models vary widely in scope, quality and complexity, and project teams often revise the algorithms they’ve acquired. Together, these facts create the risk of using an inadequate or inappropriate AI algorithm for the digital transformation problem.
Business teams can reduce their risk of using an inadequate AI algorithm by testing algorithms from multiple sources (see the comparison sketch after this list) for:
- Desired outputs using well-understood training data.
- Software defects.
- Computational efficiency.
- Ability to work with lower quality or lower volume of data.
- Tendency to drift when new training data is added.
- Explainability.
AI algorithms are a family of mathematical procedures that read the training data to create an AI model.
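As a concrete illustration, here is a minimal sketch of comparing candidate algorithms on well-understood training data with cross-validation, assuming scikit-learn and a generic tabular classification problem. The specific algorithms and the toy dataset are illustrative choices, not recommendations.

```python
# Minimal sketch: comparing candidate algorithms on well-understood
# training data with cross-validation, assuming scikit-learn. The
# algorithms and the toy dataset are illustrative choices.
import time

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # stand-in for your project data

candidates = {
    "logistic_regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

for name, model in candidates.items():
    # Cross-validation gives a quick read on output quality; wall-clock
    # time gives a rough sense of computational efficiency.
    start = time.perf_counter()
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    elapsed = time.perf_counter() - start
    print(f"{name}: mean accuracy {scores.mean():.3f} "
          f"(std {scores.std():.3f}), {elapsed:.1f}s")
```

Rerunning the same harness on a subsample of the training data is a cheap way to probe each algorithm’s tolerance for lower data volumes.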
Inadequate AI model
An inadequate AI model can result from many factors. The principal ones are an inadequate AI algorithm, problematic rules and insufficient training data.
Business teams can reduce their risk of using an inadequate AI model by repeatedly testing and refining the model (see the testing sketch after this list) using the following techniques:
- Fine-tuning model parameters.
- Functionality testing.
- Integration testing.
- Bias and fairness testing.
- Adversarial testing using malicious or inadvertently harmful input.
The AI model is the object saved after the AI algorithm has run over the supplied training data. It consists of the rules, numeric parameters and any other algorithm-specific data structures required to make predictions from real-world data in production use.
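To make the testing loop concrete, here is a minimal sketch combining a hold-out functionality test with a simple adversarial-style perturbation check, assuming scikit-learn. The dataset, model and the 0.9 quality bar are illustrative assumptions.

```python
# Minimal sketch: a hold-out functionality test plus a simple
# adversarial-style perturbation check, assuming scikit-learn. The
# dataset, model and the 0.9 quality bar are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)  # stand-in for your project data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Functionality test: held-out accuracy must meet an agreed quality bar.
baseline = accuracy_score(y_test, model.predict(X_test))
assert baseline > 0.9, f"model below agreed quality bar: {baseline:.3f}"

# Adversarial-style test: small perturbations of valid inputs should
# not collapse model quality.
rng = np.random.default_rng(0)
X_noisy = X_test * (1 + rng.normal(0.0, 0.05, size=X_test.shape))
perturbed = accuracy_score(y_test, model.predict(X_noisy))
print(f"baseline accuracy {baseline:.3f}, perturbed accuracy {perturbed:.3f}")
```

The same harness extends naturally to deliberately malformed or out-of-range inputs, which is where inadvertently harmful data tends to surface.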
Insufficient understanding of the data elements
Some data elements, or features, always influence AI model results more than others. When a project team does not sufficiently understand which ones those are, it runs the risk of:
- Inaccurate tuning of the AI algorithm.
- Disappointing or misleading model outputs.
Business teams can reduce their risk of misunderstanding data elements by:
- Testing how dramatically model results change in response to small changes in the values or distributions of specific data elements (see the sensitivity sketch below).
- Confirming whether similarly named data elements across data sources actually have the same meaning, to avoid misinterpreting them.
- Ensuring that the most influential data elements have the highest data quality.
Data elements correspond, for example, to the columns of a relational database table.
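One practical way to run the sensitivity test above is permutation importance, which measures how much model quality drops when a single column is shuffled. A minimal sketch, assuming scikit-learn; the dataset and model are illustrative.

```python
# Minimal sketch: permutation importance as a sensitivity test, showing
# how much held-out accuracy drops when a single data element (column)
# is shuffled, assuming scikit-learn. Dataset and model are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()  # stand-in for your project data
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffling an influential column degrades accuracy far more than
# shuffling an unimportant one.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
ranked = sorted(zip(data.feature_names, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: mean accuracy drop {importance:.4f}")
```

The highest-ranked columns are the ones whose data quality most deserves the team’s attention.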
Inadequate team competencies
Given the high demand for AI and data science talent, it’s common for digital transformation project teams to lack some of the technical competencies they’d like. Inadequate team competencies create the risk that the quality of AI model results will be insufficient and that no one will recognize the problem.
Business teams can reduce the risk of inadequate team competencies by:
- Proactively training team members to boost competencies.
- Assigning enough subject-matter expertise for the various data sources to the project team.
- Engaging external consultants to fill some gaps.
The required project team roles and related competencies are likely to include:
- Business analysts.
- Data scientists.
- Subject-matter experts.
- Machine learning engineers.
- Data engineers and analysts.
- AI architects.
- AI ethicists.
- Software developers.
Insufficient attention to responsible AI
In their enthusiasm for digital transformation project work, teams often neglect responsible AI even when they are not acting unethically. Responsible AI is about ethics, and ethics is an awkward, abstract topic for many project teams.
Business teams can reduce the risk of insufficient attention to responsible AI by:
- Scoping fairness and bias assessment work according to the sensitivity of the data being used.
- Investigating the provenance of external data sources.
- Evaluating the compliance and bias of external data.
- Engaging with AI ethicists during design and testing.
- Conducting a fairness and bias assessment of AI model results (see the sketch below).
- Designing a process to monitor AI model results regularly for compliance and bias once the AI application is in routine production use.
If you come to believe that the team is consciously acting in an unethical way, it’s time to fire people.
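As a starting point for a fairness and bias assessment, here is a minimal sketch of a demographic-parity check computed directly with NumPy. The predictions, group labels and the 0.1 threshold are illustrative assumptions; the acceptable gap is ultimately a policy decision.

```python
# Minimal sketch: a demographic-parity check on model results, computed
# directly with NumPy. The predictions, group labels and the 0.1
# threshold below are illustrative assumptions, not a standard.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 1, 1, 0, 0, 1, 1])  # model decisions
group = np.array(["a", "a", "a", "b", "b",
                  "b", "a", "b", "a", "b"])         # sensitive attribute

# Selection rate per group: the share of members receiving the
# favourable outcome (prediction == 1).
rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
for g, rate in rates.items():
    print(f"group {g}: selection rate {rate:.2f}")

# Demographic parity gap: the spread between the best- and
# worst-treated groups.
gap = max(rates.values()) - min(rates.values())
print(f"demographic parity gap: {gap:.2f}")

# The acceptable gap is a policy decision; 0.1 here is illustrative.
if gap > 0.1:
    print("flag model results for fairness review")
```

The same check, rerun on a schedule against production predictions, doubles as the routine monitoring process described above.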
The OECD principles for responsible stewardship of trustworthy AI are:
- Inclusive growth, sustainable development and well-being.
- Human-centered values and fairness.
- Transparency and explainability.
- Robustness, security and safety.
- Accountability.
When engineers proactively identify and mitigate AI risks in their digital transformation projects, they are far more likely to deliver the planned business benefits.