AI Needs to Learn How to Play Fair

The NSF is granting millions of dollars to promote fairness, equity and ethics in AI. These research projects reveal why.

(Source: NSF.)

In 2022, the National Science Foundation (NSF) awarded $9.5 million in grants to 13 research teams representing a blend of computational and social scientists. The goal? To promote fairness, equity and ethics in artificial intelligence (AI).

The NSF Program on Fairness in Artificial Intelligence, which began in 2019, provides annual grants between $600,000 and $1 million to researchers studying fairness in AI. Amazon co-funds the program but does not participate in proposal review or award selection.

“Many projects have computational scientists working with city or regional planners and service providers, healthcare workers, agriculture experts or the justice system,” Todd Leen, co-director of the program, told engineering.com. “Goals include improving equity in outcomes so previously marginalized communities are better served, and ensuring that the AI systems incorporate human values.”

Inclusive Speech Recognition—Beyond the “Typical” Voice

The program completed its third round of awards earlier in 2022, meaning a number of researchers are in the middle of projects. These include Mark Hasegawa-Johnson, professor of electrical and computer engineering at the University of Illinois at Urbana-Champaign. He sees the program as creating a new standard for AI research.

Hasegawa-Johnson’s team received a $500,000 grant in February 2022 for a project titled “A New Paradigm for the Evaluation and Training of Inclusive Automatic Speech Recognition.” The team’s goal is to improve inclusive speech recognition, which applies to smart speakers, automatic transcription and other devices people talk to.

“Speech recognition technologies are mostly trained using audiobooks. They perform very well for people who sound like audiobook narrators—so people who are well-educated, native speakers of English, below the age of 70 or so, and have no serious disability,” said Hasegawa-Johnson. “It’s hard to make speech recognition that works well for everybody else.”

Hasegawa-Johnson added that until now, AI speech recognition research has focused on eliminating errors for the so-called typical user. “The word ‘typical’ is sometimes defined in ways that are historically biased,” he said. “This program is trying to challenge us to find ways to eliminate errors for every user.”
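To make “eliminate errors for every user” concrete, here is a minimal illustrative sketch—not the team’s actual methodology—that reports word error rate (WER) separately for each speaker group rather than as a single aggregate, so that poor performance for any one group is visible. The group labels and transcripts are hypothetical.

```python
# Illustrative only: per-group word error rate (WER), so that errors for
# less-represented speaker groups are visible rather than averaged away.
from collections import defaultdict

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via edit distance between word sequences."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical evaluation records: (speaker group, reference, ASR output).
results = [
    ("audiobook-like", "turn on the kitchen lights", "turn on the kitchen lights"),
    ("audiobook-like", "set a timer for ten minutes", "set a timer for ten minutes"),
    ("non-native",     "turn on the kitchen lights", "turn on the chicken lights"),
    ("older adult",    "set a timer for ten minutes", "set the time for ten minutes"),
]

by_group = defaultdict(list)
for group, ref, hyp in results:
    by_group[group].append(wer(ref, hyp))

for group, scores in by_group.items():
    print(f"{group:15s} mean WER = {sum(scores) / len(scores):.2f}")
```

Reporting the metric per group, as above, is one simple way an evaluation can surface errors that an overall average would hide.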

Fairness-Aware Deep Learning

Chen Feng, associate professor of computer science at the University of Texas at Dallas, leads a team that received a $392,993 grant in September 2022 for a project titled “A Novel Paradigm for Fairness-Aware Deep Learning Models on Data Streams.”

Feng’s team is investigating how to design, implement and evaluate deep learning models that are “fairness-aware,” meaning they can quickly and appropriately adjust to differences between training and test data arising from environmental, geographical, economic and cultural contexts.

“For example, the risk levels in loan applications are based on factors associated with applicants, such as employment status and income,” said Feng, noting that these factors have changed quickly for many people due to the pandemic. “Our algorithms and software can help adapt existing loan recommendation systems quickly to distribution shifts in the changed world.”
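As a rough illustration of the kind of adaptation Feng describes—not the team’s actual algorithm—the sketch below monitors one incoming applicant feature, such as income, and flags when its distribution has drifted far from the training data, a common trigger for re-weighting or retraining a deployed model. The feature, the synthetic numbers and the alert threshold are all assumptions for the example.

```python
# Illustrative only: flag distribution shift in a streaming feature using the
# population stability index (PSI) between training data and a recent window.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample and a recent sample of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid log(0) and division by zero in sparsely populated bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
training_income = rng.normal(60_000, 15_000, size=5_000)   # pre-shift baseline
recent_income = rng.normal(48_000, 20_000, size=1_000)     # shifted stream

psi = population_stability_index(training_income, recent_income)
# Hypothetical rule of thumb: PSI above 0.25 treated as a major shift.
if psi > 0.25:
    print(f"PSI = {psi:.2f}: significant shift, consider adapting the model")
else:
    print(f"PSI = {psi:.2f}: distributions look stable")
```

A detector like this only signals that the world has changed; the research challenge Feng describes is making the model itself adapt quickly and fairly once that signal fires.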

Fairness Versus Utility

Vishnu Boddeti, assistant professor of computer science and engineering at Michigan State University, leads a team that received a grant of $331,698 for the project, “Fair Representation Learning: Fundamental Trade-Offs and Algorithms.”

This team aims to study trade-offs between the utility and fairness of different data representations and is working to develop new representations and corresponding algorithms to improve those trade-offs.

“The project will provide performance limits based on the developed theory and evidence of efficacy to obtain fair machine learning systems,” explained Boddeti, who said the research will lead to more informed development and deployment of fair AI systems.

The team’s preliminary results show that in numerous instances, it is not necessary to sacrifice utility to be fair.

“Our analysis identified scenarios where there is a trade-off between utility and fairness. We envision that this kind of information is useful for policymakers and practitioners in understanding the inherent trade-offs in making AI systems fair for a given task,” Boddeti said.
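For readers unfamiliar with how such trade-offs are quantified in practice, a minimal sketch—not the team’s representation-learning approach—is to score each candidate model on both a utility metric (here, accuracy) and a group-fairness metric (here, the demographic parity gap), then compare models along the two axes. The data, group labels and both “models” below are synthetic placeholders.

```python
# Illustrative only: score predictions on utility (accuracy) and a simple
# group-fairness metric (demographic parity gap) to expose the trade-off.
import numpy as np

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    rate_a = np.mean(y_pred[group == 0])
    rate_b = np.mean(y_pred[group == 1])
    return float(abs(rate_a - rate_b))

# Synthetic labels, group membership and two hypothetical models' predictions.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=1_000)
group = rng.integers(0, 2, size=1_000)
pred_model_a = y_true.copy()                      # accurate but group-dependent
pred_model_a[group == 1] &= rng.integers(0, 2, size=1_000)[group == 1]
pred_model_b = rng.integers(0, 2, size=1_000)     # more even, but less accurate

for name, pred in [("model A", pred_model_a), ("model B", pred_model_b)]:
    print(f"{name}: accuracy={accuracy(y_true, pred):.2f}, "
          f"parity gap={demographic_parity_gap(pred, group):.2f}")
```

Plotting many models on these two axes is one simple way practitioners can see, for a given task, whether improving fairness actually costs utility at all—the question Boddeti’s theoretical work addresses more rigorously.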

Success in Spite of Covid Disruptions

Though the pandemic disrupted the NSF Program on Fairness in Artificial Intelligence as it did so much else, the NSF reports that it is seeing more live conferences and workshops this year, including some with a virtual option.

Despite the setback, the NSF Program on Fairness in AI has already done much to inform the dialogue on making AI fairer for all, and the NSF is actively exploring new ways to advance the research.

“NSF has pursued mechanisms for public-private partnerships to jointly support fundamental research. These partnerships comprise investments in scientific areas of mutual interest to government, industry, and academia, bringing real-world challenges to university researchers,” said Leen.

A Broader Focus on AI

The NSF Program on Fairness in AI is one part of the NSF’s broader AI strategy. Much of the Foundation’s work in this area stems from the passage of the National Artificial Intelligence Initiative Act of 2020, which requires the NSF to work with the White House Office of Science and Technology Policy to form a National AI Research Resource (NAIRR) Task Force. The task force is currently investigating the feasibility of establishing a NAIRR: a shared computing and data infrastructure that would give AI researchers and students across scientific fields access to compute resources and high-quality data.

Establishing a NAIRR would fuel AI research and development, especially for communities that have been traditionally underserved. The task force is creating a roadmap for how the NAIRR should be established and sustained. On December 7, 2022, the task force will vote on the implementation plan and roadmap. It will then discuss the next steps after submitting its report to U.S. President Joe Biden and Congress.

Another component of the NSF’s work in AI is the expansion of the National Artificial Intelligence Research Institutes program, which began in 2020. The program’s goal is to establish AI institutes across the U.S., each funded with up to $4 million. So far, the program has set up 18 AI institutes, including the NSF AI Institute for Edge Computing Leveraging Next Generation Networks, led by Duke University, and the NSF AI Institute for Foundations of Machine Learning, led by the University of Texas at Austin. The AI institutes program is a joint effort among the NSF and multiple federal agencies, including the U.S. Department of Agriculture, the U.S. Department of Education, and the U.S. Department of Homeland Security.