Pattern Recognition AI Algorithm Fights Online Bullying

Fast judgment and a 1-in-5,000 false-positive rate have produced a 92 percent reform rate among online bullies.

Riot Games, the developer behind the popular massively multiplayer online (MMO) video game League of Legends (LoL or League), may have engineered the answer to online abuse and bullying with the help of good ol’ AI algorithms.

A screenshot of League of Legends gameplay.

Games like LoL are competitive by nature, occasionally hosting championship tournaments with millions of dollars in prize money, and pit two teams of players against each other in ranked battles to the virtual death.

In less formal settings, players at home sometimes lose their cool and vent at each other over in-game audio. In the most extreme cases, this can escalate into slur-slinging and hate speech.

In response to these issues, Riot Games took to policing its LoL community through the Tribunal, introduced in 2011.

This in-game system assigned participating players case files to review, which included chat logs, game statistics and other report details. The judging player would then decide whether the offending player should be “punished” or “pardoned.”
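Riot never published the Tribunal’s internals, so the following is only a hypothetical sketch of what a case file and its crowd-sourced verdict might look like; every field and name below is invented for illustration.

    # Hypothetical model of a Tribunal case file; these field names are
    # invented, not Riot's actual data model.
    from dataclasses import dataclass, field

    @dataclass
    class CaseFile:
        reported_player: str
        chat_log: list[str]           # excerpts shown to reviewers
        game_stats: dict[str, float]  # kills, deaths, and so on
        report_details: str
        verdicts: list[str] = field(default_factory=list)  # "punish"/"pardon"

        def majority_verdict(self) -> str:
            """Resolve the case by a simple majority of reviewer votes."""
            punish = self.verdicts.count("punish")
            return "punish" if punish > len(self.verdicts) - punish else "pardon"

    case = CaseFile(
        reported_player="SummonerX",
        chat_log=["uninstall, you feeder"],
        game_stats={"kills": 1.0, "deaths": 9.0},
        report_details="Verbal abuse reported by three teammates",
    )
    case.verdicts += ["punish", "punish", "pardon"]
    print(case.majority_verdict())  # -> "punish"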

Jeffrey Lin, lead designer of social systems at Riot Games, stated 98 percent of the community’s verdicts were in line with those of Riot Games administrators, according to MIT Technology Review.

Lin and his team picked up on patterns in the offensive language and, as a test, applied an unnamed pattern recognition AI algorithm to the chat logs. “It turned out to be extremely successful in segmenting negative and positive language across the 15 official languages that League supports,” Lin said.
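Riot hasn’t named the algorithm, so any reconstruction is guesswork. As a rough illustration only, the sketch below trains a bag-of-words naive Bayes classifier, one common approach to segmenting negative from positive text, on a handful of invented chat lines; a real system would learn from millions of Tribunal-labeled messages.

    # Minimal sketch of chat-log segmentation, assuming a bag-of-words
    # naive Bayes model; Riot's actual algorithm is undisclosed, and
    # these training messages are invented toys.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    train_msgs = [
        "gg wp everyone, nice game",          # positive
        "great ult, that won us the fight",   # positive
        "uninstall the game, you are trash",  # negative
        "worst player i have ever seen",      # negative
    ]
    train_labels = ["positive", "positive", "negative", "negative"]

    # Convert each message to token counts, then fit the classifier.
    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(train_msgs)
    clf = MultinomialNB().fit(X, train_labels)

    # Score an unseen chat line.
    new_msg = ["you are trash, just uninstall"]
    print(clf.predict(vectorizer.transform(new_msg))[0])  # -> "negative"

Covering 15 languages would presumably mean separate per-language models or multilingual features; Lin’s comments don’t say which.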

With the Tribunal “down for maintenance” for over a year, the new AI system has apparently worked wonders. Several million cases of suspected abusive behavior have been reviewed, with 92 percent of convicted players not reoffending.

This is a reform rate that real-world law enforcement agencies can only dream of. So how did Riot manage it?

Punishments were doled out as restrictions on in-game content. Players could have their chat privileges limited, be locked out of competitive play or even have their accounts disabled.

But it wasn’t the punishment itself that resulted in the system’s success.

“When we added better feedback to the punishments and included evidence such as chat logs for the punishment, reform rates jumped from 50 percent to 65 percent,” Lin explained. “But when the machine learning system began delivering much faster feedback with the evidence, reform rates spiked to an all-time high of 92 percent.”

Cannot Compute Sarcasm… Or Microphone Audio

The new system’s success is all the more impressive considering its flaws.

Sarcasm is common in human language, especially in competitive settings. Detecting it is the greatest challenge Riot Games’ AI faces in determining who really is or isn’t trying to offend.

To compensate for the difficulty, Lin and the Riot team have implemented a collection of subsystems to analyze and double-check key information. “Because multiple systems work in conjunction to deliver consequences to players, we’re currently seeing a healthy 1 in 5,000 false-positive rate,” Lin said.
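Lin doesn’t explain how the subsystems are combined, but a back-of-envelope calculation shows why requiring several independent checks to agree drives false positives down; the per-check error rates below are invented for illustration.

    # Back-of-envelope sketch: IF each subsystem were independent and a
    # punishment required all of them to agree, false-positive rates
    # would multiply together. The 6% per-check rates are invented.
    def conjunction_fp_rate(per_check_rates):
        """False-positive rate when every check must fire together."""
        rate = 1.0
        for p in per_check_rates:
            rate *= p
        return rate

    # Three hypothetical checks, each flagging an innocent player 6% of
    # the time, combine to roughly 1 in 4,600, the same order of
    # magnitude as the 1-in-5,000 figure Lin cites.
    combined = conjunction_fp_rate([0.06, 0.06, 0.06])
    print(f"combined FP rate: {combined:.6f} (~1 in {1 / combined:,.0f})")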

A further issue is whether the new system can record and analyze speech over a microphone.

The previous Tribunal system was incapable of enforcing its rules against players using headsets, as only text chat logs could be moderated. If the new system can only operate within the same medium, then only a portion of the community can be defended from itself.

Verbal abuse will remain a common issue, one that players can only avoid by turning off their microphones and related audio, confining themselves to communicating through text. It’s not the most convenient way to interact, but for some players it’s good enough.

“I [would] rather not listen to people whine and ***** about each other,” said a player under the username “ItsABadBromance” on a LoL user forum discussing the use of microphones. “It’s one thing to deal with the text, but to listen to their voices is a whole nother world of agony.”

From Video Games to a Broader Online Community

Although this program operates within a competitive game, similar software could have a place in monitoring forums and similar web portals to curb online bullying and abuse.

“One of the crucial insights from the research is that toxic behavior doesn’t necessarily come from terrible people; it comes from regular people having a bad day,” said Justin Reich, research scientist from Harvard’s Berkman Center. “That means that our strategies to address toxic behavior online can’t be targeted just at hardened trolls; they need to account for our collective human tendency to allow the worst of ourselves to emerge under the anonymity of the Internet.”

Lin believes the team’s text and pattern recognition techniques can be applied beyond video games, and he hopes others inside and outside the industry will use them for the greater good.

“The challenges we’re facing in League of Legends can be seen on any online game, platform, community or forum, which is why we believe we’re at a pivotal point in the timeline of online communities and societies,” Lin said. “Because of this, we’ve been very open in sharing our data and best practices with the wider industry and hope that other studios and companies take a look at these results and realize that online toxicity isn’t an impossible problem after all.”