Note: offensive language in this article has been censored by Clutch.
Through streaming and live voice chat, people playing video games can now socialize with each other as if they were in the same room. That’s created brand new communities of gamers that could never have existed just a decade ago. But this explosion of communication has a dark side.
Toxic behavior in gaming communities can turn a fun hobby into a destructive one. Toxicity can range from harsh language to bullying to violent threats. And some game-related toxicity can even spill over into the real world.
Most gamers know that not every gaming community is toxic. But we were curious whether it was possible to measure toxicity and see which communities were more toxic than others. So when we came across IBM’s AI-based Toxic Comment Classifier, we realized we had found a way to potentially understand this issue better.
IBM’s technology works like this: Humans gave thousands of Wikipedia comments a toxicity rating, including rating whether a comment was toxic, obscene, threatening, insulting, or targeted towards identity hate. Those comments, along with the humans’ ratings, were then fed into IBM’s machine learning algorithm.
By analyzing the human responses, the algorithm created an Artificial Intelligence-based model that can identify toxicity on its own. That means the AI can “read” a comment it’s never seen before, and determine how toxic the language is based on its learned understanding.
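To make the training-and-scoring loop concrete, here is a toy sketch of the same idea: comments carry independent binary flags for each toxicity category, a "model" learns from them, and then scores a comment it has never seen. Everything below — the example comments, the labels, the crude word-frequency scoring — is a hypothetical simplification for illustration, not IBM’s actual model or data:

```python
from collections import Counter

# The five categories used in the study.
LABELS = ["toxic", "obscene", "threat", "insult", "identity_hate"]

# Hypothetical training examples in the same shape as the labeled
# Wikipedia comments: text plus a set of category flags.
TRAINING = [
    ("you are a terrible idiot", {"toxic", "insult"}),
    ("i will find you and hurt you", {"toxic", "threat"}),
    ("what a lovely discussion everyone", set()),
    ("this edit is complete garbage", {"toxic"}),
]

def train(examples):
    """Count which words appear under each label -- a crude stand-in
    for the real machine learning step."""
    per_label = {label: Counter() for label in LABELS}
    for text, labels in examples:
        for label in labels:
            per_label[label].update(text.split())
    return per_label

def score(per_label, text):
    """Score an unseen comment 0-1 per label: the fraction of its
    words previously seen under that label."""
    words = text.split()
    scores = {}
    for label in LABELS:
        if not words or not per_label[label]:
            scores[label] = 0.0
        else:
            hits = sum(1 for w in words if w in per_label[label])
            scores[label] = hits / len(words)
    return scores

model = train(TRAINING)
print(score(model, "you idiot"))     # high "insult" score
print(score(model, "hello world"))   # zero across the board
```

The real classifier replaces the word counts with a learned statistical model, but the interface is the same: raw text in, one score per toxicity category out.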
When we discovered this technology, we knew exactly how to put it to good use. We used the classifier to analyze the toxicity of 1.3 million Reddit comments from the 100 most popular gaming subreddits. Here’s how each subreddit stacked up in our study:
True Toxicity in Gaming Communities
The AI Toxic Comment Classifier is a powerful tool with one distinct flaw: it’s not human. Human language is full of nuance and innuendo, and machines cannot currently understand things like sarcasm, or the difference between in-game speak and actual hate speech.
In fact, in an earlier survey we ran of over 2,500 Clutch users, nearly half (46.3%) said they expect trash talk and see it as all in good fun, 26.8% saw it as simply part of the game, and only 22.8% thought it was bad.
This reveals the crux of the discrepancy. True toxicity lies more in the meaning, intent, and reception of the negativity than it does in the verbiage used.
Consider the following comment from r/PaydayTheHeist: “The community is on rise…LIKE A F***ING SPUTNIK!!!”. The AI classified this comment with a Toxic score of 99, almost the maximum toxicity. However, players of Payday know that “Like a F***ing Sputnik” is actually the name of an achievement in the game. This commenter is celebrating, not being truly toxic.
Compare this to another comment from the same subreddit: “Man, this sub is just full of babies, isn’t it. Advocating for kicking people who use skins. F**k everyone, I’m unsubscribing, you f**ks are disgusting.” This comment also received a toxicity score of 99, which it clearly deserved. The two are very close in score, but wildly different in intent.
Identifying Identity Hate
The Identity Hate category focuses more on the use of racial slurs or pushing negativity on a person based on their race, ethnicity, gender, or sexual orientation.
The subreddit for the game Binding of Isaac was found to be the worst offender here by a wide margin. This is likely due to the high incidence of the word “gay” in that subreddit’s comment sections. The top post of all time there is titled “in all the spam noone will see that im gay”.
We believe this is actually a reference to when people post things in a fast-moving Twitch chat hoping that their comment gets swallowed into the void of endless comments.
Unfortunately for one user, their post was picked up and rapidly upvoted as a meme, with many similar comments using the word “gay” as a joke. This is not to say that r/BindingOfIsaac is without its more ill-intentioned hate speech, but it does add to our understanding of why the AI flagged it so harshly for Identity Hate.
Name-Calling and Insults in Gaming Communities
The Insult category is focused more on the direct use of name calling. Where the insult was directed, however, was more difficult for the AI to determine. One top insult was “Your a f***ing c**t if you do this.” with a score of 99. Pretty clearly insulting. At the same time, “KARMA… is a bitch” was scored at 97.
Both were considered strongly insulting by the AI, but only one is a legitimate direct insult. The same confusion applies to game-related slang, such as discussing the difficulty of a certain heist: “That safe was a real bitch to crack” would also be considered insulting by the AI, but would offend no one in reality.
Objectively Obscene Gaming Communities
Obscenity is another category the AI had no trouble identifying. Commenters who used vulgarity, wrote in short, abrupt sentences, and typed in all caps tended to be scored as more obscene.
One outlier here was a large number of comments consisting of just the letter “F”. We think the AI could be interpreting that comment as an abbreviated 4-letter word beginning with F. But most gamers know that “Press F to pay respects” is an old Call of Duty meme that’s made its way into the larger gaming world. The AI would have no way of interpreting that comment accurately.
Are You Actually Threatening Me?
In many games, killing enemies and causing violence is literally the point of the game. That makes it difficult for a computer to discriminate between a real violent threat and one that’s game-related.
Let’s take r/mountandblade, for example, the subreddit identified as most threatening by the AI. A common in-joke among Mount and Blade players is using the phrase “I will drink from your skull”.
While at face value this seems threatening, it’s actually just a line of dialogue from one of the game’s NPCs (non-player characters).
Given how common this type of language is in gaming, and how unlikely gamers are to interpret it as a genuine threat, we concluded that using AI to measure threatening comments in gaming communities isn’t reliable.
Through our research, we’re able to reach two conclusions. First, toxicity absolutely exists across gaming communities, and it shows itself in many ways. Players of even the most innocuous games may still be exposed to forms of toxicity such as Obscenity, Insults, Identity Hate, and more.
Second, it became obvious that much of what those outside the world of gaming might consider toxicity is actually a form of community building.
Violent language, dark humor, and playful banter appear to be par for the course in many of these communities. While the literal words on the page may surprise or even shock, a large portion of these communities are simply speaking a language all their own. Truly understanding the depth of toxicity within these circles will take much more research and more powerful tools in the future.
Methodology
To create this study, Clutch scraped all comments from the top 50 posts of all time in each of the top 100 gaming subreddits, ranked by active users.
We performed our scrape in January 2020. The scrape resulted in a database of approximately 1.3 million Reddit comments, categorized by subreddit.
All comments were processed individually with IBM’s AI-based Toxic Comment Classifier.
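The pipeline above — score every comment individually, then aggregate per subreddit — can be sketched as follows. The `classify` stub and the sample comments are hypothetical stand-ins (the real study called IBM’s classifier on each comment); only the aggregation logic mirrors our approach:

```python
from statistics import mean

def classify(comment):
    """Hypothetical stand-in for IBM's Toxic Comment Classifier:
    returns 100 for comments containing a blocklisted word, else 0."""
    blocklist = {"jerk", "trash"}
    return 100 if set(comment.lower().split()) & blocklist else 0

def rank_subreddits(comments_by_subreddit, scorer=classify):
    """Score each comment individually, average the scores per
    subreddit, and return subreddits sorted most-toxic first."""
    averages = {
        sub: mean(scorer(c) for c in comments)
        for sub, comments in comments_by_subreddit.items()
    }
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)

# Invented sample data for illustration.
sample = {
    "r/example_calm": ["nice run", "good game everyone"],
    "r/example_salty": ["what a jerk", "this map is trash", "gg"],
}
print(rank_subreddits(sample))
```

In the full study, the same aggregation was run once per toxicity category (Toxic, Obscene, Threat, Insult, Identity Hate), yielding a per-category ranking of all 100 subreddits.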
Fair Use Statement
If you’d like to quote this study or use any of the graphics included, feel free. We only ask that you play nice and link back to this page to give the authors proper credit.