The most common word in online hate speech is “Jew.”
That doesn’t surprise Brittan Heller, director of the Anti-Defamation League’s Silicon Valley Center for Technology and Society. “I find that anti-Semitism, or anti-Semitic language, is shorthand for lots of different kinds of bias and discrimination,” she said.
This information about hate speech comes from a new ADL report on the organization’s ongoing effort to use artificial intelligence and a technique known as “machine learning” to help stop the spread of online hate and anti-Semitism.
The idea is that if researchers can “teach” a program to find hate speech, that will help online platforms do something effective about it.
“The speed and the scale of the problem need new approaches,” Heller pointed out.
Working with UC Berkeley’s D-Lab, which uses data to study social sciences, the ADL has been analyzing online hate speech since last year in order to find a way to combat it.
It’s early in the process, but researchers have already found a number of interesting patterns in their analysis of 9,000 comments from the internet message board Reddit.
One is that the five words most strongly associated with online hate were “Jew,” “white,” “hate,” “women” and “black.” The researchers also found that hateful comments tended to be longer and to use more words in all caps.
To get these results, a team of researchers tagged comments as hateful or not, based on a third-party definition of what constitutes hate speech, then fed the labeled comments to the learning algorithm. In machine learning, a program learns to discern patterns from the examples it is given, rather than following rules it has been explicitly “told.”
For the ADL project, the program “learned” not only the words people use in hateful comments, but how they use them. “There’s indications in grammar, syntax and the sentence structure,” Heller said.
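The workflow the report describes, with human-labeled comments and learned surface signals such as length and all-caps usage, can be sketched in miniature. Everything below is invented for illustration: the comments, labels, features and scoring are a toy stand-in, not the ADL/D-Lab model, which was trained on 9,000 labeled Reddit comments with far richer linguistic features.

```python
from collections import Counter

# Hypothetical labeled training comments (True = tagged as hateful by a
# human reviewer). Real labels came from 9,000 Reddit comments.
training = [
    ("I HATE these people so much", True),
    ("they should all GO AWAY forever and never come back", True),
    ("great article, thanks for sharing", False),
    ("interesting point about the data", False),
]

def features(text):
    """Surface features like those the report mentions: word counts,
    comment length, and the share of all-caps words."""
    words = text.split()
    caps = sum(1 for w in words if w.isupper() and len(w) > 1)
    return {
        "length": len(words),
        "caps_ratio": caps / max(len(words), 1),
        "words": Counter(w.lower() for w in words),
    }

def train(examples):
    """Count how often each word appears in hateful vs. non-hateful
    comments -- a bare-bones stand-in for 'learning from input'."""
    counts = {True: Counter(), False: Counter()}
    for text, label in examples:
        counts[label].update(features(text)["words"])
    return counts

def score(counts, text):
    """Score a new comment; a positive score leans toward the hateful class."""
    f = features(text)
    s = 0.0
    for w, n in f["words"].items():
        s += n * (counts[True][w] - counts[False][w])
    # All-caps usage was one weak signal of hateful comments in the report.
    s += 2.0 * f["caps_ratio"]
    return s

model = train(training)
print(score(model, "I HATE this"))             # leans hateful (positive)
print(score(model, "thanks for the article"))  # leans non-hateful (negative)
```

A production system would replace the hand-built counts with a trained statistical model, but the division of labor is the same: humans supply the labels, the program infers which patterns distinguish the classes.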
The report also makes clear that the ADL algorithms aren’t meant to definitively “identify” hate speech, but rather to flag comments that a person in a targeted group could interpret as hateful.
“It seems like an academic distinction, but it’s not,” Heller said. “Tech companies don’t want to be the arbiter of what hate speech is.”
But, she said, tech companies do want to keep their customers happy. So showing them what makes users upset is a good idea.
The next steps for the project include improving accuracy and refining the machine-learning process to identify not just “hateful” or “not hateful” comments, but categories of hateful comments. Even at this early stage, though, the results can help the ADL and tech companies better understand how online haters think, and how companies can deal with what Heller calls “broken attempts to communicate.”
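Moving from a binary label to categories is a standard multi-class extension. This toy sketch shows the idea; the example comments and category names are invented here, since the report does not say which categories the project will use.

```python
from collections import Counter

# Invented multi-category training data for illustration only.
training = [
    ("they control all the banks", "antisemitic"),
    ("women do not belong in tech", "misogynist"),
    ("loved this thoughtful article", "not_hateful"),
    ("great data, thanks for posting", "not_hateful"),
]

def train(examples):
    """Keep per-category word counts: the multi-class analogue of a
    single hateful / not-hateful tally."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(counts, text):
    """Return the category whose vocabulary best overlaps the comment."""
    words = Counter(text.lower().split())

    def overlap(label):
        return sum(min(n, counts[label][w]) for w, n in words.items())

    return max(counts, key=overlap)

model = train(training)
print(classify(model, "women in tech"))
print(classify(model, "thanks for the article"))
```

The binary classifier answers “flag or not”; the multi-class version additionally says what kind of comment was flagged, which is what would let platforms respond differently to different forms of hate.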
“The results of the study give me a lot of hope,” Heller said.