With all its talk of black-on-white war, YouTube’s “hate speech”-filtering AI can’t tell the difference between chess players and violent racists. Perhaps leaving robots in charge of the English language isn’t such a good idea.
Croatian chess player Antonio Radic, known to his million subscribers as ‘Agadmator,’ runs the world’s most popular chess channel on YouTube. Last summer he found his account suspended due to its “harmful and dangerous” content. Radic, who was in the middle of a show with Grandmaster Hikaru Nakamura at the time, was puzzled. He received no explanation for the ban, which was reversed on appeal, but speculated that YouTube’s censorship algorithm may have heard him say something like “black goes to B6 instead of C6, white will always be better.”
“If that’s the case, I’m sure all of my 1,800 videos will be taken down, as it’s black against white to the death in every video,” he told the Sun at the time.
Radic was probably correct. Researchers at Carnegie Mellon University ran more than 600,000 comments on 8,000 chess videos through a hate-speech detection algorithm, which flagged a sizable number of them as racist. But more than 80 percent of the flagged comments were false positives, tripped up by innocuous chess terms like “black,” “white,” “attack” and “threat.”
Computer scientist Ashiqur R. KhudaBukhsh presented the results of his study at an artificial intelligence conference last month, and gave some examples of comments misconstrued as hate speech. “That was one of the most beautiful attacking sequences I have ever seen, black was always on the back foot,” one read.
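The failure mode can be sketched with a toy keyword-based flagger. This is a hypothetical illustration only; YouTube’s actual moderation models are proprietary and certainly more sophisticated, but the study suggests they stumble on context in much the same way:

```python
# Toy illustration (NOT YouTube's actual system): a naive flagger that
# pairs racial color terms with aggressive terms, ignoring context.

COLOR_TERMS = {"black", "white"}                      # hypothetical trigger lists
AGGRESSION_TERMS = {"attack", "attacking", "threat", "kill"}

def naive_flag(comment: str) -> bool:
    """Flag a comment when a color term co-occurs with an aggression term."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return bool(words & COLOR_TERMS) and bool(words & AGGRESSION_TERMS)

chess_comment = ("That was one of the most beautiful attacking sequences "
                 "I have ever seen, black was always on the back foot")
print(naive_flag(chess_comment))  # True -- a false positive on innocent chess talk
```

Without any model of what “black” refers to, perfectly ordinary chess commentary trips the same wires as genuine abuse, which is consistent with the 80 percent false-positive rate the researchers observed.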
YouTube relies on both human and AI moderators to police content. But with the platform cracking down on vaccine “misinformation,” on content questioning Joe Biden’s election win, and on all manner of “conspiracy theories,” there is far too much material to review by hand, so the robots do most of the heavy lifting, with humans correcting their errors.
Yet these robots were designed by humans in the first place. Somebody had to tweak them to the point where any mention of the words “black” and “white” in an adversarial context would be instantly interpreted as racist. How these algorithms are written represents how Silicon Valley sees the world: as a bubbling cauldron of potential hate crimes that needs to be straightened out by a benevolent AI.
If even chess videos are falling under the ban hammer, then how many other suspensions and takedowns are in error? Radic has a million subscribers, so his run-in with the content police made the news. But I’d wager that many smaller accounts get punished for “harmful speech” and are never heard from again. Such collateral damage is inevitable when we allow AI to enforce the boundaries of acceptable speech.
So what’s a chess enthusiast to do? Perhaps future mistakes could be avoided if players adopted the language of the woke. Black pieces could become “Pieces of Color,” and white knights would obviously have to be replaced, lest the game become associated with the Ku Klux Klan. Better yet, “white” and “black” could be replaced with “latte” and “avocado,” a combination sure to offend nobody and dodge the algorithms.
While the issue here illustrates the pitfalls of using AI to interpret a concept as ill-defined as “hate speech,” there are actual lunatics out there who think the game of chess is bigoted to its core. As the western world turned over boulders in search of racism last summer following the death of George Floyd, ABC Australia, the country’s national broadcaster, reached out to a member of the Australian Chess Federation and asked him whether the game of kings is racist because “whites always go first.”
Developed in India and brought to white Europe via Persia and then the Arab world, chess was played by “People of Color” long before whites ever got involved. But the fact that this history was lost on ABC is an example of the same narrow-minded worldview informing YouTube’s censors. The racial tensions currently plaguing the US and ginned up by its media are not the driving force behind the history of the entire world, and the colors “white” and “black” aren’t always infused with racial subtext.
We should definitely laugh at anyone attempting to bring racial guilt into every one of our hobbies, and we probably shouldn’t let Big Tech’s robot censors decide what we can and can’t say online.