This Alphabet-made Chrome extension will filter out toxic comments online
Aptly called Tune, a new experimental open-source Chrome extension from Alphabet incubator Jigsaw aims to make your online browsing experience a bit more pleasant and free of toxic comments. How does it do that? With machine learning. As the name suggests, it lets you tune how much polite or aggressive commentary you see online. It moderates comments on YouTube, Reddit, Facebook, Twitter, and Disqus. "Zen mode" turns off all comments completely, while "volume levels" let you choose from "quiet" to "blaring," showing you different amounts of toxicity, which includes attacks, insults, profanity, and the like.
The extension uses Perspective, an API created by Jigsaw and Google's Counter Abuse Technology team back in 2017. News organizations like The New York Times and The Guardian have used it to experiment with online moderation. You can see in the GIF above how Perspective sorts comments by toxicity. Since this is an experiment, Tune can be expected to mislabel some comments as toxic, or miss others entirely. Jigsaw's goal is to show how machine learning can be used to improve discussions online. Should machines get involved in the messy yet nuanced world of comment moderation? We can't say, and the potential for it to do more harm than good is there. But perhaps it's a good place to start a conversation about online decorum.
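For the curious, here is a rough sketch of how a Tune-style filter could use Perspective's public `comments:analyze` endpoint to score comments and hide the noisiest ones. This is not Tune's actual code: the API key placeholder, the "quiet"/"medium"/"blaring" thresholds, and the filtering logic are illustrative assumptions.

```python
import requests

# Hypothetical placeholder; a real key is issued through Google Cloud.
API_KEY = "YOUR_PERSPECTIVE_API_KEY"
URL = f"https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key={API_KEY}"


def toxicity_score(comment_text: str) -> float:
    """Ask Perspective for a 0-1 toxicity score for a single comment."""
    payload = {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    data = response.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


# A Tune-like "volume" filter: assumed threshold values, not the extension's own.
VOLUME_THRESHOLDS = {"quiet": 0.3, "medium": 0.6, "blaring": 0.9}


def visible_comments(comments: list[str], volume: str = "quiet") -> list[str]:
    """Keep only comments whose toxicity score falls under the chosen volume."""
    threshold = VOLUME_THRESHOLDS[volume]
    return [c for c in comments if toxicity_score(c) <= threshold]
```

In this sketch, turning the "volume" up simply raises the toxicity threshold, so more abrasive comments make it through; "zen mode" would be the degenerate case of filtering everything out.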
Source: The Verge