Google, like all the other tech giants, is under pressure to help curb extremists' online presence. The company's response includes four steps to flag and remove pro-terrorism content on its platforms, especially YouTube. The first is to lean more heavily on machine learning tech that can automatically flag and take down terrorist videos while keeping up "innocently-posted" clips, such as news reports. Another is to build on its counter-radicalization efforts, showing anti-extremist ads to would-be terrorist recruits and pulling ads from extremist videos.
But artificial intelligence won't be enough, as Google itself acknowledges. The company intends to "greatly increase" the number of people in its YouTube Trusted Flagger program to help find terrorist material faster, and it's also working with anti-extremism groups to root out recruiting-oriented content. Google will even take on YouTube videos it sees as containing "inflammatory" religious or supremacist material: these will sit behind warnings and won't be eligible for ad revenue, recommendations or comments. Whether these measures will work is an entirely different story, but we appreciate the effort.