YouTube has long struggled with how to handle videos that encourage violent behavior and hateful ideologies. The main problem is that most of these videos don’t actually break any of the platform’s guidelines. Moreover, if YouTube decided to delete certain videos and not others, the move could backfire and hurt the platform’s image: some would see it as censorship of users’ right to post their own videos. So, on Sunday, Google, the company that owns YouTube, announced a series of new policies to help fight these kinds of videos.
New policies against certain videos
The policies appeared in a blog post by Kent Walker, Google’s general counsel and senior vice president, titled “Four steps we’re taking today to fight online terror”. The first step is to improve the identification and removal of videos that specifically encourage terrorism and terrorist acts. However, this is easier said than done. According to Walker, since 2012 users have uploaded an hour of content to YouTube every second, and according to a report from AdWeek, YouTube accumulates a century’s worth of video every ten days. Walker also noted that the job is difficult because context matters: a news broadcast informing people about a terror attack cannot be treated as encouragement, so the context in which a video is uploaded can change how the rules apply.
Walker also said that adding artificial intelligence to the detection software might be the key to identifying and deleting those inappropriate videos. The second step is to add more members to YouTube’s Trusted Flagger Program, made up of users who report inappropriate videos directly to the company.
Fighting against online terrorism
The third step would focus on videos that don’t directly break any rules but contain inflammatory religious or supremacist content. The fourth and final step would be to use targeted online advertising to find potential ISIS recruits and then redirect them toward anti-terrorism videos that could possibly change their minds. These guidelines come as many other tech companies struggle with criticism that they provide a welcoming environment for terrorists.
Image source: Wikimedia