
Here's a look at the anti-terrorist policies of Facebook and Twitter

  • Social networks are used to propagate extremist views. 
  • Facebook and Twitter are often blamed for failing to curb such content.

 

Facebook and Twitter's anti-terrorist policies

In 2015, a violent extremist couple went on a mass shooting spree in San Bernardino, killing 14 and injuring over 20 people. Investigators later found that the couple had been inspired by Islamic terrorists and terrorist organizations.

Now, families of three of the victims have filed a lawsuit blaming Google, Facebook and Twitter for providing platforms that the terrorist group ISIS recklessly uses to spread extremist propaganda and attract new recruits. In the past, Facebook has faced a $1 billion lawsuit for allegedly aiding Hamas.

It's debatable who is to blame. During a crisis, social media can be a boon for getting the word out faster, but there is no denying that it can be exploited by malicious minds too. For instance, the terrorist group al-Shabab is known to have live-tweeted during the Nairobi mall attack back in 2013.

Over the years, social networks have been tightening their rules to curb such misuse. Here's a quick look at how leading networks like Facebook and Twitter are strengthening their anti-terrorist policies.

Facebook

Last year, while disabling several accounts after terrorist Burhan Wani's death, Facebook released a statement to several media houses stating, "Our Community Standards prohibit content that praises or supports terrorists, terrorist organisations or terrorism, and we remove it as soon as we're made aware of it. We welcome discussion on these subjects, but any terrorist content has to be clearly put in a context which condemns these organisations or their violent activities."

The Community Standards clearly state that Facebook removes hate speech, which includes content that directly attacks people based on their:

  • race,
  • ethnicity,
  • national origin,
  • religious affiliation,
  • sexual orientation,
  • sex, gender or gender identity, or
  • serious disabilities or diseases.

"Organisations and people dedicated to promoting hatred against these protected groups are not allowed a presence on Facebook. As with all of our standards, we rely on our community to report this content to us."

Moreover, Facebook can legally block an account without owing the user an explanation.

Twitter

Twitter has been taking active steps to curb hateful and extremist content. It has set up a large team that monitors such content and claims to have deleted 125,000 Islamic State accounts as of last year. The team is said to use human judgement coupled with technology to sift through content. If a tweet is abusive, you can report it.

Twitter's rules clearly state that accounts (and related accounts) engaging in activities such as harassment, violent threats and hateful conduct may be temporarily locked or even permanently suspended, for instance if someone makes threats of violence or promotes violence, including threatening or promoting terrorism.

The rules also state: "You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease."

However, it has also made some errors in enforcement.

While the companies have written down their rules, enforcement hasn't been effective. Moreover, these are new-age problems that cannot be solved with a set of rules alone.
