Facebook deploys AI to fight terrorism on its network

Randall Craig
June 16, 2017

Facebook has a terrorism problem, and it's vowing to fix it. The company says it removes terrorists and posts that support terrorism whenever it becomes aware of them. "When we receive reports of potential terrorism posts, we review those reports urgently and with scrutiny." In rare cases, when it uncovers evidence of imminent harm, it promptly informs the authorities.

"We want to answer those questions head on," the company said.

According to Facebook, the company finds most of the removed terrorist content itself.

Facebook has revealed it is using artificial intelligence in its ongoing fight to prevent terrorist propaganda from being disseminated on its platform.

When a user uploads an image or a video, Facebook's AI can check whether it matches "a known terrorism photo or video." Some proposed measures would hold companies legally accountable for the material posted on their sites. "We're also learning over time, and sometimes we get it wrong," Elliot Schrage, Facebook's VP for public policy and communications, wrote in a blog post.
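Matching uploads against known content is typically done by comparing fingerprints of files to a database of fingerprints of previously removed material. The sketch below illustrates the idea under stated assumptions: the hash set and function names are hypothetical, and a cryptographic hash stands in for the perceptual hashes real systems use to survive re-encoding and cropping. It is not Facebook's actual implementation.

```python
import hashlib

# Illustrative set of fingerprints of previously removed images.
# Real systems use perceptual hashes; SHA-256 keeps this sketch simple.
KNOWN_HASHES: set[str] = set()


def fingerprint(image_bytes: bytes) -> str:
    """Return a hex-digest fingerprint of the uploaded bytes."""
    return hashlib.sha256(image_bytes).hexdigest()


def register_known_content(image_bytes: bytes) -> None:
    """Add a removed item's fingerprint to the match database."""
    KNOWN_HASHES.add(fingerprint(image_bytes))


def matches_known_content(image_bytes: bytes) -> bool:
    """True if the upload's fingerprint matches known removed content."""
    return fingerprint(image_bytes) in KNOWN_HASHES
```

Because exact cryptographic hashes break under any re-encoding, production systems rely on perceptual hashing so that near-duplicates of a flagged image still match.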

Facebook restated its stance against ISIS and Al Qaeda by offering transparency on how it handles content that may support terrorism, attempt to recruit from the platform, or spread terrorist propaganda. The social network is also using software to try to identify terrorism-focused "clusters" of posts, pages, or profiles.

This will involve looking for signals such as whether an account is friends with a high number of accounts that have been disabled for supporting terrorism.
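The friendship signal described above can be pictured as a simple ratio check. The sketch below is an illustrative assumption, not Facebook's system: the threshold and function names are invented, and a real pipeline would combine many such signals rather than act on one.

```python
def disabled_friend_ratio(friend_ids: list[str],
                          disabled_ids: set[str]) -> float:
    """Fraction of an account's friends already disabled
    for supporting terrorism."""
    if not friend_ids:
        return 0.0
    hits = sum(1 for f in friend_ids if f in disabled_ids)
    return hits / len(friend_ids)


def flag_for_review(friend_ids: list[str],
                    disabled_ids: set[str],
                    threshold: float = 0.3) -> bool:
    # Threshold is a hypothetical value; real systems weigh
    # many signals together before surfacing an account.
    return disabled_friend_ratio(friend_ids, disabled_ids) >= threshold
```

A flagged account would go to human review rather than automatic removal, consistent with the moderation process the article describes below.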


The company also said it is taking strong new measures to sniff out fake accounts created by recidivist offenders.

The statement from the company comes the same week the leaders of France and the United Kingdom said they would push for new laws in Europe to fine companies that don't remove such material promptly.

It's also collaborating with fellow technology companies and consulting with researchers to keep up with the ever-changing social media tactics of the Islamic State and other terror groups. "We agree with those who say that social media should not be a place where terrorists have a voice. Given the limited data some of our apps collect as part of their service, the ability to share data across the whole family is indispensable to our efforts to keep all our platforms safe", the company said.

Human moderators will in turn ensure that the AI doesn't accidentally sweep up and eliminate legitimate speech.

The technology is the same as that used to block child pornography on the website, but because AI and algorithms are not yet as good as people at understanding the nuances of content and language, the website still needs human reviewers as well. For example, an image of an ISIS flag could be used in propaganda both for and against the terrorist organization, a case Facebook noted remains a challenge.

This team of specialists has "significantly grown" over the past year, according to a Facebook blog post Thursday detailing its efforts to crack down on terrorists and their posts. The company now has a 150-person team dedicated exclusively to counterterrorism efforts, which includes academics, former law enforcement officials, and engineers, along with a secondary team that can respond to urgent police requests.
