Facebook’s artificial intelligence is becoming increasingly adept at keeping terrorist content off the social network, the company has said.
Today, 99 percent of Islamic State and Al Qaeda-related content Facebook removes is detected by the company’s AI before any user flags it, Monika Bickert, Facebook’s head of global policy management, and Brian Fishman, head of counter-terrorism policy, said Wednesday. They said in some cases the software was able to block the content from ever being posted in the first place.
The executives cautioned, however, that Facebook’s automated systems remain imperfect. “A system designed to find content from one terrorist group may not work for another because of language and stylistic differences in their propaganda,” they said.
As a result, Facebook said it had concentrated its efforts on IS and Al Qaeda, “the terrorist groups that pose the biggest threat globally.” Facebook hopes to eventually expand its automated tools to target content from other, more regionally focused terrorist organizations too, according to the executives.
Facebook has faced increased criticism around the globe from governments concerned that the social network provides a platform for terrorist propaganda and recruitment. British Prime Minister Theresa May has been particularly vocal in her attacks on social media companies and has sought to rally the leaders of other democracies to impose greater regulation on these tech businesses.
Partly in response, Facebook has joined forces with Microsoft Corp., Twitter Inc. and Google’s YouTube, a division of Alphabet Inc., to form the Global Internet Forum to Counter Terrorism. The group helps companies coordinate their efforts to combat terrorist content and to share insights with smaller technology companies. Terrorist groups are increasingly turning to newer, smaller social networks after being kicked off sites such as Facebook, Twitter and YouTube.
The forum also provides a venue for the tech companies to have their views heard by often hostile governments.
Bickert and Fishman said humans were still needed to curate the databases of terrorist posts, videos and photographs used to train Facebook’s AI software. Human experts were also needed to review the decisions being made by the automated tools, they said.
Facebook is hiring more linguists, academics, former members of law enforcement and former intelligence analysts to perform these roles.
Peter Neumann, who runs the International Centre for the Study of Radicalisation and Political Violence at King’s College London, said there has been a dramatic decline in terrorist content on major social media platforms in recent years. He said that artificial intelligence had a lot of potential, but he cautioned that “AI is only as good as the data you feed the machines, and human reviewers are still important to recognize nuance and context.”
He said that terrorist groups were increasingly switching to smaller platforms, such as the messaging service Telegram, that use end-to-end encryption and are more difficult to police.
Separately, Facebook said it had partnered with U.K. charity Faith Associates to create a guide aimed at helping the British Muslim community tackle extremism, hatred and bigotry online. The guide suggests techniques for Facebook users to promote positive images of Muslims on the social network. At the same time, it encourages people to report users sharing terrorist-inspired content or encouraging others to join extremist groups.
The guide is part of Facebook’s Online Civil Courage Initiative, a program aimed at countering extremist propaganda, which it launched in June with the U.K.’s Institute for Strategic Dialogue, a think tank.