Facebook Is Using AI to Remove Extremist Content
Artificial intelligence will be used to curb extremist recruitment on the platform.
Facebook’s AI is now helping the company function as an intelligence agency.
To stem online radicalization on its social network, Facebook announced on Thursday that it would be using artificial intelligence to remove extremist content.
The new strategy is a response to growing calls from politicians for greater accountability from tech firms. British Prime Minister Theresa May made such remarks following the terror attack in London that killed seven people, according to the New York Times.
AI will work in conjunction with human moderators, who will decide what to take down on a case-by-case basis. The system will first be used to identify content that violates Facebook's terms of service, including the gruesome photos and videos ISIS is infamous for, and to stop users from uploading it.
In a blog post, Facebook detailed how its artificial intelligence could be trained to flag key phrases used to support a terrorist group or express violent ideology.
The company hopes the language-identifying AI will eventually be able to notice when groups or individuals reappear with a new Facebook profile after being banned for terrorist planning or recruitment.