How Facebook Flags Terrorist Content With Machine Learning | Social Media
For years, content that promotes terrorism has thrived on social media platforms like Facebook and Twitter.
Fighting it is an uphill battle that has forced tech companies to open war rooms and hire new specialists. One solution that companies including Facebook are now betting on: machine learning. In a recent blog post, the social giant detailed the way it’s using the technology to identify content that “may signal support for ISIS or al-Qaeda.”
Bot Moderators
Facebook engineered an algorithm that assigns each post a score based on the likelihood that it violates the company’s counterterrorism policies. If that score crosses a certain threshold, the post is removed immediately, without human moderation.
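The scoring-and-threshold flow described above can be sketched in a few lines. Everything here is illustrative: the threshold value, the function names, and the toy keyword-based scorer are assumptions standing in for Facebook's undisclosed classifier.

```python
# Hypothetical sketch of threshold-based auto-removal as described in the
# article: each post gets a policy-violation score, and posts scoring above
# a threshold are removed with no human review. The scorer below is a toy
# stand-in; the real system uses a trained machine-learning model.

AUTO_REMOVE_THRESHOLD = 0.95  # assumed value; the real threshold is not public


def violation_score(post_text: str) -> float:
    """Toy stand-in for a classifier's violation probability (0.0 to 1.0)."""
    flagged_terms = {"propaganda", "recruit"}  # illustrative only
    hits = sum(term in post_text.lower() for term in flagged_terms)
    return min(1.0, hits / len(flagged_terms))


def moderate(post_text: str) -> str:
    """Route a post based on its score, mirroring the article's description."""
    score = violation_score(post_text)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "removed"       # high confidence: removed without human review
    if score >= 0.5:
        return "human_review"  # uncertain: queued for human moderators
    return "allowed"
```

The key design point the article highlights is the top branch: only posts the model is highly confident about skip human moderators, while borderline scores would still go to a review queue.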