Deepfake pics and videos set off Facebook’s fake news detector
Facebook will officially begin checking photos and videos for authenticity as part of an expanding effort to stamp out fake news, the company said last week.
Facebook has already responded to the fake news epidemic by checking articles that people post to its social media service for authenticity. To do this, it works with a range of third-party fact-checking companies to review and rate content accuracy.
A picture’s worth a thousand words, though, and Facebook was always going to have to tackle fake news images eventually. In a post to its newsroom site on Thursday, it said:
To date, most of our fact-checking partners have focused on reviewing articles. However, we have also been actively working to build new technology and partnerships so that we can tackle other forms of misinformation. Today, we’re expanding fact-checking for photos and videos to all of our 27 partners in 17 countries around the world (and are regularly on-boarding new fact-checking partners). This will help us identify and take action against more types of misinformation, faster.
Facebook, which has been rolling out photo- and video-based fact-checking since March, said there are three main types of fake visual news. The first is fabrication, in which someone forges an image in Photoshop or produces a deepfake video. One example is a photo from September 2017 that appeared to show a Seattle Seahawks player burning a US flag; the original image, of a post-game celebration, had been doctored to insert the flag.
The next category is images that are taken out of context. For example, in 2013, a popular photograph on Facebook purportedly showed Raoni Metuktire, chief of the Brazilian Kayapó tribe, in tears after the government announced a license to build a hydroelectric dam. In fact, he was sobbing because he had been reunited with a family member.
The third category superimposes false text or audio claims over photographs. In Facebook’s example, a fake news outlet called ‘BBC News Hub’ superimposed unsubstantiated comments on a photo of Indian Prime Minister Narendra Modi, claiming that he had been lining his own pockets with public resources and was “The 7th Most Corrupted Prime Minister in the World 2018” (sic).
Facebook’s image-checking system uses machine learning to analyse various signals around an image, including feedback from Facebook users. Flagged images go to the specialist fact-checkers, who then use tools such as reverse image search and image metadata; the latter can tell them when and where a photo or video was taken. They will also use their own research chops to verify the image against other information from academics and government agencies.
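To make the metadata step concrete, here’s a minimal Python sketch of pulling the “when and where” out of a photo’s EXIF data, assuming a recent release of the Pillow library (the file name is hypothetical). Keep in mind that EXIF is trivially stripped or forged, so for real fact-checkers it’s a starting point, not proof:

```python
# A minimal sketch, assuming a recent Pillow release (pip install Pillow).
# EXIF data is easy to strip or forge, so treat the output as a lead only.
from PIL import Image
from PIL.ExifTags import GPSTAGS

DATETIME_TAG = 0x0132  # standard EXIF "DateTime" tag
GPS_IFD_TAG = 0x8825   # standard EXIF pointer to the GPS sub-directory

def when_and_where(path):
    exif = Image.open(path).getexif()
    taken = exif.get(DATETIME_TAG)       # e.g. "2017:09:24 16:05:01"
    gps_raw = exif.get_ifd(GPS_IFD_TAG)  # empty mapping if no GPS data
    gps = {GPSTAGS.get(tag, tag): val for tag, val in gps_raw.items()}
    return taken, gps

taken, gps = when_and_where("suspect_photo.jpg")  # hypothetical file
print("Taken:", taken or "no timestamp embedded")
print("GPS:", gps or "no location embedded")
```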
The system also uses optical character recognition (OCR) to ‘read’ text from photos and compare it with the text of headlines elsewhere. The company said it is also testing new techniques to detect whether a photo or video has been tampered with.
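Facebook hasn’t published its pipeline, but the OCR-and-compare idea can be sketched with off-the-shelf parts. The snippet below uses Tesseract via pytesseract (both assumed installed) and Python’s difflib to score extracted text against a hypothetical corpus of already-debunked headlines; the threshold and the corpus are illustrative assumptions, not Facebook’s actual method:

```python
# A hedged sketch: OCR the image, then fuzzily match each line of the
# extracted text against known-false headlines. Requires the Tesseract
# binary plus: pip install pytesseract pillow
from difflib import SequenceMatcher

import pytesseract
from PIL import Image

KNOWN_FALSE_CLAIMS = [  # hypothetical corpus of debunked headlines
    "the 7th most corrupted prime minister in the world 2018",
]

def extract_and_match(image_path, threshold=0.8):
    text = pytesseract.image_to_string(Image.open(image_path))
    for line in (l.strip().lower() for l in text.splitlines() if l.strip()):
        for claim in KNOWN_FALSE_CLAIMS:
            score = SequenceMatcher(None, line, claim).ratio()
            if score >= threshold:
                return claim, score  # likely match with a debunked claim
    return None

hit = extract_and_match("viral_meme.jpg")  # hypothetical file
if hit:
    print("Resembles a debunked claim:", hit)
```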
Facebook’s announcement came just one day after CEO Mark Zuckerberg posted a lengthy update outlining the company’s progress in stopping election tampering and its work to stamp out fake accounts and misinformation. He said:
When a post is flagged as potentially false or is going viral, we pass it to independent fact-checkers to review. All of the fact-checkers we use are certified by the non-partisan International Fact-Checking Network. Posts that are rated as false are demoted and lose on average 80% of their future views.
Facebook may have the best of intentions, but it has tangled with photo analysis in the past and failed. In September 2016, it was forced to back down after censoring a famous historical image of a naked nine-year-old Vietnamese girl fleeing a napalm attack. In that case, the company initially removed the photograph for violating its community standards.