Facebook has a plan to protect the U.S. midterms. Is it enough?
The social network would like to avoid an election disaster like the one that hit it in 2016.
Two weeks ago, on a hastily scheduled conference call with journalists, Facebook executives announced what many felt was inevitable: Someone, perhaps Russia, was once again trying to use the social network to “sow division” among U.S. voters, this time before November’s midterm elections.
The “bad actors,” as Facebook called them, created bogus events; posted about race issues, fascism, and President Donald Trump; and paid Facebook to promote their messages. “Some of the activity is consistent with what we saw from the IRA before and after the 2016 elections,” Facebook’s head of cybersecurity policy wrote in a blog post, referring to the Internet Research Agency, a Kremlin-backed online troll farm.
That activity, of course, may have altered a U.S. election, and sent Facebook and CEO Mark Zuckerberg down a path of self-reflection that has changed Facebook’s strategy, as well as its mission.
There was one big difference, though, between the disinformation campaign Facebook announced in July and the Russian campaign from 2016. This time, Facebook caught the bad guys — at least some of them — before the election.
It was a double-edged revelation. On one hand, Facebook’s safeguards against another election interference campaign appeared to work. On the other, it was a sign that Facebook will once again be a target — or perhaps a weapon — for people who want to divide American voters ahead of the 2018 midterms and destabilize support for government officials.
Harvard lecturer Eric Rosenbach is bracing for the latter. As the former assistant secretary of defense for Homeland Defense and Global Security, and the former chief of staff for Secretary of Defense Ash Carter, Rosenbach knows how foreign threats like to operate.
“My greatest fear, and I hope I’m wrong, is that the Russians, or maybe it’s the Iranians — they’ve already started working on these things, they’ve already conducted penetrations of campaigns, and they’re getting set to go to the next stage of conducting an infowar at the time that will most hurt the candidates that are in key states,” he said in an interview with Recode. “A week or a couple days before the actual election day and midterms, [they’ll] carpet bomb the internet using Facebook and Twitter.”
Will Facebook be ready? The company says it’s moving quickly on its plan — which includes a physical war room to monitor the elections from its corporate headquarters in Menlo Park, Calif. — and has promised to double the number of safety and security employees on staff to 20,000 people. Facebook says it’s spending so much money monitoring political ads that it will actually hurt profits.
But Facebook is also running out of time to execute its plan. With the midterms less than three months away, it’s almost go time.
“When over half of Americans get their news from Facebook, it’s pretty damn important,” said Sen. Mark Warner, D-Va., who has been one of the country’s most outspoken critics of Facebook’s role in elections. “We’re starting to see the enormous success of the Trump campaign in using social media. I think it’s changing the paradigm.”
Facebook’s plan
You can boil Facebook’s election plan down to three main efforts:
- It wants to find and delete “fake” or “inauthentic” accounts.
- It wants to find and diminish the spread of so-called fake news.
- It wants to make it harder for outsiders to buy ads that promote candidates or important election issues.
Facebook’s top priority is finding and deleting “fake accounts” — either automated bots, or Pages and profiles operated by a real person pretending to be someone else — which are usually responsible for Facebook’s other major problems, like disinformation campaigns and misleading ads.
“By far, the most important thing is going after fake accounts,” COO Sheryl Sandberg told a roomful of journalists back in June. “If you look at the things that happened in the [Russian] IRA ads on our platform in 2016, all of it was done through fake accounts.”
Fake accounts are easy for Facebook to quantify, and make for nice headlines. Facebook took down almost 1.3 billion fake accounts in the last six months alone, and has routinely highlighted the number of fake accounts it takes down ahead of foreign elections. (For context, Facebook has about 2.2 billion monthly active users, and the company has estimated in the past that 3-4 percent of those are “false accounts.”) Before France’s presidential election in early 2017, Facebook deleted 30,000 fake accounts. It took down “tens of thousands” of accounts before Germany’s national elections last fall.
But finding the kinds of sophisticated networks trying to influence elections is much tougher. Facebook execs say they’re getting better at spotting them, in part because the company knows the types of behavior those accounts exhibit. The Russian IRA accounts that used Facebook to try to influence the 2016 presidential election also provided leads to other networks.
“Those kinds of investigations actually launch a whole bunch of other investigations,” said Samidh Chakrabarti, who leads product for all of Facebook’s election-related efforts. “What are all of the Pages that those accounts [operated]? And who are all the other admins of those Pages?”
“That’s what we call a ‘fan out.’ You basically start from a seed and you see who are potential co-conspirators.”
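Facebook hasn’t published the tooling behind these investigations, but the “fan out” Chakrabarti describes maps naturally onto a breadth-first expansion over the graph of accounts and the Pages they administer. Here is a minimal sketch of that idea in Python; the lookup functions (`get_pages_administered`, `get_admins`) and the depth limit are hypothetical stand-ins, not Facebook’s actual internals.

```python
from collections import deque

def fan_out(seed_accounts, get_pages_administered, get_admins, max_depth=2):
    """Expand from seed accounts to potential co-conspirators.

    get_pages_administered(account) -> Pages the account administers
    get_admins(page) -> accounts that administer the Page
    Both lookups are hypothetical stand-ins for internal graph queries.
    """
    seen_accounts = set(seed_accounts)
    seen_pages = set()
    frontier = deque((account, 0) for account in seed_accounts)

    while frontier:
        account, depth = frontier.popleft()
        if depth >= max_depth:
            continue
        # Hop from an account to the Pages it runs ...
        for page in get_pages_administered(account):
            if page in seen_pages:
                continue
            seen_pages.add(page)
            # ... and from each Page to its other admins.
            for admin in get_admins(page):
                if admin not in seen_accounts:
                    seen_accounts.add(admin)
                    frontier.append((admin, depth + 1))

    # Everything beyond the seeds is a lead, not a verdict; candidates
    # would still need human review before any takedown.
    return seen_accounts - set(seed_accounts), seen_pages
```

Each newly surfaced account can seed the next round, which is how one takedown “launches a whole bunch of other investigations.”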
Chakrabarti won’t say how many investigations Facebook has in the works, but says several are running at the same time. He also won’t say whether Facebook knows of any other coordinated misinformation campaigns. “We’re always looking” was all he would offer.
Facebook says these “bad actors” are getting more sophisticated now that the company knows what to look for. When Facebook announced it had found the coordinated disinformation campaign a few weeks back, it also confirmed that the perpetrators had tried to hide their locations using VPNs and had paid for ads through third parties. Facebook wouldn’t say who was behind the campaign, only that it was working with law enforcement and Congress to find out.
Working with government agencies is something Chakrabarti says Facebook is also doing much more of today than it did in 2016. “There are numerous leads that come from a lot of different places,” Chakrabarti said, though he wouldn’t get into details. “We have a lot of different lines of communication open with different agencies on this.”
Sen. Warner, who sits on the Senate Intelligence Committee, says Facebook’s relationship with Congress has improved, though “grudgingly.”
“It was not until the summer of 2017, after trips out to the Valley, after sitting with Facebook leadership, that they started to come clean,” Warner said. “Have they gotten better since then? Yes. … But people just aren’t buying the ‘we do no evil’ and the self-righteousness of all the social media platforms.”
Facebook’s efforts to stop so-called fake news may be an even tougher challenge for the company, given how hard it is to separate black-and-white falsehoods from personal opinions. Its efforts to flag false news and work with outside fact-checkers have been well documented, but deciding how to respond to bad information has still caused the company headaches. (Remember Mark Zuckerberg’s comments about Holocaust deniers? Or the company’s response to Alex Jones and Infowars?)
One effort that may be working — or at least creating hurdles for potential bad guys — is Facebook’s updates around election advertising. Following the 2016 election, in which Russian trolls bought thousands of dollars’ worth of ads that reached millions of people, Facebook created a dashboard where users can browse all political ads that appear on Facebook’s services. The company also started requiring political advertisers to register with the company, a process that includes responding to a physical mailer Facebook uses to verify an advertiser’s address.
While the move was intended to keep foreign actors from advertising for U.S. candidates, it created a bit of a hiccup for legitimate campaigns as well. Brian Rose, who ran for Congress in a special election in Mississippi earlier this year, found out his Page wasn’t approved to run ads in late May, less than two weeks before his election. “This is a devastating blow,” he told The Verge. Rose lost in a landslide.
(Facebook, which first announced the new process in October of 2017, believes it gave people plenty of warning. The company has previously provided presidential campaigns with assistance in understanding how to use its products — the same kind of assistance Facebook says it would provide any major corporate advertiser — but it isn’t doing that for the midterms, a company spokesperson confirmed.)
Catherine Vaughan is the co-founder and CEO of Flippable, a political action committee to help put more Democrats into office at the state level. Sometimes Flippable runs ads, which meant that Vaughan needed to register with Facebook, a process that involved scanning her driver’s license and responding to Facebook’s physical mailer. “We didn’t have a lot of clarity as to what was required,” Vaughan said in an interview with Recode, adding that the PAC couldn’t run ads for several weeks. “I just wanted to make sure I was doing everything by the book, but I also didn’t want it to get like actually lost in the mail.”
Facebook is running out of time
Facebook’s Chakrabarti has been building tech products focused on civic engagement for years. Before joining Facebook in mid-2015, he worked down the road at Google, “organizing the world’s information about politics, elections, and government,” he wrote on LinkedIn.
The one challenge that comes with building products geared toward elections? You can’t finish them late.
“Time is always your enemy,” he said. “With most products you launch, you can always move the launch date. You can’t move an election date.”
“I feel like we have a good handle and a good plan for many of the problem types that we’re seeing. But whether we will get far enough, fast enough, is really the question,” he added. “It’s just a question of time. I would love if the U.S. midterms were in 2019.”
For all the powers Facebook has, slowing down time isn’t one of them, and the company is rapidly running out of it as it prepares for the midterms. There are still areas where Chakrabarti believes Facebook can improve.
One example: Facebook will proactively look for people who share misleading voting information, an issue in 2016, when users on Facebook and Twitter encouraged people to vote by text message, a system that does not exist. Chakrabarti also says Facebook needs to get better at catching misinformation that isn’t text. That means finding fake photos, videos and even audio that may circulate as the elections get closer.
Facebook is approaching fake photos and videos the same way it approaches fake stories: responding to flagged posts with the help of outside fact-checkers. The company uses a combination of human reviewers and machine learning to detect what’s fake, but eventually hopes to use machine learning to proactively find false photos and videos, too.
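Facebook hasn’t detailed how that detection pipeline works. One standard building block for catching re-circulated fakes (an illustrative assumption here, not a disclosed Facebook method) is perceptual hashing, which matches new uploads against images fact-checkers have already debunked, even after resizing or re-encoding. Below is a minimal sketch using the open-source `imagehash` library; the file names and distance threshold are hypothetical.

```python
import imagehash          # pip install imagehash
from PIL import Image     # pip install Pillow

# Perceptual hashes of images fact-checkers have already rated false.
# In practice this index would live in a database; an in-memory list
# is enough for illustration. (File names are hypothetical.)
KNOWN_FALSE_HASHES = [
    imagehash.phash(Image.open(path))
    for path in ["debunked_photo_1.jpg", "debunked_photo_2.jpg"]
]

def matches_known_fake(image_path, max_distance=8):
    """Return True if an upload is a near-duplicate of a debunked image.

    Perceptual hashes survive resizing, re-encoding and small edits,
    unlike cryptographic hashes, so variants of the same fake still match.
    """
    candidate = imagehash.phash(Image.open(image_path))
    return any(candidate - known <= max_distance  # Hamming distance
               for known in KNOWN_FALSE_HASHES)
```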
One other new approach Facebook will take this year: The company plans to set up an actual, physical war room at its headquarters around election time to monitor activity on the service in the days and weeks leading up to the midterms. The company has run digital war rooms in the past, but has never had a dedicated physical war room for a U.S. election.
“It’s going to look like a computer lab,” Chakrabarti said, with screens and computers monitoring, in real time, the metrics Facebook finds important, like user reports. Some of those metrics will even have alarms attached, to alert Facebook employees if there’s an unexpected dip or spike.
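Facebook hasn’t said how those alarms work under the hood. A simple way to flag an “unexpected dip or spike” in a monitored metric, say user reports per hour, is a z-score test against recent history; both the metric and the threshold below are illustrative assumptions, not disclosed details.

```python
from statistics import mean, stdev

def check_alarm(history, latest, z_threshold=4.0):
    """Flag an unexpected dip or spike in a monitored metric.

    history: recent metric values (e.g., user reports per hour)
    latest: the newest observation
    z_threshold: how many standard deviations counts as "unexpected"
    """
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return latest != mu  # flat baseline: any change is anomalous
    return abs(latest - mu) / sigma > z_threshold

# A sudden tripling of user reports trips the alarm; normal noise doesn't.
baseline = [100, 96, 104, 99, 101, 103, 98, 102, 97, 100]
assert check_alarm(baseline, 300)       # spike -> alarm
assert not check_alarm(baseline, 101)   # normal -> no alarm
```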
But more than the technology, Chakrabarti says the idea is to get people from all of Facebook’s important teams — engineering, data science, public policy — into the same room.
“The war room isn’t so much about the technology that’s there as it is about the process of having people across different functions … be able to diagnose and fix any sort of acute issues that we see,” he said.
Despite all of Facebook’s plans — and repeated promises by the company that it’s taking its role in American democracy seriously — not everyone is convinced.
“What happened to the United States and the election here in 2016 and Facebook’s role in it is one of the most serious national security issues the U.S. has suffered in the past couple of decades,” Rosenbach said. “Facebook is making some progress, but it’s clearly not at the rate that they should in order to address pretty serious issues of national security.”
Facebook, which at first downplayed its role in the 2016 election and then made repeated privacy and security blunders for much of the past 18 months, no longer gets the benefit of the doubt. The company is making changes and has preached repeatedly that this is a top priority internally, but Sen. Warner believes that shift has come only thanks to extreme pressure from politicians and the public. “I do believe we’ve seen a change in attitude [from Facebook],” Warner said. “[But] I think their progress has been incremental in many ways.”
Warner published a policy paper last month outlining a number of ways he thinks technology companies can improve their election monitoring. Included among his ideas: Alerts so that users know if they’ve received a message from a bot and confirmation that people who list a location on their profile actually live in that location. “I recognize none of this is easy and that none of these solutions are one hundred percent comprehensive,” Warner said. “But I think particularly with this upcoming election, they would be important.”
“We’re very serious about this,” CEO Mark Zuckerberg told Recode last month. “We know that we need to get this right. We take that responsibility very seriously.”
The tough part is, we probably won’t be able to measure Facebook’s success around the midterms until it’s too late.