By Jeremy Kahn
In addition to testing American democracy, November’s election and the subsequent storming of the U.S. Capitol put social media to the test. Facebook and its rivals have spent years creating technology to combat the spread of disinformation, violent rhetoric, and hate speech. By some measures, the systems performed better than ever, filtering out hundreds of millions of inflammatory posts. But ultimately the technology fell short, allowing many similar posts to slip through.
In the days leading up to the election, unsubstantiated claims of widespread voting irregularities were the most shared content on Facebook, according to data analytics company CrowdTangle. At the top of the list were then-President Donald Trump’s posts falsely claiming there had been thousands of “fake votes” in Nevada and that he had won Georgia. Meanwhile, the top news stories on Facebook preceding the election came from far-right news sites such as Breitbart and Newsmax that played up specious voter fraud claims. Such falsehoods set the stage for the storming of the Capitol.
No company has been as vocal a champion of using artificial intelligence to police content as Facebook. CEO Mark Zuckerberg has repeatedly said, as he did in 2018 congressional testimony, that “over the long term, building A.I. tools is going to be the scalable way to identify and root out most of this harmful content.”