It’s 2020, which means that just like four years ago, we’re again thrust into a massive, messy morass of politics every time we dare to glance at the news or venture onto a social media platform. And just like in 2016, not all of the stories in that sea of information are entirely accurate: fake news abounds, and according to an MIT study from last year, it is not only more likely to go viral than its real counterpart, but a fake story can also reach 1,500 people six times faster than an accurate one.
If you haven’t yet heard the details of the terrifyingly true effect fake news had on the last presidential election, the essence is that groups based in foreign countries, most notably Russia, used social media platforms to spread fabricated news stories in an attempt to influence the 2016 election in Donald Trump’s favor. Specifically, a 400-employee “web brigade” called the Internet Research Agency, reportedly based in St. Petersburg, used Facebook and Twitter to do just that. Also during the lead-up to that election, Cambridge Analytica covertly gathered the personal information of 87 million Facebook users and used it to target specific people with fake content they would be more likely to believe. In the intervening years, as these scandals unraveled, researchers began to really look into the impact fake news has on both people’s beliefs and political campaigns.
One such study, published last year and conducted during a real political campaign, confirmed the potent effect of encountering fake news in the weeks leading up to a vote: during the week before Ireland’s 2018 referendum on abortion, study participants were shown real and fake news stories relating to scandals in each side’s campaign. Nearly 50% reported a memory of a fake scandal happening after being shown a fake news story about it; more than a third said they had a “specific” memory of it having happened. And people were more likely to “remember” fake scandals involving the campaign they opposed, especially if they had lower cognitive ability. Even warning participants afterwards that they might have been shown fake news did not eliminate the false memories. So it seems that even being aware of the phenomenon is not enough to prevent being fooled by a fake story, especially one you might want to believe.
In anticipation of this year’s election, social media companies have attempted to crack down on political fake news: Twitter has banned political ads, Google has limited them on YouTube (although I have personally seen at least a dozen Mike Bloomberg ads on it over the last month), and Facebook has rolled out new features for curbing the spread of false content. But it’s a bit like a game of whack-a-mole, and it’s almost impossible to catch everything. News stories aren’t the only things being faked, either: doctored videos, along with AI-generated ones known as deepfakes that show people saying things they never actually said, have sprouted up everywhere. For example, an edited video of Speaker of the House Nancy Pelosi that made her appear drunk went viral and caused right-wing media to question her ability to hold office. YouTube removed the video, but Facebook didn’t (though it eventually added a warning label).
Incidents like this raise questions about the extent to which social media companies are responsible for the content posted on their platforms. Where should the line be drawn on what gets taken down? When does the right to free speech protect fake news, and when is it harmful or defamatory enough to ethically require removal? And, with recent confirmation from U.S. intelligence that Russia is again meddling in this year’s election, to what extent are we personally responsible for the veracity of what we believe and what we share?