In VCE’s weekly news profile from February 10, 2020, one of my fellow editors explored the world of misinformation in the health industry – an incredibly prescient article. Now, just five months later, the world has been plunged into the COVID-19 pandemic, and medical misinformation runs even more dangerously rampant. This raises the question: do the platforms that host such information have a responsibility to regulate it?
On July 27th, a video began to spread like wildfire on both Facebook and Twitter, depicting a group of doctors in white lab coats holding a press conference in which they touted the curative properties of hydroxychloroquine for preventing and treating COVID-19. The video, originally posted on the right-wing news site Breitbart, migrated to social media and was shared by many high-profile public figures, including President Donald Trump and his son, Donald Jr.
The video itself was filled with blatant and often dangerous misinformation, including the claims that hydroxychloroquine can cure or prevent COVID-19 (there is no scientific evidence of this) and that hydroxychloroquine is safer than common over-the-counter drugs (it has many serious side effects, which is why it is only available by prescription). Most of the doctors in the video have a history of pushing hydroxychloroquine as a cure for COVID-19, and many have encouraged rolling back safety measures, arguing that the disease is not serious enough to merit stay-at-home orders or the closing of schools.
Although both Facebook and Twitter have added clauses to their terms of service banning misinformation about COVID-19, the video managed to accumulate over 17 million views across various platforms before Facebook, then Twitter and YouTube, took it down. Breitbart’s own Twitter account, along with some accounts that shared the video, including Donald Trump Jr.’s, had features temporarily restricted for violating the rules.
Facebook says it removed over 7 million pieces of COVID-19 misinformation from the site between April and June, but critics have questioned the several-hour response time given how quickly this video went viral. The social media giant has stated that it is investigating why the video took so long to be flagged, but that the delay was not related to Facebook’s newsworthiness policy.
Facebook and Twitter have come under fire from both directions, criticized both for taking too long to remove the video and for removing it at all. At what point does a claim become misinformation? Do social media sites have a responsibility to censor misinformation, and if so, who decides what qualifies? The coronavirus pandemic may represent an extreme situation in which misinformation leads directly to lives lost, but should these policies continue past the age of COVID?