The Misinformation Superhighway
In the 1990s the internet was regularly called the Information Superhighway, most notably by then-Vice President Al Gore. The internet was discussed as a way to connect millions of people to one another and to anything they could imagine: shopping, encyclopedias, massive piles of data, even their friends and loved ones. While that still holds true today, some of the most popular corners of the internet have strayed from that vision. According to Statista, a company that specializes in consumer data, Facebook has over 2.7 billion monthly active users. A key component in keeping that number so high is user engagement, which can be defined as the interaction between a user and an application or platform. Facebook primarily profits by keeping its users engaged, then selling advertisements to businesses that want to get their products in front of those consumers. With its focus on driving engagement, Facebook appears to have lost sight of the negative effects that focus is having on its users.
One way users are being negatively affected is through misinformation. Misinformation has become a seemingly vital part of Facebook's ecosystem and a driving force in today's rampant polarization. A study by Avaaz, a non-profit organization that promotes global activism, shows that during the COVID-19 pandemic, misinformation posts on Facebook received nearly four times the engagement of similar posts by organizations such as the World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC). This amplified engagement has directly aided the spread of conspiracy theories and false data throughout the pandemic. In a joint survey conducted by the International Center for Journalists (ICFJ) and the Tow Center for Digital Journalism at Columbia University, journalists rated Facebook as the most prolific vector for disinformation, and four out of five respondents reported encountering disinformation at least once a week. Regular citizens were rated as the top source of disinformation, followed closely by elected officials and attention-seeking trolls. Misleading information has clearly become pervasive on Facebook and can be spread by just about anyone. Increasingly, the format and business model Facebook relies on has become problematic for its users: the truth is no longer the most appetizing content for Facebook or for creators on the platform, so creators spread misinformation more frequently, producing a misinformed and polarized public.
Facebook claims to have a solution in place that will help reduce the spread of this misinformation, most notably fact-checking posts and marking them with a warning label. However, as the Avaaz study shows, Facebook marked only 16% of the posts that contained misinformation; the other 84% that went through the fact-checking process were left unmarked. That is far too low a number, and while reaching 100% may not be practical, striving to invert those figures and label 84% would be a good start. Facebook's failure to label that 84% also exposes the bulk of misinformation to the implied truth effect. The implied truth effect occurs when only a subset of false content is labeled: the labels on that subset imply that all unlabeled content is truthful. In this case, the unlabeled 84% of misinformation carries an implicit stamp of legitimacy, leading users to believe it is genuine information.
Even if Facebook were able to capture and clearly mark all misinformation, this still may not be enough to curb the spread. A solution proposed in the Avaaz study is to show users a correction to any misinformation they have seen, which was shown to decrease those users' belief in the misinformation by an average of 50%. Using independent sources to fact-check and provide the corrections will be critical to this effort.
An additional measure Facebook can enact is to tweak its algorithm to demote false content. Once such content is identified, Facebook can warn its author that demotion will follow if the behavior continues. Demoting false content has been shown to decrease its reach by up to 80%. When offenses are repeated, Facebook has been known to remove the offenders from its platform altogether. As reported by Jack Nicas of The New York Times, Alex Jones was “deplatformed” in 2018 and saw traffic to his app and website cut nearly in half. While “deplatforming” does reduce the spread of misinformation and has direct negative consequences for the content creator, it also breeds mistrust in the platform itself and can spark even more misinformation.
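For readers unfamiliar with what “demoting” content means in practice, the minimal sketch below illustrates the general idea: a flagged post is not removed, but its ranking score is scaled down so far fewer people encounter it in their feeds. This is purely a hypothetical illustration, not Facebook's actual ranking system; the Post fields, the ranking_score function, and the 0.2 multiplier (chosen to mirror the “up to 80%” reach reduction mentioned above) are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    base_score: float               # engagement-driven ranking score (assumed)
    flagged_by_fact_checkers: bool  # set after independent fact-checking (assumed)

# Hypothetical multiplier mirroring the "up to 80%" reach reduction cited above.
DEMOTION_MULTIPLIER = 0.2

def ranking_score(post: Post) -> float:
    """Rank a post for the feed, scaling down anything flagged as false."""
    if post.flagged_by_fact_checkers:
        return post.base_score * DEMOTION_MULTIPLIER
    return post.base_score

feed = [
    Post("health_authority", base_score=40.0, flagged_by_fact_checkers=False),
    Post("repeat_offender", base_score=90.0, flagged_by_fact_checkers=True),
]

# The flagged post drops below the accurate one despite its higher raw engagement.
for post in sorted(feed, key=ranking_score, reverse=True):
    print(post.author, round(ranking_score(post), 1))
```

The point of the sketch is simply that demotion works on visibility rather than existence: the content remains on the platform, but the algorithm stops amplifying it.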
While the spread of misinformation is unlikely to end, platforms such as Facebook that enable its wide and rapid spread can do more to slow its virality. Today Facebook has a business interest in viral, high-engagement content, and that interest should be revisited, either by Facebook itself or by a regulatory body. Tech companies such as Facebook have been known to boast about their advances in AI and machine learning technologies; perhaps leveraging those advances could assist them in rooting out misinformation. Further, Facebook and other social media companies should seriously consider two mitigation tactics. First, rather than removing misinformation and its creators from the platform outright, Facebook should work to better identify and mark such content and notify its users about it and its creators. Second, Facebook should consider removing the Share button from any content published by repeat offenders of misinformation until they have a proven track record of reform. These actions, while likely not perfect or ideal, would inform the public and slow the spread of misinformation on social platforms. The world could then attempt a return to a more trustworthy internet rather than today's misinformation superhighway.