As one of the most influential social networks for sharing and shaping public opinion, if not the most influential, Facebook has enormous reach and, with it, enormous responsibility. But to fully understand its transition from social network to fact-checker, we have to take a step back and look at how it has evolved, how it currently handles misinformation and disinformation, and what lies ahead.
Facebook has evolved considerably from its early days as a college networking site. With the introduction of the News Feed in 2006, content could flow seamlessly among users, and with it, so could misinformation. The platform's first large-scale encounter with misinformation came in the early 2010s with the Arab Spring.
With the platform front and center as a catalyst of both change and disruption, it became easy to see how Facebook could be used for good as well as nefarious purposes, and how unchecked information could have real-world consequences.
The 2016 U.S. presidential election was a turning point for Facebook. From foreign interference to the spread of fake news to the alleged influence of Facebook's algorithms in shaping voter opinions, the platform became the object of intense scrutiny. That scrutiny also highlighted the urgent need for fact-checking mechanisms.
After 2016, Facebook learned from the experience and took a variety of steps to help curb the flow of misinformation and disinformation, including:
Third-Party Fact-Checker Partnerships - Facebook began working with third-party (human) fact-checkers to bring greater credibility to its efforts.
Improved Algorithms Against Possible Misinformation - Building on its past experience, the company updated its ranking algorithm to identify and curb the spread of misinformation: content flagged as false appears lower in the feed, while more accurate information appears higher (a simplified sketch follows this list).
Related Articles and User Information Flagging - Facebook released tools that allowed users to flag suspicious content. It also introduced Related Articles to broaden the perspectives that users were exposed to, helping to prevent the social media equivalent of “living in a bubble”.
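To make the demotion idea concrete, here is a minimal, hypothetical sketch of how flagged content might sink in a ranked feed. Facebook's actual ranking system is proprietary, so the Post class, the flat 0.2 penalty factor, and the scores below are illustrative assumptions, not the real implementation.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    base_score: float            # engagement-based relevance score (assumed)
    flagged_false: bool = False  # set when fact-checkers rate the post false

DEMOTION_FACTOR = 0.2  # assumed penalty; the real value is not public

def ranking_score(post: Post) -> float:
    """Score used to order the feed; flagged posts sink toward the bottom."""
    return post.base_score * (DEMOTION_FACTOR if post.flagged_false else 1.0)

feed = [
    Post("viral-but-false", base_score=0.9, flagged_false=True),
    Post("accurate-news", base_score=0.6),
    Post("local-update", base_score=0.4),
]

# The flagged post drops below lower-engagement but unflagged content.
for post in sorted(feed, key=ranking_score, reverse=True):
    print(post.post_id, round(ranking_score(post), 2))
```

Note that in this sketch flagged content is demoted rather than removed, mirroring the "appears lower in the feed" behavior described above.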
Although these steps were designed to present the platform in a more credible light, the 2018 Cambridge Analytica scandal, in which personal data was harvested and used to influence voter opinion, along with accusations of bias against its third-party fact-checkers, significantly hampered the company's attempts to portray itself as a reliable and reputable source of truth.
When the COVID-19 pandemic pushed misinformation and disinformation to a fever pitch, Facebook (whose parent company has since rebranded as Meta) set up a dedicated COVID-19 information center to provide users with detailed, verified information about the virus. It also expanded its fact-checking efforts to cover a wider range of regions and languages.
Facebook understands its role as a major source of information and a place where people gather. With this in mind, it has taken several steps to shore up its fact-checking capabilities, including:
Third-Party Partnerships - Facebook expanded its third-party fact-checking program to include organizations certified by the International Fact-Checking Network. These organizations review photos, videos and news stories and rate their accuracy.
User Reporting - The company expanded its user-reporting feature and actively encourages users to report suspicious stories. Reported stories can then be reviewed by fact-checkers.
AI and Algorithms - Leveraging AI, machine learning and sophisticated algorithms, Facebook can detect potential misinformation and disinformation and prioritize it for human review (a simplified sketch follows this list).
Transparency Tools - Through transparency tools, such as showing where a group is based and the "About This Article" feature, users can get deeper context about the sources they encounter.
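As a rough illustration of the AI-assisted triage mentioned above, the sketch below scores content for misinformation risk and queues the riskiest items for human review. The keyword-based score_misinfo_risk function and the REVIEW_THRESHOLD value are stand-in assumptions; a production system would use trained models rather than phrase matching.

```python
def score_misinfo_risk(text: str) -> float:
    """Stand-in for a trained classifier; returns a risk score in [0, 1]."""
    suspicious = ("miracle cure", "they don't want you to know", "100% proven")
    hits = sum(phrase in text.lower() for phrase in suspicious)
    return min(1.0, 0.3 * hits)

REVIEW_THRESHOLD = 0.3  # assumed cutoff for escalating to human reviewers

def build_review_queue(posts: list[str]) -> list[tuple[float, str]]:
    """Return (risk, post) pairs above the threshold, highest risk first."""
    scored = [(score_misinfo_risk(p), p) for p in posts]
    flagged = [(risk, p) for risk, p in scored if risk >= REVIEW_THRESHOLD]
    return sorted(flagged, key=lambda pair: pair[0], reverse=True)

posts = [
    "Local bakery opens second location downtown",
    "This miracle cure is 100% proven and they don't want you to know!",
    "Doctors recommend this miracle cure",
]
for risk, post in build_review_queue(posts):
    print(f"{risk:.1f}  {post}")
```

Prioritizing by risk means the limited pool of human fact-checkers spends its time on the content most likely to mislead, rather than reviewing posts in arrival order.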
Have these efforts paid off? With ongoing investment in fact-checking and transparency, users are becoming more informed about the news and information they consume, and the work of fact-checkers has helped stem the tide of fake news and false information. The efforts are not without criticism, however. Some argue that the platform stifles certain voices, or that the fact-checkers themselves are biased. The line between fighting misinformation and encroaching on freedom of speech is a blurry one, and more work is needed.
Beyond potential human bias among fact-checkers and accusations that Facebook's own algorithms favor certain viewpoints over others, the company faces further fact-checking challenges, including:
The Sheer Volume and Scale of Information Being Shared - With billions of pieces of content shared on Facebook every single day, it's impossible to review every item. That means some disinformation and misinformation can still slip through (and spread).
Cultural and Linguistic Differences - A story that reads as factual in one cultural or linguistic context can be seen very differently in another, and coverage varies sharply by language. For example, during the 2020 U.S. election, political misinformation spread more widely, and for far longer, when shared in Spanish than in English. According to The Guardian, around 70% of misinformation shared in English is flagged with a warning, compared with only about 30% in Spanish, even though Spanish is one of the most widely spoken languages in the world.
The Speed of Spread - Misinformation travels fast, which means fact-checkers (both human and AI) need to work quickly. Because thorough verification and research take time, they often can't keep up with the sheer amount of content out there.
The volume of information and the pace of technological change aren't slowing down anytime soon, which makes Facebook's role in safeguarding truth even more crucial. There are several paths forward at this point, most notably building on what the company has already started. That means:
Greater Fact-Checking Collaboration - By expanding its network of third-party fact-checkers, Facebook can improve its fact-checking accuracy and continue to build trust and credibility with its users.
More Emphasis on User Education - By educating users about misinformation and encouraging critical thinking, the platform can cultivate a more informed user base.
Ongoing Evaluation and Audits - The company needs to continually assess and course-correct the information it fact-checks, the technology it uses, and the people and organizations it works with. By continually auditing its platform and processes, it can remain effective even as new challenges crop up.
The bottom line is this: Facebook has acknowledged its responsibility as a source of information in the digital age, and although its relationship with fact-checking has been a rocky one, its initiatives are still worth noting. As platforms like Facebook continue to grow, they play a vital role in keeping truth and accuracy at the forefront of the conversations we have, both online and offline. But as with any platform or social network, it's important to remember that the responsibility doesn't lie with Facebook alone.
Part of the responsibility falls on us as users and digital consumers of information: learning how to navigate what we see, responding to it accordingly, and developing the critical thinking and judgment skills that let us recognize misinformation and disinformation for what they are.