As one of the most influential social media companies when it comes to sharing and shaping public opinion, Facebook has enormous reach and, with it, enormous responsibility. But to fully understand why its parent company, Meta, decided to employ a Facebook fact checker on what was once a simple social networking site, we have to take a step back.
In this blog post, we're going to look at how the site has evolved, what lies ahead, and how Facebook has taken action on misinformation by employing independent fact-checkers. We'll also discuss what users can do to help combat misinformation on social media.
Mark Zuckerberg has done a lot to take Facebook from its early days as a college networking site to where it is now. Once a platform where students could send each other messages and keep in touch, it changed with the introduction of its News Feed in 2006. Now, content could seamlessly flow to and from users. But there was one problem: so could misinformation.
One of the most notable examples of Facebook's ability to spread newsworthy content on a large scale came in the early 2010s with the Arab Spring. With the platform front and center as a catalyst of both change and disruption, it was easy to see how Facebook could be used for good as well as for nefarious purposes, and how unchecked information and misleading content could have real-world consequences.
The 2016 U.S. Presidential Election was a turning point for Facebook. From foreign interference to the spread of fake news and other problematic content to the alleged influence of Facebook's algorithms in helping to shape voter opinions, it became the object of intense scrutiny. There was an upside to this scrutiny, though - it highlighted the urgent need for fact-checked content.
After 2016, Facebook learned from the experience and implemented a new approach to content enforcement. This involved a variety of steps to help curb the flow of misinformation and disinformation, most notably partnering with independent, third-party fact-checkers to review and rate content.
Although these steps were designed to cast the platform in a more credible light, the Cambridge Analytica scandal of 2018, in which personal data was used to influence voter opinion, along with accusations of bias against its third-party fact-checkers, significantly hampered the company's attempts to portray itself as a reliable and reputable source of truth.
In light of the COVID-19 pandemic and levels of misinformation and disinformation reaching a fever pitch, Facebook (whose parent company has since rebranded as Meta) set up a dedicated COVID-19 information center to provide users with detailed, verified information about the virus. It also expanded its fact-checking efforts to cover more regions and languages.
Facebook understands that it's more than just a social networking site. For many people, it's their go-to source for newsworthy content. So, to help avoid the spread of misinformation and ensure reputable, quality content, Facebook has taken several steps to help shore up its fact-checking capabilities.
This is how the program works:
The first step in the process is identifying potential misinformation, whether fact-checkers find it on their own or users flag it themselves. Flagged content is then passed on to a fact-checking team.
These independent fact-checkers then review the content for accuracy. They can do this in a variety of ways, from calling sources to checking claims against public data.
This is arguably one of the most important parts of the process. Meta labels fact-checked content and provides additional context where necessary. As an extra measure, it even notifies users who are about to share, or have already shared, the content that it has been rated.
Content found to contain misinformation faces penalties that limit its reach. For example, it appears lower in users' Facebook Feeds. Accounts that repeatedly post false content may also face monetization and advertising penalties.
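To make those four steps easier to picture, here is a minimal sketch in Python of how such a pipeline could fit together. To be clear, this is purely illustrative: the class names, ratings, and demotion values are hypothetical assumptions, not Meta's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    author: str
    text: str
    rating: Optional[str] = None        # e.g. "false", "partly_false"
    context_note: Optional[str] = None
    feed_rank_penalty: float = 0.0      # higher = shown lower in feeds

class FactCheckPipeline:
    """Hypothetical sketch of the four steps described above."""

    def identify(self, post: Post, user_flags: int) -> bool:
        # Step 1: surface potential misinformation, whether users
        # flag it or automated signals catch it (simplified here).
        return user_flags > 0 or "miracle cure" in post.text.lower()

    def review(self, post: Post) -> str:
        # Step 2: independent fact-checkers verify the claim against
        # sources and public data. Stubbed as a fixed rating here.
        return "false"

    def label(self, post: Post, rating: str) -> None:
        # Step 3: attach the rating plus extra context so users who
        # see or share the post can be notified.
        post.rating = rating
        post.context_note = "Reviewed by independent fact-checkers."

    def penalize(self, post: Post) -> None:
        # Step 4: demote rated content so it appears lower in feeds.
        if post.rating in ("false", "partly_false"):
            post.feed_rank_penalty = 0.9

pipeline = FactCheckPipeline()
post = Post(author="example_user", text="This miracle cure works!")
if pipeline.identify(post, user_flags=3):
    pipeline.label(post, pipeline.review(post))
    pipeline.penalize(post)
print(post)
```

In reality, of course, the review step is performed by human fact-checkers at independent organizations, not by code; the sketch only shows how the stages connect.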
So, has their approach to misinformation paid off? Well, with ongoing efforts to support fact-checking and transparency, it seems that users are starting to become more informed about the news and information they consume.
But that doesn't mean that Facebook hasn't been running into some challenges.
While they have been taking steps in the right direction, Facebook still faces some significant challenges when it comes to its fact-checking process.
Some people argue that the platform is stifling certain voices or that fact-checkers themselves are biased. The line between fighting misinformation and encroaching on freedom of speech is a blurry one, and more work is needed to walk it.
With BILLIONS of pieces of information shared on Facebook every single day, it's impossible for people to review every single piece of content. That means that some disinformation and misinformation can still slip by (and spread).
A factual story in one cultural or linguistic context can be seen very differently in another. For example, during the 2020 election, political misinformation spread more widely and for far longer when it was shared in Spanish rather than in English.
But why? Well, according to The Guardian, around 70% of misinformation shared in English is flagged with a warning, compared to only about 30% of misinformation shared in Spanish. Spanish is one of the most widely spoken languages in the world, so this is a huge oversight. Since tools like multilingual AI detectors do exist, Facebook needs to step it up if it's going to combat misinformation effectively.
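Since multilingual AI detectors come up here, here is a small hypothetical sketch of how automatic language detection could route posts into language-specific review queues. The `langdetect` library and its `detect()` function are real; the queues and routing logic are assumptions made purely for illustration.

```python
# pip install langdetect
from collections import defaultdict
from langdetect import detect, LangDetectException

# Hypothetical per-language review queues; in practice each would be
# staffed by fact-checkers fluent in that language.
review_queues: dict[str, list[str]] = defaultdict(list)

def route_for_review(post_text: str) -> str:
    """Detect the post's language and queue it for native-language review."""
    try:
        language = detect(post_text)   # returns a code like "en" or "es"
    except LangDetectException:
        language = "und"               # undetermined; send to a general queue
    review_queues[language].append(post_text)
    return language

route_for_review("Breaking: scientists confirm the moon is made of cheese.")
route_for_review("Última hora: una cura milagrosa que los médicos ocultan.")
print({lang: len(posts) for lang, posts in review_queues.items()})
```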
While Meta has penalties in place for those who share misinformation, they may not be enough of a deterrent. One study suggests that having a piece of content fact-checked or removed isn't enough to stop people from posting false news again.
Sure, Meta imposes more severe penalties on repeat offenders, but this doesn't completely solve the problem. Since misinformation spreads so quickly on social media, Facebook still needs to figure out how to effectively penalize even minor, isolated cases before they turn into major problems.
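To illustrate what escalating penalties might look like in practice, here is a hypothetical strike-based scheme in Python. The thresholds and penalty tiers are invented for the example; nothing here reflects how Meta actually tunes its enforcement.

```python
# Hypothetical escalating penalties for accounts that repeatedly
# share content rated false. Thresholds are illustrative only.
strikes: dict[str, int] = {}

def apply_penalty(account: str) -> str:
    """Record a strike and return the penalty for this offense."""
    strikes[account] = strikes.get(account, 0) + 1
    count = strikes[account]
    if count == 1:
        return "demote the offending post in feeds"
    if count <= 3:
        return "demote all of the account's posts"
    return "remove monetization and advertising privileges"

for _ in range(4):
    print(apply_penalty("repeat_offender"))
```

The gap the article points to sits in that first branch: a single demoted post is a light consequence, and by the time harsher tiers kick in, the misinformation may already have spread.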
The volume of information and the pace of technological development aren't slowing down anytime soon. That means Facebook's role in upholding the truth is more crucial than ever.
There are several paths forward at this point, most notably building on what the company has started: expanding fact-checking coverage across more regions and languages, strengthening penalties for repeat offenders, and investing in better detection tools.
Until Facebook steps up its fact-checking game, users should take steps to combat misinformation on their own. Forming an opinion based on inaccurate content can sometimes be harmless, but it can also be dangerous, so users of Facebook and other social media platforms need to protect themselves from the potential consequences.
There is so much to look at when scrolling through a Facebook feed that it can be tempting to just skim through content to get to the next post. You'll save some time, sure, but you may also be missing some important context.
So, the next time a piece of content catches your eye on Facebook, be sure to read the whole post before forming an opinion - and especially before sharing it with others.
If some content seems unlikely or completely out of left field, then consider the source. Is the individual or company behind the post trustworthy? Or are they known to have certain biases or agendas? If you suspect that it could be a result of the latter, then it's a good idea to verify this information against other sources.
You can consult other posts on Facebook, books, magazines, Google, or another search engine of your choice. Yes, it can take some time, but by sifting through other reputable sources, you should be able to get a better idea of whether or not you have some misinformation on your hands.
The easiest, quickest way to verify information on Facebook and beyond is to use a fact checker, like the one from Originality.AI. All you need to do is input the content to be checked, press the "Scan Now" button, and review the results.
Not only will the tool tell you whether the facts are potentially true or false, but it will also give you additional context and sources on the subject. This can help you find and understand the truth behind the content's claims, and make your own posts on the subject that much more honest and reputable.
The bottom line is this: Facebook has acknowledged its responsibility as a source of information and truth in the digital age. Although its relationship with fact-checking has been a rocky one, its initiatives are still worth noting. As platforms like Facebook continue to grow and spread, they play a vital role in ensuring that truth and accuracy are at the forefront of the conversations we have, both online and offline. But as with any platform or social network, it's important to remember that the responsibility doesn't lie with Facebook alone.
Part of the responsibility falls on us as users and digital consumers of information. We need to learn how to navigate the information we see and respond to it accordingly. Fortunately, with the development of fact-checking tools, this has never been easier.
So, the next time you come across some questionable content on Facebook, make sure to verify the information before you share it. With social media companies like Facebook and users taking steps to combat misinformation, we can avoid the pitfalls of inaccurate content and encourage more quality posts.