On January 7, 2025, Meta, the parent company of Facebook, announced the end of its third-party fact-checking program.
In its place, Meta is moving to a more community-driven fact-checking process. What does this ultimately mean for content professionals? Let’s take a closer look.
Streamline your editorial process with Originality.ai’s best-in-class suite of tools, including fact-checking, AI detection, and plagiarism checking.
Meta’s fact-checking program began in 2016 as an effort to curb misinformation and fake news on Facebook and Instagram.
As part of the process, Meta began working with independent third-party fact-checking organizations certified by the International Fact-Checking Network (IFCN).
That meant that potentially misleading content would be sent to third-party fact-checking organizations for review.
Meta partnered with a number of well-known third-party fact-checkers around the world, including Reuters and PolitiFact.
Fact-checkers would then rate the flagged content using a standardized labeling system.
If the content was rated as False, Partly False, Altered, or Missing Context, Meta’s algorithms would severely reduce its distribution across the network.
Users who tried to share the content would also see a notification warning them that it had been fact-checked and rated as false.
However, fact-checkers did not remove content; Meta only removed content that violated its Community Standards policies.
Pages and accounts that repeatedly posted false information could ultimately face harsher restrictions, such as reduced reach or the loss of their ability to monetize or advertise on the platform.
The fact-checks would also include a link to the source that debunked the shared information. Users who tried to share false content would be encouraged to read the fact-check details before doing so. Sometimes, Meta would add related articles below the flagged post for more information.
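To make that workflow concrete, here is a minimal Python sketch of how a rating-to-action mapping like the one described above could be modeled. The rating labels come from Meta’s published system; the function and field names are hypothetical illustrations, not Meta’s actual code.

```python
# Minimal sketch of a rating-based enforcement flow. The rating labels are
# Meta's published ones; everything else here is a hypothetical illustration.
from dataclasses import dataclass, field

DEMOTED_RATINGS = {"False", "Partly False", "Altered", "Missing Context"}

@dataclass
class ModerationResult:
    reduce_distribution: bool = False
    warn_on_share: bool = False
    debunk_links: list[str] = field(default_factory=list)

def apply_fact_check(rating: str, debunk_url: str | None = None) -> ModerationResult:
    """Map a fact-checker rating to the enforcement actions described above."""
    result = ModerationResult()
    if rating in DEMOTED_RATINGS:
        result.reduce_distribution = True   # demote the post in feeds
        result.warn_on_share = True         # notify users who try to share it
        if debunk_url:
            result.debunk_links.append(debunk_url)  # link to the fact-check
    return result

# Example: a post rated "Partly False" gets demoted and labeled with a link.
print(apply_fact_check("Partly False", "https://example.com/fact-check"))
```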
Rather than relying on independent third-party fact-checkers, Meta has turned to crowdsourced content moderation. With Community Notes, users weigh in on whether or not a post is misleading or lacks clarity, with the aim of greater transparency. The feature is similar to one already available on X (formerly Twitter).
Community Notes lets users add context to posts or photos, such as highlighting inaccuracies, offering clarifications, or filling in missing details.
In X’s version of the feature, users can then vote on whether the note is helpful.
For a note to become publicly visible, it has to reach consensus among users with differing viewpoints. This is designed to prevent any single group from steering the conversation and to open the floor to a variety of opinions and perspectives.
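To make the consensus requirement concrete, here is a simplified Python sketch in which a note only becomes visible once raters from at least two different viewpoint clusters independently find it helpful. X’s production system uses a more sophisticated bridging-based ranking algorithm; the clusters, threshold, and function below are hypothetical illustrations.

```python
# Simplified illustration of cross-perspective consensus for a community note.
# The viewpoint clusters and the helpfulness threshold are hypothetical.
from collections import defaultdict

def note_is_visible(ratings: list[tuple[str, bool]], min_helpful_share: float = 0.8) -> bool:
    """ratings: (viewpoint_cluster, found_helpful) pairs from individual raters.

    The note is shown only if at least two clusters have weighed in and every
    represented cluster rates it helpful at least `min_helpful_share` of the time.
    """
    by_cluster: dict[str, list[bool]] = defaultdict(list)
    for cluster, helpful in ratings:
        by_cluster[cluster].append(helpful)
    if len(by_cluster) < 2:  # require input from differing perspectives
        return False
    return all(
        sum(votes) / len(votes) >= min_helpful_share
        for votes in by_cluster.values()
    )

print(note_is_visible([("A", True), ("A", True), ("B", True), ("B", True)]))  # True
print(note_is_visible([("A", True), ("A", True)]))  # False: only one cluster rated it
```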
Meta’s aim is for this approach to give posts additional context so that users can better control how and where they get their information and make informed decisions.
Meta plans to roll out Community Notes gradually, before expanding.
No matter what you’re fact-checking, taking the time to follow best practices makes content more credible, transparent, and accurate. Here are some best practices:
First, check the credibility of the original source. Where did the information originally come from?
Make sure it’s a reputable, recognized source such as a college or university, a government institution, or an established organization (look for websites with domains ending in .edu, .gov, or .org).
Be wary of trusting any information that’s anonymous or comes from an “unverified source.”
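If you review sources programmatically, a quick first pass on the domain check above might look like the following Python sketch. A top-level domain alone never proves credibility, so treat this as one signal among the other checks in this section; the function name is our own.

```python
# First-pass heuristic for the domain check above: flag URLs whose hostname
# ends in .edu, .gov, or .org. A TLD alone never proves credibility; combine
# this with the other source checks described in this section.
from urllib.parse import urlparse

REPUTABLE_SUFFIXES = (".edu", ".gov", ".org")

def has_reputable_tld(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return host.endswith(REPUTABLE_SUFFIXES)

print(has_reputable_tld("https://www.census.gov/data"))  # True
print(has_reputable_tld("https://example.biz/article"))  # False
```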
Learn more about fact-checking websites.
What other information is out there to support or disprove the claim?
Find at least two independent sources that confirm the same information. Prioritize official reports, studies, research, or interviews over secondary sources like opinions or commentary.
If you’re looking into specific research, be sure to check the methodology, the sample size, and any limitations noted in the study.
Disinformation and misinformation often come with a number of red flags, so review content carefully for warning signs.
Look into the author of the content. Are they a recognized authority or thought leader in the space? Do they have any conflicts of interest or biases that could affect how they’re presenting the information?
For instance, authors following best practices for transparency disclose connections with companies or organizations they work with, to avoid misleading audiences.
Reviewing social media posts and other online content for authenticity is an important aspect of media literacy that can help you avoid misinformation.
Meta is now shifting its approach from independent third-party fact-checking to Community Notes as a way to review information posted on its social platforms.
If you are looking to further review content, Originality.ai offers best-in-class tools for the editorial process. Try the Originality.ai Fact-Checker to review text and cross-reference information with sources.
Learn more about fact-checking in our top guides:
Learn how Google is taking on misinformation with Google Fact Check, harnessing its influence to separate fact from fiction in the digital realm. Explore its role in countering disinformation campaigns for a more reliable online experience.