8 Times AI Hallucinations or Factual Errors Caused Serious Problems

Guarding Against Misinformation: The Impact of AI-Generated 'Facts.' Learn how to discern accurate information in the age of generative AI, and explore the potential pitfalls when AI delivers 'facts' that are more artificial than intelligent in your everyday work.

Every day you read claims and may need to cite information in your work. But how can you know that information is accurate? The rise of generative AI adds a new potential source of misinformation. What can happen when AI programs give you “facts” that are more artificial than intelligent?

In this article, we look at times when the use of generative AI (such as ChatGPT) led to serious problems for the user because the AI hallucinated or provided factually inaccurate information.

How Mistakes Are Made: Examples of Real-World Errors

In the past year, there have been high-profile examples of false AI-generated information causing embarrassment, possible injury, and, in one case, legal sanctions.

1. Microsoft Travel Article Lists a Food Bank as a Destination

Hallucination or Error

AI-generated writing was suspected when Microsoft Start’s travel pages published a guide to places to visit in Ottawa, the Canadian capital. While the article contained errors in details about several locations, most of the commentary focused on its inclusion of the Ottawa Food Bank as a “tourist hotspot,” encouraging readers to visit on “an empty stomach.”

Consequence 

Public embarrassment, weakened trust, and renewed attention to the roughly 50 reporters Microsoft News had recently laid off as it shifted toward generative AI for its articles.

Fact Checking Solution 

Originality.ai’s Fact Checker flagged the claim as inaccurate and inappropriate.

2. Teacher Falsely Accuses Entire Class of Using ChatGPT

Hallucination or Error 

A Texas A&M University-Commerce teacher gave his entire class a grade of "Incomplete" because, when he asked ChatGPT if the students' final essays were AI-generated, the tool told him they all were, even though detecting such text is outside ChatGPT's abilities or intended use.

Consequence 

Students protested that they were innocent, and the university investigated both the students and the teacher. The university has issued a number of policies in response.

Fact Checking Solution

The Fact Checker notes some of ChatGPT’s limitations, which suggest the teacher’s use of it was inappropriate.

3. Google Bard Makes Error on First Public Demo

Hallucination or Error

In February, Google found out how its own Bard generative AI could produce errors during the program’s first public demo, in which Bard stated that the James Webb Space Telescope “took the very first pictures of a planet outside of our own solar system.” In fact, the first such photo was taken 16 years before the JWST was launched.

Consequence 

Once the error became known, Google’s stock dropped as much as 7.7% in the next day of trading, erasing roughly $100 billion in market value.

4. Microsoft’s Bing Chat Misstates Financial Data

Hallucination or Error 

The day after Bard debuted, Microsoft’s Bing Chat AI gave a similar public demo, complete with factual errors: Bing Chat gave inaccurate figures for the Gap’s recent earnings report and for Lululemon’s financial data.

Consequence

Public embarrassment, weakened trust

5. Lawyer Uses ChatGPT to Cite Made-Up Legal Precedents

Hallucination or Error

ChatGPT invented a number of court cases that attorney Steven A. Schwartz cited as precedents in a brief he submitted. When the judge tried to locate the cited cases, he found they did not exist.

Consequence

Schwartz, another lawyer, and his law firm were fined $5,000 by the court. As his legal team noted, “Mr. Schwartz and the Firm have already become the poster children for the perils of dabbling with new technology; their lesson has been learned.”

6. Bard and Bing Chat Claim There Is a Ceasefire in the Israel-Hamas Conflict 

Hallucination or Error 

A Bloomberg reporter tested both Bard and Bing Chat on the conflict between Israel and Hamas, and both falsely claimed a ceasefire had been declared, likely based on news from May 2023. When the reporter asked a follow-up question, Bard did backtrack, saying, “No, I am not sure that is right. I apologize for my previous response,” but it also made up casualty numbers dated two days in the future.

Consequence 

Public embarrassment, weakened trust

7. Amazon Sells Mushroom Foraging Guides with Errors

Hallucination or Error 

Amazon’s Kindle Direct Publishing sold likely AI-written guides to foraging for edible mushrooms. One e-book encouraged gathering and eating species that are protected by law. Another had identification instructions at odds with accepted best practices for determining which mushrooms are safe to eat.

Consequence 

Public embarrassment, weakened trust, and possible injury to readers who follow unsafe foraging advice

8. Professor Uses ChatGPT to Generate Sources for Research

Hallucination or Error 

The Chronicle of Higher Education reported that a university librarian was asked to produce articles from a list of references a professor provided. When she concluded the articles did not exist, the professor revealed that ChatGPT had provided them. In academia, researchers are finding that generative AI knows what a good reference should look like, but that doesn’t mean the cited articles exist: ChatGPT can make up convincing references, attaching coherent titles to authors prominent in the field of interest. Studies by the National Institutes of Health have found that up to 47% of ChatGPT references are inaccurate.
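One practical safeguard: references that include a DOI can be checked programmatically. Below is a minimal Python sketch (not an Originality.ai feature; the `requests` library, contact address, and sample DOIs are illustrative assumptions) that asks the public Crossref API whether a DOI is registered. DOIs registered with other agencies, such as DataCite, won’t appear in Crossref, so a “not found” result should prompt a manual search rather than an automatic verdict.

```python
import requests  # third-party HTTP client, assumed installed (pip install requests)


def doi_exists(doi: str) -> bool:
    """Return True if the DOI is registered in the public Crossref database."""
    resp = requests.get(
        f"https://api.crossref.org/works/{doi}",
        # Crossref asks polite clients to identify themselves; address is a placeholder.
        headers={"User-Agent": "reference-checker/0.1 (mailto:you@example.com)"},
        timeout=10,
    )
    # Crossref returns 200 for registered DOIs and 404 for unknown ones.
    return resp.status_code == 200


if __name__ == "__main__":
    # Watson & Crick (1953) has a real, Crossref-registered DOI;
    # the second DOI is deliberately fabricated for this demo.
    for doi in ["10.1038/171737a0", "10.9999/made.up.2023"]:
        print(doi, "->", "found" if doi_exists(doi) else "NOT FOUND")
```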

Consequence 

Public embarrassment, loss of trust, loss of potential market.

Check Your Facts

The Fact Checker app offers a system to assess whether a claim is potentially false. Fact Checker highlights individual passages and then provides links to sources that support or counter the claims in each passage, telling you how likely a statement is to be true or false. With some diligence, future public embarrassments and legal troubles can be avoided. AI has the potential to aid many tasks, but users need to understand its limitations and potential pitfalls. Originality.ai has the tools to let you check documents for accuracy, plagiarism, and the likelihood of AI-generated text.
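To make that workflow concrete, here is a rough Python sketch of the general pattern such tools follow: split text into claim-sized passages, then retrieve candidate sources a reader can check each claim against. This is an illustrative toy that uses Wikipedia’s public search API, not Originality.ai’s actual implementation, and it deliberately omits the hard part of scoring how likely a claim is to be true.

```python
import re

import requests  # third-party HTTP client, assumed installed (pip install requests)


def candidate_sources(claim: str, limit: int = 3) -> list[str]:
    """Search Wikipedia's public API for pages a reader could check a claim against."""
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "list": "search",
            "srsearch": claim,
            "srlimit": limit,
            "format": "json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return [hit["title"] for hit in resp.json()["query"]["search"]]


text = (
    "The James Webb Space Telescope took the very first pictures "
    "of a planet outside of our own solar system."
)

# Naive sentence splitter; a production tool would use a proper NLP segmenter.
for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
    print("Claim:", sentence)
    for title in candidate_sources(sentence):
        print("  candidate source: https://en.wikipedia.org/wiki/" + title.replace(" ", "_"))
```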

Jonathan Gillham

Founder / CEO of Originality.ai. I have been involved in the SEO and content marketing world for over a decade. My career started with a portfolio of content sites; more recently, I sold two content marketing agencies, and I am the Co-Founder of MotionInvest.com, the leading place to buy and sell content websites. Through these experiences, I understand what web publishers need when it comes to verifying that content is original. I am not for or against AI content; I think it has a place in everyone's content strategy. However, I believe you, the publisher, should be the one deciding when to use AI content. Our originality-checking tool has been built with serious web publishers in mind!
