
Do AI Detectors Work? OpenAI Says No - Prove It for Charity

Do AI detectors work? OpenAI says “in short, no,” but the reality is that, although they are not perfect, they do “work.” If you think they don’t, we have a challenge for you that will benefit charity!


The usefulness of AI detectors depends on your use case. If 95%+ AI detection accuracy is enough, then they definitely “work.” However, if perfection is required, they are not the right tool for you. They will never be “perfect,” and for this reason we have always recommended against their use for academic discipline.

But we believe the societal costs of undetectable AI-generated text are too significant for over-simplified claims that AI detectors don’t work, such as OpenAI’s statement above, backed up with no data.

So…

We are issuing a “Do AI Detectors Work” Challenge for Charity, and we are specifically asking OpenAI to participate and stand behind its statement: “Do AI detectors work? In short, no.”


If you also think AI detection tools do not work then here is your chance to expose them and benefit a good cause!

Announcing The Do AI Detectors Work Challenge for Charity

The idea is that we will run a dataset, newly created for this test, of AI-generated text and human-written text through Originality.ai.

For each piece of writing Originality.ai identifies incorrectly, we will donate to charity; for each one Originality.ai predicts correctly, the challenger will donate to charity.

Interested? Contact Us and let’s set it up!

$10k Benefit for Charity:

  • Each Prediction Originality.ai Gets Right - You Donate $1 to Charity
  • Each Prediction Originality.ai Gets Wrong - We Donate $1 to Charity

What is a Correct or Incorrect Prediction:

When text is entered into Originality.ai, you receive a probability that the text is Original (human-created) and a probability that the text is AI (AI-written).

If the text was human-written and Originality.ai assigns a greater than 50% chance that it is Original, that is a correct prediction (and likewise for AI-written text and the AI score).

Since either human-created or AI-created text can be entered, and Originality.ai predicts whether the content is Original or AI-generated, there are four possible outcomes.

  • True Positive: When AI-written text is entered AND Originality.ai correctly identifies it as AI with an over 50% AI prediction.
  • False Negative: When AI-written text is entered BUT Originality.ai incorrectly identifies the text as Original with an over 50% Original prediction.
  • False Positive: When Human-Written text is entered BUT Originality.ai incorrectly identifies the text as AI-generated with an over 50% AI prediction. We understand when this happens it can be very painful!
  • True Negative: When Human-Written text is entered AND Originality.ai correctly identifies it as Original with an over 50% Original prediction.
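The four outcomes above follow mechanically from the ground-truth label and the 50% threshold. Here is a minimal sketch of that classification logic (the function name and example scores are our own, for illustration only):

```python
def classify_outcome(is_ai_written: bool, ai_probability: float) -> str:
    """Map a ground-truth label and a detector's AI probability (0.0-1.0)
    onto one of the four confusion-matrix outcomes, using the
    challenge's 50% threshold."""
    predicted_ai = ai_probability > 0.5
    if is_ai_written:
        return "true_positive" if predicted_ai else "false_negative"
    return "false_positive" if predicted_ai else "true_negative"

# AI text scored 93% AI -> true positive
print(classify_outcome(True, 0.93))   # true_positive
# Human text scored 40% AI -> true negative (still predicted Original)
print(classify_outcome(False, 0.40))  # true_negative
```

Under the challenge rules, the first two outcomes (true positive, true negative) would trigger a $1 donation from the challenger, and the other two a $1 donation from us.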

Do AI Detectors Work Challenge Rules…

If you want to run a modified version (smaller data set etc.) we are open to that, contact us and we would be happy to work with anyone to set up a fair test!

Here is our idea…

10,000 Text Record Dataset Tested

  • 10,000 text records - 100 - 2000 words long
  • 5000 Human-Written Content Pieces: Long Form Informational Content Written for the Majority of the Population. Example: Informational content meant to be published online (blog posts) or in a book for consumption by the average person. No unique text such as complex academic writing, legal documents etc. Published from ~2015 until 2019 (before GPT-2 launched).
  • 5000 AI-Written Content Pieces: AI pieces created by prompting ChatGPT to write an article with the same title as a piece from the human dataset… “Write an article (human dataset title n)”
  • Charity - Mutually agreed upon charity such as SickKids
  • $1 donated for each prediction
  • Who Runs/Oversees the Test? Anyone we mutually agree on such as a Journalist.

If You Think AI Detectors Are BS Now is Your Chance to Prove It…

If you think AI detectors do not work and want to take us up on this challenge, then please contact us.

Plus it is for a good cause!

This is an invitation for OpenAI and we would be happy to have others join in on the fun and benefit charity.

Our hope is to also provide some understanding of the effectiveness of AI detectors and their limitations!

Below we will provide some additional context on how AI content checkers work and their limitations based on our testing. 

How Do AI Detectors Work and Their Limitations

How do AI Detectors Work?

AI detectors may each use some combination of these detection models…

  1. “Bag of Words” Detectors - These attempt to identify common word or speech patterns in AI writing produced by popular language models, using signals such as perplexity or burstiness scores. This approach struggles with content generated by GPT-3 and newer large language models.
  2. Zero-Shot LLM Approach - These detectors use a large language model (LLM), similar to ChatGPT, and essentially measure how similar the text is to the text the model would have written itself.
  3. Fine-Tuned AI Model - These detectors train an AI model on both human-written and AI-generated content, ultimately letting it learn the differences between human and AI writing.
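To make the first approach concrete, here is a toy sketch of a “bag of words” scorer. The frequency tables, word choices, and function name are entirely hypothetical; real detectors use far richer statistical signals, but the core idea of comparing a text’s words against patterns learned from labeled AI and human examples is the same:

```python
from collections import Counter

def bag_of_words_score(text: str, ai_freq: Counter, human_freq: Counter) -> float:
    """Toy 'bag of words' scorer: weigh each word in the text by how
    often it appeared in (hypothetical) AI vs. human training data.
    Returns a value in [0, 1]; higher means more AI-like."""
    ai_hits = human_hits = 0
    for word in text.lower().split():
        ai_hits += ai_freq.get(word, 0)
        human_hits += human_freq.get(word, 0)
    total = ai_hits + human_hits
    return 0.5 if total == 0 else ai_hits / total

# Hypothetical frequency tables built from labeled example texts
ai_freq = Counter({"delve": 9, "furthermore": 7, "overall": 6})
human_freq = Counter({"honestly": 8, "yeah": 6, "overall": 3})

print(bag_of_words_score("furthermore we delve into the topic", ai_freq, human_freq))
```

A scorer this simple is exactly why the bag-of-words approach struggles with newer models: modern LLM output no longer leans on a small set of telltale words.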


How Accurate Are AI Detectors

We have an incredibly in-depth study that looks at our and other companies' AI detector accuracy. But in short, not all detectors are the same, and their usefulness depends on your use case and dataset.

The accuracy claims made by any detector that doesn’t provide data to support that claim should be taken with a grain of salt. 

No detector is perfect and all will have some % of false positives (a false positive is when the detector incorrectly thinks a human-written article was AI-generated) and false negatives (the detector incorrectly thinks an AI-written article was human). 

Here is the accuracy of Originality.ai on GPT-4 generated content (99%+ accurate) with a 1.5% false positive rate.

Accuracy of Originality.ai on GPT-4 generated content

Here are the results of testing several AI detectors on an adversarial dataset (dataset of AI-generated content meant to be undetectable) that we have open-sourced.

Testing several AI detectors on an adversarial dataset

Full results including the dataset used can be checked here:

https://originality.ai/blog/ai-content-detection-accuracy

Additionally at the link above you will find an AI detector efficacy analysis tool we open-sourced that will allow you to run your own dataset across multiple detectors to determine their effectiveness for your use case. This tool includes a full statistical analysis including a confusion matrix and reporting the F1, Precision, Recall (True Positive Rate), Specificity (True Negative Rate), False Positive Rate, and Accuracy. 
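The metrics listed above all derive from the four confusion-matrix counts. Here is a minimal sketch of how they are computed; the example numbers are purely illustrative and are not real test results:

```python
def detector_metrics(tp: int, fn: int, fp: int, tn: int) -> dict:
    """Standard classification metrics from a confusion matrix,
    treating 'AI-generated' as the positive class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)           # true positive rate
    return {
        "precision": precision,
        "recall": recall,
        "specificity": tn / (tn + fp),            # true negative rate
        "false_positive_rate": fp / (fp + tn),
        "accuracy": (tp + tn) / (tp + fn + fp + tn),
        "f1": 2 * precision * recall / (precision + recall),
    }

# Illustrative only: 5000 AI records (4950 caught) and
# 5000 human records (75 incorrectly flagged)
m = detector_metrics(tp=4950, fn=50, fp=75, tn=4925)
print(f"accuracy={m['accuracy']:.4f}, false positive rate={m['false_positive_rate']:.3f}")
```

Note that accuracy alone can hide a painful false positive rate, which is why the tool reports the full set of metrics.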

Are AI Content Detectors Reliable Enough for Use in Academia?

We do not like, and have always recommended against, the use of Originality.ai or any AI detector as the basis for academic-dishonesty accusations and disciplinary action.

A lot of the confusion related to AI detectors stems from people looking to use them the same way that plagiarism checkers are used. 

Plagiarism checkers are able to provide an enforceable level of proof that text was copied. AI detectors cannot do that.

For this reason, we are not supporters of academic discipline being issued based solely on the results of an AI detector. 

How Do You Bypass AI Content Detectors?

If it were possible to bypass a detector easily, would it still serve a purpose? We would agree it would be less useful. But the reality is that many of the bypassing methods that previously produced useful content are no longer effective. Writing tools that make AI-generated content undetectable now produce output quality that is simply not useful for most people, and if you then correct the grammar and odd word choices, the text typically becomes detectable again.

Here is a guide that looks at the different strategies people use to make undetectable AI content.

Quillbot and other paraphrasing tools used to be effective at bypassing Originality.ai and other AI content detection tools, but that is no longer an effective strategy against Originality.ai. See our results for Quillbot and Paraphrasing Detection.

How to Read the AI Detection Score? 

A score of 40% AI and 60% Original means the detector thinks there is a 60% chance the piece of content was written by a human and a 40% chance it was written by a machine. 

It does not mean that 40% of the article is AI-generated and 60% is human-written. Although it can be very frustrating to see a score of 40% AI on content you know you wrote entirely yourself, this is not a false positive, because the detector still predicts the content is more likely Original than AI.
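The reading above can be sketched in a few lines. The function name and output format are our own, for illustration of the document-level interpretation only:

```python
def interpret(ai_score: float) -> str:
    """Read a detection score as a document-level probability:
    an ai_score of 0.40 means 'probably human-written', NOT
    '40% of the sentences are AI-generated'."""
    verdict = "AI-generated" if ai_score > 0.5 else "Original (human)"
    return (f"{ai_score:.0%} chance AI / {1 - ai_score:.0%} chance human "
            f"-> predicted {verdict}")

# A 40% AI score on your own writing is not a false positive:
print(interpret(0.40))
```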

Do AI Detectors Work Summary:

To understand if an AI content detector will “work” for your use case you first need to understand what level of accuracy you require. 

If you require perfection then AI detectors will not work. 

If over 95% accuracy and under 5% false positives is acceptable for your use case, compared with the alternative of having no idea whether content was AI- or human-generated, then they will work for you.

To understand the exact efficacy of detectors for your use case you can use the open-sourced tool we provide and test your own dataset - https://originality.ai/blog/ai-content-detection-accuracy

If you think AI-generated text detectors do not work we offer you the chance to prove it and benefit charity! Please Contact Us!


Jonathan Gillham

Founder / CEO of Originality.ai. I have been involved in the SEO and content marketing world for over a decade. My career started with a portfolio of content sites; I recently sold two content marketing agencies, and I am the Co-Founder of MotionInvest.com, the leading place to buy and sell content websites. Through these experiences I understand what web publishers need when it comes to verifying content is original. I am not for or against AI content; I think it has a place in everyone's content strategy. However, I believe you as the publisher should be the one deciding when to use AI content. Our Originality checking tool has been built with serious web publishers in mind!
