
Are AI Checkers Biased Against Non-Native English Speakers? A Response to a Flawed Stanford Study

Flawed study claims AI checkers are biased? Originality.AI debunks claims and offers solutions.


Introduction


This article examines a flawed Stanford study that incorrectly concluded that AI checkers are biased against non-native English speakers.

AI content generation is, without a doubt, disrupting and revolutionizing entire industries online. Many experts, professors, and publishers rely on AI content checkers to distinguish human-written text from AI-written text, both online and in academia.

With the launch and ongoing development of groundbreaking tools like Originality.AI, numerous case studies will be conducted to evaluate the efficacy of these tools – as there should be.

In fact, we welcome any and all AI detection accuracy tests to further help educate users on the limitations of AI checkers and how to properly use them. However, the way in which these studies are conducted can sometimes be questionable.

To that end, we have built and open-sourced an AI detector efficacy research tool that lets anyone evaluate AI checkers' efficacy on their own dataset.

A study examining the possible biases of AI content detectors against non-native English speakers was published by Stanford scholars. 

After reviewing the results, we noticed a number of flaws that needed to be addressed - especially because this study was being referenced by multiple sources to falsely claim that AI checkers are biased.

We decided to conduct our own study with a larger and more thorough publicly available dataset. Below are our findings. 

Overview of the Stanford Study

In July 2023, a Stanford paper was released discussing the rapid growth of generative language models like ChatGPT and their potential risks, including the spread of fake content and cheating. This was in response to many educators being concerned about their ability to detect AI in students' work due to accuracy issues. 

In the study, the authors stated that AI checkers exhibit bias against non-native English speakers, misclassifying their writing as AI-generated. Their data showed that while detectors were able to accurately classify US student essays, they incorrectly labeled more than half of TOEFL essays as AI-generated, with an average false positive rate of 61.3%. 

This bias may lead to unintended consequences, like the marginalization of non-native speakers. Most AI content detectors rely on measures like text perplexity, which can be influenced by linguistic variability.
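To make the perplexity idea concrete, here is a minimal sketch that estimates a passage's perplexity with the open GPT-2 model from the Hugging Face transformers library. This is purely illustrative; it is not the model or method Originality.AI uses, and the example sentence is an arbitrary assumption.

```python
# Minimal perplexity sketch using GPT-2 (illustrative only; not Originality.AI's model).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Compute the model's average per-token cross-entropy loss on the text.
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(inputs.input_ids, labels=inputs.input_ids).loss
    # Perplexity is the exponential of that average loss.
    return torch.exp(loss).item()

# Plainer, more predictable phrasing tends to score lower perplexity, which is
# one reason detectors that lean only on perplexity can misread the simpler
# vocabulary common in non-native writing.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```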

To mitigate bias, the authors enhanced the vocabulary of non-native writing samples. They also discovered that AI checkers could be easily bypassed by manipulating prompts, which also raised questions about their effectiveness.

The implications of bias in AI checkers include potential discrimination on social media, limitations for non-native researchers, and false accusations in education. The study suggests caution in using these detectors in evaluative settings, comprehensive evaluation with diverse samples, and inclusive conversations to define acceptable use.

Overall, the Stanford study calls for developing more robust and fair AI checkers, emphasizing inclusivity and trust, and engaging all stakeholders in defining ethical AI usage in various contexts (Source). These are worthy ideals that Originality strives for with our content detection tool.

Major Flaws in the Research Conducted by the Stanford Study

The Stanford study prompted an overwhelming reaction from the education, writing, and AI communities - the takeaway being that AI content detectors were not accurate and therefore could not be trusted to detect AI content.

We reviewed the data ourselves and found multiple flaws with the study, which ultimately call its findings into question. Let's go over these one at a time.

Flaw 1: Sample Size and Source

The first problem in the paper concerns the small sample size: 91 TOEFL essays taken from a student forum. Such a small sample raises questions about the generalizability of the findings.

The range of non-native English writing may not be sufficiently reflected in TOEFL essays from a student forum. It's crucial to have a larger and more diverse dataset to draw more robust conclusions about the performance of AI checkers on non-native English writing.

Flaw 2: Comparison with 8th-Grade US Essays

The second flaw is the comparison against 8th-grade US essays. This introduces a significant confounding variable, as the age group and educational level of individuals completing TOEFL exams differ substantially from those in 8th grade. The writing styles, vocabulary, and linguistic complexity can vary widely between these two groups. 

This comparison might not accurately reflect the nuances of non-native English writing, as TOEFL essays are expected to adhere to a more advanced linguistic standard.

Flaw 3: Misclassification of GPT-4 Polished Articles

The third flaw involves the misclassification of "GPT-4 polished articles" as human-written content. This error casts doubt on the validity of the assessment procedure.

If the detectors cannot reliably distinguish GPT-generated content from human-authored content, the study's conclusions are called into question. Misclassified samples also undermine the validity of assertions about bias against non-native English speakers.

Flaw 4: Lack of Updates on Detector Performance

The fourth flaw is the absence of updated information on the current performance of detectors. Given the rapid progress in AI technology, particularly in language models, it is essential to report current results so that research findings remain relevant and accurate.

Without up-to-date data, the Stanford study cannot accurately depict the current limitations and capabilities of AI checkers in identifying AI-generated content. Below is an example of why this matters.

The Stanford case study was based on version 1.1 of Originality.AI. We have since made significant changes to our AI checker. See the version history and accuracy improvements here - https://originality.ai/blog/ai-content-detection-accuracy

We ran our tool against this exact dataset and feel the case study should be updated to reflect that the latest Originality.AI model (1.4) detected AI-written content at 100% across all AI datasets.

Had our latest model been used for the study, the charts would have looked MUCH different:

Originality.ai correctly classifies GPT-4 polished essays as 100% AI-generated
ChatGPT-3 generated college essay correctly classified as AI-generated by Originality.ai
ChatGPT-3.5 generated science abstract correctly classified as AI-generated by Originality.ai

The identified flaws highlight potential limitations in the methodology and execution of the study. While the paper claims bias against non-native English authors based on misclassification rates, these flaws suggest the need for a more rigorous and comprehensive analysis. AI checker bias is a serious issue that has to be carefully considered and researched, particularly in regard to bias against non-native English speakers.

Additional research is needed to address the points raised above in order to guarantee the validity and reliability of any conclusions and to successfully address this problem. Additionally, a more inclusive approach to dataset selection, accurate detector evaluation, and ongoing updates on performance can contribute to a more nuanced understanding of bias in AI checkers.

Originality.AI's Evaluation & Analysis

To address the flaws in the Stanford study, we conducted our own analysis, taking into consideration all of the points made above. Originality.AI's evaluation significantly outperforms the study conducted by the Stanford scholars in several key aspects, rendering the latter's findings invalid. Let's break down our approach and findings.

Extensive Dataset

Unlike the Stanford study, which used a small sample size of only 91 TOEFL essays from a student forum, Originality.AI utilized a much larger dataset of over 1,500 essay samples collected from Kaggle and other sources. This extensive dataset ensures a more comprehensive representation of non-native English writing, enhancing the reliability and generalizability of the findings. 

The table below shows the datasets used by Originality.AI with brief information about each.

Extensive Datasets Used By Originality.ai with brief information
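As a rough illustration of how such an evaluation set can be assembled, the sketch below combines human-written and AI-generated essay files into a single labeled table. The file names and the "essay_text" column are placeholders for illustration, not the actual datasets listed above.

```python
# Hypothetical sketch of assembling a labeled evaluation set from essay files.
# File and column names are placeholders, not the actual datasets used.
import pandas as pd

sources = [
    ("ielts_human_essays.csv", "human"),   # human-written essays
    ("gpt_generated_essays.csv", "ai"),    # AI-generated essays
]

frames = []
for path, label in sources:
    df = pd.read_csv(path)
    df["label"] = label                    # attach the ground-truth label
    frames.append(df[["essay_text", "label"]])

dataset = pd.concat(frames, ignore_index=True)
print(dataset["label"].value_counts())     # sanity-check the class balance
```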

Apples-to-Apples Comparison 

The Stanford study compared TOEFL essays against 8th-grade US essays, introducing a confounding variable due to differences in age, education level, and writing standards. In contrast, Originality.AI ensured a fair comparison by using similar IELTS essays. This approach allows for a more accurate assessment of detector performance across different writing styles and linguistic complexities.

Originality.AI incorporated an AI content scoring system to analyze the combined IELTS essays. Here's a breakdown of the steps involved:

Steps Involved in Originality.ai's Content Scoring System To Analyze the IELTS Essays

This process provided insights into the authenticity of the essays using AI content scoring and visualized the performance of the classification model as a confusion matrix.
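A simplified version of that scoring-and-evaluation loop might look like the sketch below. The `score_fn` callable is a hypothetical stand-in for a real detector call (for example, an API request), and the 0.5 decision threshold is an assumption; neither reflects Originality.AI's internal implementation.

```python
# Hypothetical sketch of scoring essays with a detector and summarizing the
# results as a confusion matrix. `score_fn` must return the probability that
# a text is AI-generated.
from sklearn.metrics import confusion_matrix

def evaluate(essays, true_labels, score_fn, threshold=0.5):
    # Threshold each AI-probability score into a predicted label.
    predicted = ["ai" if score_fn(text) >= threshold else "human" for text in essays]
    # Rows = true labels, columns = predicted labels.
    return confusion_matrix(true_labels, predicted, labels=["human", "ai"])

# Dummy usage with a stand-in scorer (always predicts "human"):
print(evaluate(["sample essay"], ["human"], score_fn=lambda text: 0.1))
```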

The Results

A confusion matrix summarizes how the detector's predictions line up against the true labels of the essays. Originality.AI's assessment provides numerical insight into how well its detector performs, and the detector demonstrates exceptional accuracy in discerning between AI-generated and human-written content.

Of the total essays analyzed, the Originality.AI detector accurately identified 1,526 as human-written and incorrectly labeled only 81 as AI-generated. This shows a True Negative Value of 94.96% and a False Positive Value of 5.04%.

Originality.ai confusion matrix analysis
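Those percentages follow directly from the two raw counts; a quick arithmetic check:

```python
# Quick check of the reported rates from the raw counts above.
true_negatives = 1526    # human-written essays correctly labeled as human
false_positives = 81     # human-written essays incorrectly labeled as AI
total = true_negatives + false_positives  # 1,607 human-written essays

print(round(100 * true_negatives / total, 2))   # 94.96 -> true negative rate
print(round(100 * false_positives / total, 2))  # 5.04  -> false positive rate
```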

This result of a 5.04% false positive rate is significantly lower than Stanford’s average false positive rate of 61.3%. From this we can determine that:

  1. The Stanford study was not performed accurately due to a flawed dataset and poorly executed approach
  2. AI checkers - specifically Originality.AI - do not exhibit bias against non-native English authors
  3. Originality.AI is significantly more accurate (94.96%) at identifying non-native English human-written content than originally stated by the Stanford study 

Overall, Originality.AI's approach addresses the flaws identified in the Stanford study, offering a more convincing and explanatory assessment of bias in AI checkers. 

By utilizing a larger dataset and addressing the other flaws of the Stanford study, Originality.AI provides more robust evidence and reliable results, effectively invalidating the findings of the Stanford scholars' article.

To read more about Originality.AI’s accuracy ratings, you can check out our in-depth Detection Score Accuracy article here.

Other Studies on AI Detector Bias

There have been additional studies exploring whether there is bias against non-native English speakers.

Specifically: https://www.sciencedirect.com/science/article/abs/pii/S0360131524000848?

Detecting ChatGPT-generated essays in a large-scale writing assessment: Is there a bias against non-native English speakers?

Their findings were similar to our own, showing no bias in AI detectors against non-native English speakers.

"Results showed that our carefully constructed detectors not only achieved near-perfect detection accuracy, but also showed no evidence of bias disadvantaging non-native English speakers."

Marketing Use Vs Academic Use 

It’s important to note the differences in false positives in terms of marketing versus academic use. We have repeatedly emphasized that Originality.AI is not for academic use. 

The data that our AI has been trained on is closely tied to online content written to rank in search engines - NOT academic papers. Our signup page specifically states that the tool is built for publishers, agencies, and writers, not for students. If we deploy an academic-focused solution, we will train our AI detection on more TOEFL essays to avoid this problem. But for now, our stance is that we are not for academic use and are built specifically for serious content marketers and SEOs.

Originality.ai Landing Page

So What About Non-Native English Speaker Texts Being Flagged as False Positives? 

As one of the most popular AI writing detection services, we are continually developing our services to have the lowest false positive detection rate of any AI content detector. With our most recent update, we now have this number at less than 2.5%. See the study and comparison with other tools along with our latest GPT-4-trained detection model by clicking here.

When it comes to non-native English speaker texts being erroneously flagged as false positives, we are looking closely into the correlations to find the underlying causes. One of the hypotheses we are investigating is what we call “cyborg writing.”

Cyborg writing happens when a writer leans on too many writing assistant tools (many of which are powered by AI). For example, if a writer uses autocorrect, relies extensively on a grammar tool, and runs their content through an outlining or content optimization tool, each of those tools leverages AI to some extent - and that layering could be the underlying reason for these false positives.

Even if the student or content creator writes the words themselves, passing the text through several layers of filtering and assistance can leave tell-tale AI “tracks” that systems trained to detect them (like Originality.AI) will pick up.

But is this more of an issue among non-native English speakers? Or is it simply a more nuanced question of “What level of computer-aided assistance is allowed before something is no longer a writer’s original work?” We believe there is no easy, one-size-fits-all answer, and that it will depend on the situation.

There Is No Perfect Solution (And There Never Will Be)

For this reason, even over the long term, AI checkers will never be able to provide a perfect track record with zero false positives. However, combining AI detection with the ability to visualize the content creation process is one of the main reasons why we built our free Google Chrome AI Detection Extension. It allows someone to see how a Google Doc was created and prove that the writer did indeed produce the content, rather than copying and pasting it from ChatGPT or another AI writing service.

With all of these points in mind, and considering that version 2.0 of Originality.AI accurately predicted AI-written (and human-written) content at nearly 100% across all datasets, we invite the authors of this paper to rerun their data on our updated version and see the results for themselves.

Jonathan Gillham

Founder / CEO of Originality.ai. I have been involved in the SEO and content marketing world for over a decade. My career started with a portfolio of content sites; more recently I sold two content marketing agencies, and I am the co-founder of MotionInvest.com, the leading place to buy and sell content websites. Through these experiences I understand what web publishers need when it comes to verifying that content is original. I am not for or against AI content - I think it has a place in everyone’s content strategy. However, I believe you, as the publisher, should be the one deciding when to use AI content. Our originality checking tool has been built with serious web publishers in mind!


