AI Detector Efficacy Research Tool

What is this tool?

At Originality, our number-one priority is building AI content detection that is as powerful and effective as possible. The AI space continues to evolve at an incredible pace, and our team maintains a constant, thorough program of research into the state of AI content production and detection so that our tools keep operating at peak performance. We also believe it's essential that AI content detectors be transparent and accountable about their efficacy, so that users can verify for themselves which tool is best suited to their needs.

To that end, we would like to introduce Originality's new AI Detector Efficacy Research Tool. This tool adapts some of Originality's own research techniques for evaluating AI content detectors and packages them into a simple, accessible user interface that anyone can run on their own machine. Want to see whether a content detector's claims stack up against a particular dataset you'd like to test? This tool makes that easy and straightforward, providing you with helpful graphics to illustrate results, as well as detailed metrics tables to help you dive deeper.

Using the tool: step-by-step guide

Setup

Install Docker

To start using the tool, first install Docker. The tool runs inside a Docker container, which will manage much of the setup for you automatically.

Download or Clone the Repo

Next, clone or download the GitHub repo for the tool. To download it as a ZIP file, simply click the green Code button and select Download ZIP from the dropdown. You can extract the contents of the ZIP file into whichever directory you like, but for the rest of this walkthrough I'll assume it's extracted into the Documents folder.

Alternatively, you can clone the tool by opening the terminal, navigating to your desired directory, and executing the following command:

git clone https://github.com/OriginalityAI/Ai-detector-research-tool-UI

Launch the Docker Container

Now, open the terminal and navigate to the directory where you cloned or extracted the GitHub repo. If you used the ZIP option and extracted it to the Documents folder, you can do this by executing the following command:

cd ~/Documents/Ai-detector-research-tool-UI-main

From within this directory, execute the following commands:

docker-compose build 
docker-compose up

These commands will build and configure the Docker environment that manages the tool. Once this process is complete, the tool is ready to use: simply navigate to http://localhost:8080 in your browser.
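
If you'd like to verify from the terminal that the container is up before opening a browser, a quick request against the same address works (assuming the default port mapping above):

curl -I http://localhost:8080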

Running the tool

Upload a dataset

First, upload a dataset to use for your detector evaluations. Datasets must be CSV files with the three column headers input, dataset, and label:

  • input contains the text you would like the detector to evaluate.
  • label denotes whether the text was written by a human or by AI, and should have the value human-written or ai-written, respectively.
  • dataset provides a grouping for data generated by different AI models, e.g. GPT-3 vs. GPT-4.

A template for a correctly formatted CSV can be downloaded using the Download Template button. If you don't have a dataset of your own that you'd like to test, you can download our open source default dataset using the Download Default button.
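
For reference, a minimal dataset in this format might look like the following (the rows and group names below are invented purely for illustration):

input,dataset,label
"Renewable energy adoption has accelerated sharply over the past decade.",gpt-4,ai-written
"I walked down to the harbor this morning and watched the boats come in.",human,human-written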

Choose your detectors

Next, select which detectors you would like to evaluate. Click the name of a detector to include it in testing, and then enter the necessary authentication information for that detector's API. Every detector requires an API key, and some detectors may require additional authentication information, such as Copyleaks' Scan ID. Originality does not provide keys with this tool, so you will need to obtain your own from each service you would like to evaluate before proceeding.

Test your dataset (optional)

We recommend testing your dataset with the Test button beside the CSV upload before committing to a full evaluation. The Test button runs a trial evaluation against a small subset of your dataset, allowing you to confirm that your CSV is properly formatted, your keys are valid, and the detectors you are evaluating have functioning API endpoints. The test will return a folder containing your results. Be sure to check the output.csv file to confirm that all rows of the test dataset were submitted successfully; if any rows failed, error details will appear in the error_log column of that CSV.
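
If you'd rather check the output file programmatically than by eye, a short script along these lines does the job (a minimal sketch, assuming the results file is named output.csv and sits in your working directory):

import csv

# Print every row of the test output that recorded an error.
# Assumes an error_log column, as described above; row numbers are
# offset to account for the header line.
with open("output.csv", newline="", encoding="utf-8") as f:
    failures = [(i, row["error_log"])
                for i, row in enumerate(csv.DictReader(f), start=2)
                if row.get("error_log")]

if failures:
    for line_number, error in failures:
        print(f"Row {line_number} failed: {error}")
else:
    print("All rows submitted successfully.")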

Start your evaluation

Click Evaluate to submit your full dataset and begin your evaluation. For especially large datasets, this evaluation can take several minutes. Once the evaluation is complete, results for each detector will appear in the Results field below.

Check your results

Each detector's results include a confusion matrix and a score table reporting performance on key metrics. Additionally, by clicking the Download button, you can download a ZIP file containing the confusion matrix, the score table, and a CSV detailing the result for each data point. A ZIP file containing all detector results can be obtained using the Download All button in the top right of the Results field.

Interpreting Your Results

The tool returns a variety of metrics for each detector you test, each of which reports on a different aspect of that detector's performance, including:

- Sensitivity (True Positive Rate): The percentage of AI-written samples that the detector correctly identifies as AI-written.

- Specificity (True Negative Rate): The percentage of human-written samples that the detector correctly identifies as human-written.

- Accuracy: The percentage of the detector's predictions that are correct.

- F1: The harmonic mean of Sensitivity and Precision, often used as a single summary metric when ranking the performance of multiple detectors.
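
To make the relationships between these metrics concrete, here is how they could be computed from raw counts (the counts below are invented for illustration):

# Hypothetical results: 100 AI-written and 100 human-written samples.
tp, fn = 90, 10   # AI-written samples classified correctly / incorrectly
tn, fp = 85, 15   # human-written samples classified correctly / incorrectly

sensitivity = tp / (tp + fn)    # true positive rate
specificity = tn / (tn + fp)    # true negative rate
accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)      # how often an "AI" verdict is correct
f1 = 2 * precision * sensitivity / (precision + sensitivity)

print(f"Sensitivity: {sensitivity:.1%}  Specificity: {specificity:.1%}")
print(f"Accuracy: {accuracy:.1%}  F1: {f1:.3f}")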

If you'd like a detailed discussion of these metrics, what they mean, how they're calculated, and why we chose them, check out our blog post on AI detector evaluation. For a quick summary, though, we think the confusion matrix is an excellent representation of a model's performance.

Figure: GPT-4 detection accuracy for the Originality.ai model

A confusion matrix is a table that describes the performance of a detector on a particular set of text samples. The upper left cell of the table shows the true positive rate for that detector, meaning how often it was able to accurately identify AI-written text as AI-written. The lower right cell of the table shows the true negative rate of the model, meaning how often it was able to accurately identify human-written text as human-written. For these true rates, a higher percentage is better.

The lower left cell of the table shows the false positive rate, meaning how often that detector mistakenly identified human writing as written by AI. The upper right cell of the table shows the false negative rate, meaning how often that detector mistakenly identified AI writing as written by a human. For these false rates, a lower percentage is better.
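
Putting the pieces together, the cells of the confusion matrix are laid out like this:

                 Predicted AI           Predicted Human
Actual AI        True Positive (TP)     False Negative (FN)
Actual Human     False Positive (FP)    True Negative (TN)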

Keeping AI Accountable with Originality

We hope this tool makes evaluating AI detectors easier and more accessible to the public, which in turn helps keep generative AI content creation aligned with the interests and values of the companies and users integrating it into their products. The tool is free and completely open source, and we welcome any feedback on extensions or improvements. Consider giving it a try and seeing for yourself how the landscape of AI detectors measures up against the content that matters to you.
