AI Studies

Can Claude 3.5 Be Detected by AI-Content Detectors?

Anthropic launched its latest AI model, Claude 3.5 Sonnet. Review a brief study of Originality.ai’s accuracy in detecting Claude 3.5 Sonnet AI-generated content.

Anthropic has launched its latest AI model, Claude 3.5 Sonnet, the first release in the Claude 3.5 model series. Anthropic claims that its latest offering outperforms peer models such as OpenAI’s GPT-4o, Google’s Gemini 1.5 Pro, and Meta’s Llama-400b, as well as the company’s own earlier models, Claude 3 Haiku and Claude 3 Opus.

So, we put the Originality.ai AI detector to the test to determine its accuracy in detecting AI-generated text created by Claude 3.5 Sonnet. 

To establish the AI checker’s accuracy, this brief study generated 1,000 Claude 3.5 Sonnet text samples and then ran them through the Originality.ai AI detector.
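As an illustrative sketch of the scan step (not the exact pipeline used in this study), scoring one sample against the Originality.ai API might look like the following. The endpoint path, header name, and response fields here are assumptions modeled on typical REST patterns; check the current Originality.ai API documentation before relying on them.

```python
# Hypothetical sketch of scoring one text sample with the Originality.ai API.
# Endpoint path, header name, and response shape are assumptions -- verify
# against the current Originality.ai API documentation.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder

def scan_text(text: str) -> float:
    """Return the detector's AI-likelihood score for a text sample."""
    resp = requests.post(
        "https://api.originality.ai/api/v1/scan/ai",  # assumed endpoint
        headers={"X-OAI-API-KEY": API_KEY, "Content-Type": "application/json"},
        json={"content": text},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    return data["score"]["ai"]  # assumed field: probability the text is AI

# For a study like this one, a sample would count as a true positive when
# the AI score crosses a decision threshold (0.5 is assumed here).
```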

Is Claude 3.5 Sonnet AI Content Detectable?

Yes. Claude 3.5 Sonnet text is detectable with 98.5% accuracy by the Originality.ai Model 2.0 Standard and 99.3% accuracy by our Model 3.0 Turbo.


Dataset

To evaluate the detectability of Claude 3.5 Sonnet, we prepared a dataset of 1,000 Claude 3.5 Sonnet-generated text samples.

AI-Generated Text Data

For AI text generation, we used Claude 3.5 Sonnet with the three approaches below (a code sketch of the generation step follows the list):

  1. Rewrite prompts: We generated content by giving the model a customized prompt along with some reference articles (likely LLM-generated) to rewrite. (450 samples)

  2. Rewrite human-written text: For the second method, we asked Claude 3.5 Sonnet to rewrite human-written text in an attempt to bypass the AI detection tool. The human-written text was fetched from an open-source dataset. (325 samples)
    1. One-Class Learning for AI-Generated Essay Detection
      1. Paper: https://www.mdpi.com/2076-3417/13/13/7901
      2. Dataset: https://github.com/rcorizzo/one-class-essay-detection

  3. Write articles from scratch: Finally, for the third approach, we generated articles from scratch on a set of fiction and non-fiction topics such as history, medicine, mental health, content marketing, social media, literature, robots, and the future. (225 samples)
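To make the generation step concrete, here is a minimal sketch of producing one rewrite sample (the second approach) with the Anthropic Python SDK. The prompt wording is an illustrative assumption, not the exact prompt used in the study.

```python
# Minimal sketch: rewriting a human-written passage with Claude 3.5 Sonnet.
# The prompt text is illustrative; the study's actual prompts differ.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def rewrite_sample(human_text: str) -> str:
    """Ask Claude 3.5 Sonnet to rewrite a human-written passage."""
    message = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": f"Rewrite the following text in your own words:\n\n{human_text}",
        }],
    )
    return message.content[0].text
```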

Evaluation

To evaluate efficacy, we used our open-source AI Detection Efficacy tool.

Originality.ai offers two models for AI text detection: Model 3.0 Turbo and Model 2.0 Standard.

  • Model 3.0 Turbo: For when your risk tolerance for AI is zero. It is designed to identify any use of AI, even light AI editing.
  • Model 2.0 Standard: For when slight use of AI (e.g., AI editing) is acceptable.

The open-source testing tool returns a variety of metrics for each detector tested, each of which reports on a different aspect of that detector’s performance, including:

  • Sensitivity (True Positive Rate): The percentage of the time the detector correctly identifies AI text.
  • Specificity (True Negative Rate): The percentage of the time the detector correctly identifies human-written text.
  • Accuracy: The percentage of the detector’s predictions that were correct.
  • F1: The harmonic mean of Sensitivity (recall) and Precision, often used as an aggregate metric when ranking the performance of multiple detectors.

If you'd like a detailed discussion of these metrics, what they mean, how they're calculated, and why we chose them, check out our blog post on AI detector evaluation. For a succinct snapshot, the confusion matrix is an excellent representation of a model's performance.
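As a quick illustration of how these metrics fall out of a confusion matrix, the sketch below computes them from raw true/false positive and negative counts. The function and variable names are generic illustrations, not part of the efficacy tool's internals.

```python
def detector_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute standard detection metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                  # recall / true positive rate
    specificity = tn / (tn + fp)                  # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)    # fraction of correct predictions
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "accuracy": accuracy,
        "f1": f1,
    }
```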

Below is an evaluation of both models on the above dataset.

Confusion Matrix

Figure 1. Confusion Matrix on AI-only dataset with Model 2.0 Standard
Figure 2. Confusion Matrix on AI-only dataset with Model 3.0 Turbo

Evaluation Metrics

For this small test to reflect the Originality.ai detector’s ability to identify Claude 3.5 Sonnet content, we looked at the true positive rate: the percentage of the time the model correctly identified AI text as AI across the 1,000-sample Claude 3.5 Sonnet dataset.

Model 2.0 Standard:

  • Recall (True Positive Rate) = 98.5%

Model 3.0 Turbo:

  • Recall (True Positive Rate) = 99.3%
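Because the test set contains only AI-generated text, recall here is simply the fraction of the 1,000 samples flagged as AI. The counts below are implied by the reported rates (an inference for illustration, not separately published figures):

```python
# Recall = TP / (TP + FN). Counts are inferred from the reported rates
# on the 1,000-sample AI-only dataset, not separately published figures.
standard_recall = 985 / (985 + 15)   # Model 2.0 Standard -> 0.985
turbo_recall = 993 / (993 + 7)       # Model 3.0 Turbo    -> 0.993
print(f"{standard_recall:.1%}, {turbo_recall:.1%}")  # 98.5%, 99.3%
```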

Conclusion

Our study confirms that text generated by Claude 3.5 Sonnet is highly detectable with our AI detector. Model 2.0 Standard scored an impressive 98.5%, while Model 3.0 Turbo excelled at 99.3%. These results demonstrate the effectiveness of our AI detectors in identifying AI-generated content, ensuring reliable detection across various text generation approaches.

Jonathan Gillham

Founder / CEO of Originality.ai. I have been involved in the SEO and content marketing world for over a decade. My career started with a portfolio of content sites; recently, I sold two content marketing agencies, and I am the co-founder of MotionInvest.com, the leading place to buy and sell content websites. Through these experiences, I understand what web publishers need when it comes to verifying that content is original. I am not for or against AI content; I think it has a place in everyone’s content strategy. However, I believe you, as the publisher, should be the one deciding when to use AI content. Our originality checking tool has been built with serious web publishers in mind!
