Meta released Llama 3.1, including its flagship 405B model, which achieves state-of-the-art performance across key benchmarks and is the first open-source frontier model, marking a major milestone in open-source AI development.
It’s the first time an open-source AI model has matched or outperformed top closed AI models like OpenAI’s GPT-4o. By offering a private, customizable alternative to closed AI systems, Meta is enabling anyone to create their own tailored AI. With this shift, understanding the accuracy of AI detectors is more important than ever.
This brief study examines 1,000 Llama 3.1-generated text samples to determine whether the Originality.ai AI Detector can detect Llama 3.1 content.
Try the Originality.ai AI Detector. Then, learn about AI content detection accuracy and Originality’s exceptional performance in a meta-analysis of third-party studies.
Note: Standard is now retired. Get the latest details about model updates in our guide on which AI detector model is best for you!
To evaluate the detectability of Llama 3.1, we prepared a dataset of 1,000 Llama 3.1-generated text samples.
For AI text generation, we used Llama 3.1 with three approaches:
To evaluate detection efficacy, we used the Open Source AI Detection Efficacy tool that we released:
Originality.ai has three AI text detection models: 3.0.0 Turbo, 2.0.1 Standard, and 1.0.0 Lite.
For additional information on each of these models, check out our AI detector and read our AI detection accuracy guide.
The open-source testing tool returns a variety of metrics for each detector you test, each of which reports on a different aspect of that detector’s performance, including:
For a detailed discussion of these metrics, what they mean, how they're calculated, and why we chose them, check out our blog post on AI detector evaluation. For a succinct snapshot, the confusion matrix is an excellent representation of a model's performance.
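As a quick illustration of how a confusion matrix maps to the metrics above, here is a minimal sketch of deriving the True Positive Rate, False Positive Rate, and accuracy from the four confusion-matrix counts. The function name and the example counts are hypothetical placeholders for illustration, not numbers from this study.

```python
def detector_metrics(tp: int, fn: int, fp: int, tn: int) -> dict:
    """Compute standard classification metrics from confusion-matrix counts.

    tp: AI text correctly flagged as AI    fn: AI text missed (labeled human)
    fp: human text wrongly flagged as AI   tn: human text correctly passed
    """
    total = tp + fn + fp + tn
    return {
        "true_positive_rate": tp / (tp + fn),    # sensitivity / recall
        "false_positive_rate": fp / (fp + tn),
        "accuracy": (tp + tn) / total,
    }

# Illustrative counts only: 991 of 1,000 AI samples caught,
# 12 of 1,000 human samples falsely flagged.
metrics = detector_metrics(tp=991, fn=9, fp=12, tn=988)
print(metrics["true_positive_rate"])   # 0.991
```

Note that a dataset made up entirely of AI-generated text, like the one in this study, only exercises the true-positive side of the matrix, which is why the results below are reported as a True Positive Rate.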
Below is an evaluation of all these models on the above dataset.
For this smaller test of the Originality.ai AI detector's ability to detect Llama 3.1 content, we reviewed the True Positive Rate: the percentage of the 1,000 Llama 3.1 samples that each model correctly identified as AI-generated.
1.0.0 Lite:
2.0.1 Standard:
3.0.0 Turbo:
Overall, Originality.ai continues to demonstrate an outstanding capability to identify AI-generated content, including the latest releases of AI models such as OpenAI’s GPT-4o, Claude 3.5, Gemini 1.5 Pro, and GPT-4o-mini.
Each of Originality.ai’s AI detection models detected Llama 3.1 with exceptional accuracy: 3.0.0 Turbo at 99.6%, 1.0.0 Lite at 99.1%, and 2.0.1 Standard at 98.8%.
We believe that it is crucial for AI content detectors' reported accuracy to be open, transparent, and accountable. Everyone seeking AI detection services deserves to know which detector is the most accurate for their specific use case.