Unlock the full potential of your content creation process with our guide on interpreting Originality.AI’s AI vs Human score. Whether you’re a writer, editor, marketing agency, or website publisher, this article will help you understand and align on what fits for your content marketing strategy.
I will try to answer all of these questions as best I can below…
We are working to set the modern, trusted standard for determining the originality of a piece of content. We want to be your go-to 3rd party helping ensure writers, editors, marketing agencies, and web publishers all can trust in how content has been created. Was it copied, was it spun by a paraphrasing tool, did it get created by an AI tool like ChatGPT, or did a human write it? We are here to help!
Content created by AI has a place both now and into the future, but I believe most of us do not want to pay the same rate for content generated in five minutes by AI as for a thought leadership piece created by a true expert.
We want to ensure the content creation process is transparent and fair for everyone involved in the process.
Checking for plagiarism is widely understood; it has been done online for decades. The process is clear-cut: if a section of text is copied from somewhere else, it is plagiarized. Checking and interpreting the results of AI detection, however, is a much more nuanced activity. The results are not as “binary” as plagiarism detection, meaning there is more room for interpretation.
This document is meant to help everyone use and understand the AI detection scores correctly.
Across an exhaustive test of 10,000 pieces of GPT-3-generated content along with a human-generated control group, Originality.AI was over 94% accurate. For ChatGPT and GPT-3.5, you can see the results across 20 generated articles:
https://originality.ai/can-gpt-3-5-chatgpt-be-detected/
The tool is 94%+ accurate with a few false positives and false negatives, which is excellent. However, it is not at an “enforceable” level, since some articles will be incorrectly identified.
So, given that it is very accurate but not perfect, how should we interpret the results?
It is important to remember two things at this stage…
So, given these two points, the best way to review the results is to look at a series of results from a single source (a writer or agency).
Here is how to think about the scores from a group of writers.
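As a rough sketch of reviewing a group of writers, the idea is to aggregate each writer's per-article scores rather than react to any one result. The writer names and score values below are hypothetical, purely for illustration:

```python
# Hypothetical per-article "Original" scores (0-100) for each writer;
# names and numbers are illustrative only.
scores_by_writer = {
    "writer_a": [96, 91, 88, 94, 99],   # consistently high: likely human
    "writer_b": [12, 8, 95, 10, 6],     # consistently low: worth a conversation
}

def summarize(scores):
    """Return the average score and the share of articles scoring below 50."""
    avg = sum(scores) / len(scores)
    low_share = sum(1 for s in scores if s < 50) / len(scores)
    return avg, low_share

for writer, scores in scores_by_writer.items():
    avg, low_share = summarize(scores)
    print(f"{writer}: avg={avg:.0f}, low-score share={low_share:.0%}")
```

A single low score in an otherwise strong sample is likely noise; a consistently low pattern is the signal worth acting on.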
It is important to remember that, as smart as the development team behind Originality.AI’s AI detection tool is, we are not Google.
I built Originality because I wanted publishers to be able to make intelligent risk-based decisions when it comes to publishing potentially AI-generated content.
So, based on your risk tolerance, your writing team, and the website the content is being published on, your decision on the allowable threshold of AI will differ.
Here are some recommended thresholds to try and help get Writers, Marketing Agencies, and Publishers all on the same page.
We are seeing companies choose between 4 AI strategies and here would be the recommended Human vs AI scores to aim for across a sampling of content…
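The threshold decision can be sketched as a simple lookup. The strategy names and numbers below are hypothetical placeholders; the actual recommended Human vs AI scores are the ones given for each strategy above, not these:

```python
# Hypothetical minimum "Original" scores per publishing strategy;
# these values are illustrative, not Originality.AI's recommendations.
THRESHOLDS = {
    "no_ai_allowed": 90,
    "ai_assisted_ok": 60,
    "ai_with_heavy_editing": 40,
    "ai_welcome": 0,
}

def passes(original_score, strategy):
    """True if an article's Original score meets the chosen strategy's bar."""
    return original_score >= THRESHOLDS[strategy]
```

For example, under a strict no-AI policy an article scoring 75 would be sent back for review, while the same article would pass under an AI-assisted policy.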
This is a good point for a reminder that we are not Google. Our AI detection likely works in a more robust but less efficient way than anything Google could roll out at scale. Our team’s belief is that if our tool is unable to tell whether content is AI-generated, then Google, with its potentially smarter (but by necessity more efficient) approach, would likely get a similar result.
Our AI takes a more holistic and far more intensive approach to analyzing an article to determine whether it was AI-generated than anything else we have seen.
Here is an in-depth article looking at how our tool works. It is our own AI, trained on an incredible amount of GPT-3 and GPT-3.5 content, and is able to accurately identify patterns in AI content across an entire article – https://originality.ai/how-does-ai-content-detection-work/
Our approach is a far more intensive/accurate way to look at an article than the free AI detection tools that rely on either…
We appreciate the valid question of “what part is AI?” This will be answered with a highlighting solution that identifies blocks of text based on how likely each block was to have been generated by AI. This will be live shortly.
I understand the desire to get a yes-or-no answer to the question of whether or not a piece of content was created by AI, but like all things in SEO, the answer is unfortunately nuanced.
The alternative to ignoring the presence of AI is inviting its use on your site. Depending on your editing process, that is not necessarily a bad decision.