AI-generated content is everywhere, thanks to the growing sophistication of generative models and large language models (LLMs). Many online platforms now claim to generate entire articles ten times faster than a human writer.
As the race to scale content production heats up, so too does the ability of search engines to detect and filter AI-generated content out of search results.
Google’s helpful content update, released in August 2022, is a direct effort to combat the rise of AI-generated text, which is often optimized to attract traffic through the strategic use of keywords but usually lacks research and depth. Google now penalizes this type of content in favor of material that actually provides value to readers.
So, as a website owner or manager, you need tools in your arsenal that can confidently tell you whether your content is original, people-first, and free of AI-generated text before you publish it.
Enter ContentAtScale (now known as BrandWell). This platform not only lets you scale your content production but also scans for AI-generated content. Today we’ll narrow our focus to the platform’s performance at AI content detection. We will run some real-life test cases and compare ContentAtScale with other platforms.
ContentAtScale has quite a few features that make it a powerful AI detection tool. Here are its top selling points:
It’s time to put ContentAtScale to the test. To make our test as fair as possible, we used seven text samples, all fully generated by Jasper.ai, a popular AI writing tool. We scanned each sample first with ContentAtScale and then with Originality.ai.
ContentAtScale and Originality.ai present results in a similar way. Both offer a probability score that tells you the likelihood that AI-generated text is present in your sample.
An AI content detection score of 100% indicates that your text is definitely AI-generated. Similarly, a score of 80% means that the probability of your text being AI-generated is 80%. We’ve compiled the test results below in a convenient table.
Since we already know Jasper.ai generated our content, the optimal result for every sample is 100%. Looking at the scores, you’ll see that Originality.ai detected the AI text with high accuracy in five out of seven samples. ContentAtScale’s AI detection tool, on the other hand, detected the presence of AI content in only three out of seven samples.
To make the comparison easier, let’s look at the average score across all samples. Originality.ai’s average detection score is 79.14%, compared to ContentAtScale’s average of 46%. ContentAtScale’s average suggests that our samples are more likely human-generated than AI-generated, which, of course, is incorrect.
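These averages are simply the arithmetic mean of the seven per-sample detection scores. Here is a minimal sketch of that calculation; the per-sample values below are hypothetical placeholders (the real ones are in the results table), chosen only so their means reproduce the reported averages:

```python
# Hypothetical per-sample AI-detection scores (%); see the results
# table for the actual values. Chosen so the means match the article.
originality = [100, 100, 99, 98, 95, 2, 60]
contentatscale = [80, 60, 40, 20, 12, 100, 10]

def mean_score(scores):
    """Average detection score, rounded to two decimal places."""
    return round(sum(scores) / len(scores), 2)

print(mean_score(originality))     # 79.14
print(mean_score(contentatscale))  # 46.0
```

Since every sample is known to be AI-generated, the gap between each average and the ideal 100% is a rough measure of how much AI text each tool missed.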
The only exception in our results is sample #6. Originality.ai strongly indicated that this sample was human-generated, giving it a score of 2%, while ContentAtScale was 100% certain it was AI-generated.
Now that we’ve compared the accuracy of ContentAtScale versus Originality.ai, let’s review how ContentAtScale measures up overall.
Other tools like Originality.ai, however, offer a URL scanning feature that lets you scan multiple URLs and domains simultaneously.
A few advantages make ContentAtScale a competent platform for AI detection. The top benefit is that you can use it for free, and the platform lets you scan up to 25,000 characters at a time. This is convenient for small-scale users, including editors and educators.
The speed and simplicity of the detection process are also praiseworthy: you don’t have to wait for the result to come up.
ContentAtScale is a convenient tool for users who want to quickly determine whether content is AI-generated. Unfortunately, its detection performance is unreliable: it failed to detect AI text in most of our samples.
The tool may be a good starting point for some users. However, if you are looking for a more reliable and consistent AI detection tool, Originality.ai is worth considering.