It has long been the consensus that Google actively checks for the use of AI in content generation.
After all, our study found a direct correlation between heavy reliance on AI content creation and significant drops in rankings and traffic (learn about Google’s AI penalties).
But now, by making its AI watermark open source, Google has essentially confirmed this theory and demonstrated that it does check for AI-generated content.
But what is Google’s SynthID AI watermark, and what does it mean for content creators and consumers?
Google DeepMind’s SynthID, a tool that identifies AI-generated content, has officially been released. As noted in the October 2024 announcement, it is accessible through Hugging Face.
The watermark itself is added directly into the text as it is generated by an AI model, making it possible to verify whether or not content is AI-generated.

However, the watermark is not a visible symbol. Instead, it is invisible: SynthID subtly adjusts the probability of the tokens (roughly, words) the model chooses as it writes, leaving a statistical pattern woven into the text itself.
Google’s team believes this will help them detect AI content more easily in the future, by comparing the expected probability scores for content that is:

- Watermarked
- Not watermarked
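To make that comparison concrete, here is a toy sketch in Python. SynthID’s actual scheme (tournament sampling, scored by a trained detector) is more sophisticated and is not shown here; this example instead uses a simplified “green-list” watermark of the kind studied in academic work, and every function name and constant in it is an illustrative assumption, not Google’s implementation:

```python
import hashlib
import math

# Hypothetical toy example: NOT SynthID's actual algorithm, but it
# illustrates the core detection idea of comparing observed token
# statistics against what unwatermarked text would produce.

GREEN_FRACTION = 0.5  # assumed fraction of the vocabulary favored at each step

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign a token to the 'green list' based on context."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def detection_z_score(tokens: list[str]) -> float:
    """How far the observed green-token count deviates from chance.

    Unwatermarked text should land near z = 0; watermarked text, whose
    generation was biased toward green tokens, scores well above it.
    """
    n = len(tokens) - 1
    greens = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / std

text = "the quick brown fox jumps over the lazy dog".split()
print(f"z-score: {detection_z_score(text):.2f}")
```

The key point is the comparison itself: ordinary text scores near zero, while text generated with a bias toward the “green” tokens stands out statistically, even though nothing visible changes on the page.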
If successful, this approach could go a long way in helping provide more online transparency and accountability, two concepts that are crucial as AI continues to advance.
However, the tool in its current form is not infallible. Google notes limitations: for instance, the watermark can be weakened when content is translated from one language to another, or when AI-generated content is “thoroughly rewritten.”
With all that in mind, let’s take a look at how this significant change will impact publishers and consumers, and their approach to generative AI content.
For publishers, it can mean several different things.
For publishers that produce high-quality, human-written content, this watermark can be seen as a positive step. It will help them continue to have an authentic and transparent relationship with their readers, producing content that users can see is not AI-generated.
Publishers can also use the Originality.ai AI content detector to check the writing their team is producing and review potential instances of AI content.
For those who rely heavily on passing off AI-generated content as their own, invisible watermarking will make that significantly more difficult.
Read our study to learn more about Google’s penalties for AI content that doesn’t comply with spam policies.
From the consumers' point of view, this approach could help provide some much-needed transparency online.
It is a first step towards a return to a more authentic internet experience, ensuring consumers can understand the sources behind the content they consume and helping prevent malicious misinformation from gaining traction.
At Originality.ai, we prioritize transparency through our editorial toolkit, which includes best-in-class AI detection and industry-leading plagiarism detection.
This announcement is another step towards establishing transparency, given the rise of AI content in Google search results.
We have yet to see the true impact that this watermark may have, but combining it with a robust, industry-leading AI detector continues to push the conversation in the right direction.
SynthID is a watermarking tool created by Google DeepMind. It embeds an invisible watermark in AI-generated text, which lets consumers and publishers see what is AI-generated and what isn’t.
Google has recently open-sourced SynthID, allowing any developers and businesses to access it via Hugging Face or Google’s Responsible GenAI Toolkit.
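For developers, that access looks something like the sketch below, based on the Hugging Face transformers integration (available from v4.46). The model checkpoint, key values, and prompt are illustrative assumptions; detecting the watermark later requires a detector configured with the same keys:

```python
# Sketch: generating watermarked text via the Hugging Face
# `transformers` SynthID integration (v4.46+).
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

# Illustrative checkpoint choice; any compatible causal LM works.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it")

# The watermark is keyed: the same `keys` are needed later for detection.
# These key values are placeholders, not real secrets.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],
    ngram_len=5,  # context depth used when nudging token probabilities
)

inputs = tokenizer("Write a short note about watermarking.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    watermarking_config=watermarking_config,
    do_sample=True,  # sampling is required for the watermark to be applied
    max_new_tokens=100,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```

Because the watermark lives in the token choices rather than in any visible marker, the generated text reads normally; only a detector holding the matching keys can recover the signal.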
As reported by MIT Technology Review, adding the SynthID watermark does not impact the quality, accuracy, or speed of text generation. The study in question drew on feedback from 20 million chatbot responses (both watermarked and unwatermarked).