Google's SynthID text watermarking technology, designed to identify AI-generated text, is now open source as part of the Google Responsible Generative AI Toolkit, the company announced on X.
"Now, other [generative] AI developers will be able to use this technology to help them detect whether text outputs have come from their own [large language models], making it easier for more developers to build AI responsibly," Pushmeet Kohli, Google DeepMind's vice president of research, told MIT Technology Review.
Watermarks play a crucial role in combating political misinformation, nonconsensual sexual content, and other misuse of large language models. California is considering making AI watermarking mandatory, while China's government began requiring it last year. However, the tools are still a work in progress.
SynthID, which was unveiled in August, helps recognise AI-produced output by embedding an invisible watermark in images, audio, video, and text as they are generated. According to Google, the text version of SynthID works by making the text output slightly less probable in a way that software can detect but people cannot.
An LLM generates text one token at a time. These tokens can represent a single character, word or part of a phrase. To create a sequence of coherent text, the model predicts the next most likely token to generate. These predictions are based on the preceding words and the probability scores assigned to each potential token.
For example, given the prompt “My favorite tropical fruits are __,” the LLM might complete the sentence with tokens such as “mango,” “lychee,” “papaya,” or “durian,” each assigned a probability score. When there is a range of plausible tokens to choose from, SynthID can adjust the probability score of each predicted token in cases where doing so won’t compromise the quality, accuracy, or creativity of the output.
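The token-by-token process described above can be sketched in a few lines. The probability scores below are invented for illustration, not taken from any real model:

```python
import random

# Hypothetical probability scores for the next token after
# "My favorite tropical fruits are __." (illustrative values only).
next_token_probs = {
    "mango": 0.40,
    "lychee": 0.25,
    "papaya": 0.20,
    "durian": 0.15,
}

def sample_next_token(probs, rng=random):
    """Pick the next token in proportion to its probability score."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

token = sample_next_token(next_token_probs)
```

An LLM repeats this step for every token, each time recomputing the scores from all of the text generated so far.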
This process is repeated throughout the generated text, so a single sentence might contain ten or more adjusted probability scores, and a page could contain hundreds. The final pattern of the model’s word choices, combined with the adjusted probability scores, constitutes the watermark.
Google's Gemini chatbot already integrates the watermarking mechanism, which is designed to preserve text quality, accuracy, creativity, and generation speed. Google says detection works on passages as brief as three sentences and can survive some cropping and light paraphrasing. However, the technique remains unreliable on very short text, heavily rewritten or translated content, and responses to factual questions, where there is little flexibility in how an answer can be phrased.
"SynthID isn't a silver bullet for identifying AI-generated content," Google stated in a May blog post. "[But it] is an important building block for developing more reliable AI identification tools and can help millions of people make informed decisions about how they interact with AI-generated content."