
Google’s AI detection tool is now available for anyone to try

Google announced via a post on X (formerly Twitter) on Wednesday that SynthID is now available to anybody who wants to try it. The authentication system for AI-generated content embeds imperceptible watermarks into generated images, video, and text, enabling users to verify whether a piece of content was made by humans or machines.

“We’re open-sourcing our SynthID Text watermarking tool,” the company wrote. “Available freely to developers and businesses, it will help them identify their AI-generated content.”


SynthID debuted in 2023 as a means to watermark AI-generated images, audio, and video. It was initially integrated into Imagen, and the company subsequently announced its incorporation into the Gemini chatbot this past May at I/O 2024.

The system works by embedding imperceptible watermarks in tokens during text generation. Tokens are the foundational chunks of data (a single character, word, or fragment of a phrase) that a generative AI uses to understand a prompt and predict the next word in its reply. According to a DeepMind blog post from May, SynthID does this by “introducing additional information in the token distribution at the point of generation by modulating the likelihood of tokens being generated.”
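To make that concrete, here is a heavily simplified sketch in Python of the general technique the quote describes. It is not SynthID's actual algorithm (DeepMind's published approach uses a more elaborate sampling scheme), and the favored_tokens helper, the key, and the bias value are invented for illustration: a keyed pseudorandom function of the recent context nudges the probabilities of some tokens upward before the next token is sampled.

```python
import hashlib
import numpy as np

def favored_tokens(context_ids, vocab_size, key, fraction=0.5):
    """Derive a keyed, pseudorandom subset of the vocabulary from the recent context."""
    seed_material = f"{key}:{context_ids[-4:]}".encode()
    seed = int.from_bytes(hashlib.sha256(seed_material).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    # Mark roughly `fraction` of token IDs as "favored" for this generation step.
    return rng.random(vocab_size) < fraction

def watermarked_sample(logits, context_ids, key, bias=1.5):
    """Sample the next token after nudging favored tokens' logits upward."""
    favored = favored_tokens(context_ids, len(logits), key)
    adjusted = logits + bias * favored          # modulate the token distribution
    probs = np.exp(adjusted - adjusted.max())
    probs /= probs.sum()
    return int(np.random.default_rng().choice(len(probs), p=probs))
```

Because the nudges are small and keyed, the output still reads naturally, but over many tokens they leave a statistical fingerprint that only someone holding the key can check for.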

By comparing the model’s word choices and their “adjusted probability scores” against the patterns expected of watermarked and unwatermarked text, SynthID can determine whether a passage was written by an AI.
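Continuing the same toy example (and reusing the hypothetical favored_tokens helper from the sketch above), detection amounts to a statistical test: count how often the observed tokens fall in their step's keyed "favored" set and compare that rate with the roughly 50% expected from unwatermarked text. Again, this only illustrates the idea and is not SynthID's actual detector.

```python
def watermark_score(token_ids, vocab_size, key):
    """Fraction of tokens falling in their step's favored set, minus the ~0.5 baseline."""
    hits = 0
    for i in range(1, len(token_ids)):
        favored = favored_tokens(token_ids[:i], vocab_size, key)
        hits += bool(favored[token_ids[i]])
    rate = hits / max(1, len(token_ids) - 1)
    # Unwatermarked text should score near 0; watermarked text noticeably above it.
    return rate - 0.5
```

In practice a detector would turn that score into a statistical confidence value and threshold it, and longer passages give the test more evidence, which is consistent with the article's note that SynthID struggles with short text.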

“Here’s how SynthID watermarks AI-generated content across modalities,” Google DeepMind wrote in a post on X on October 23, 2024.

This process does not impact the response’s accuracy, quality, or speed, according to a study published in Nature on Wednesday, nor can it be easily bypassed. Unlike standard metadata, which can be easily stripped and erased, SynthID’s watermark reportedly remains even if the content has been cropped, edited, or otherwise modified.

“Achieving reliable and imperceptible watermarking of AI-generated text is fundamentally challenging, especially in scenarios where [large language model] outputs are near deterministic, such as factual questions or code generation tasks,” Soheil Feizi, an associate professor at the University of Maryland, told MIT Technology Review, noting that its open-source nature “allows the community to test these detectors and evaluate their robustness in different settings, helping to better understand the limitations of these techniques.”

The system is not foolproof, however. While it is resistant to tampering, SynthID’s watermarks can be removed if the text is run through a language translation app or is heavily rewritten. It is also less effective on short passages and on replies to factual questions, where there is little room to vary word choice. For example, there’s only one right answer to the prompt “What is the capital of France?” and both humans and AI will tell you that it’s Paris.

If you’d like to try SynthID yourself, it can be downloaded from Hugging Face as part of Google’s updated Responsible GenAI Toolkit.
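For developers, one likely starting point is the Hugging Face Transformers integration announced alongside the toolkit. The snippet below is a sketch based on that integration; the model ID and watermarking keys are placeholders, and the class and argument names (SynthIDTextWatermarkingConfig, watermarking_config) should be verified against the toolkit's current documentation.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, SynthIDTextWatermarkingConfig

MODEL_ID = "google/gemma-2-2b-it"  # placeholder; any supported causal LM

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# The watermarking keys are secret integers you choose and keep for later detection.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],
    ngram_len=5,
)

inputs = tokenizer(["Write a short note about watermarking."], return_tensors="pt")
outputs = model.generate(
    **inputs,
    watermarking_config=watermarking_config,
    do_sample=True,
    max_new_tokens=100,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```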

Andrew Tarantola
Andrew Tarantola is a journalist with more than a decade reporting on emerging technologies ranging from robotics and machine…