Last month, Google, ChatGPT-maker OpenAI, Meta and other generative Artificial Intelligence (AI) companies pledged to introduce watermarking technology so that users can identify AI-generated images.
Lately, there have been thousands of cases of cybercriminals using fake photos and videos not just for character assassination of high-profile figures and political opponents, but also for the sextortion of unsuspecting people online.
Watermark tools can help people and authorised security agencies control such misinformation.
Now, Google's DeepMind division has developed a watermarking tool called SynthID. The company is currently beta-testing it with select Vertex AI customers who use Imagen, a text-to-image tool that creates photorealistic images.
SynthID embeds a watermark in AI-generated images, ensuring that anybody who comes across them can identify them as artificially generated.
"While generative AI can unlock huge creative potential, it also presents new risks, like enabling creators to spread false information — both intentionally or unintentionally. Being able to identify AI-generated content is critical to empowering people with knowledge of when they’re interacting with generated media, and for helping prevent the spread of misinformation. We’re committed to connecting people with high-quality information, and upholding trust between creators and users across society," said the Google DeepMind team.
The new SynthID uses several nuanced techniques to keep the watermark imperceptible, ensuring AI-generated images remain aesthetically pleasing and free of visible marks that would compromise their quality.
Also, the SynthID tool will ensure that even if an AI-generated image is edited or run through additional layers of filters in other applications, the digital watermark cannot be erased or hidden.
Get the latest news on new launches, gadget reviews, apps, cybersecurity, and more on personal technology only on DH Tech.