As AI tech gets smarter, it’s getting harder to spot the difference between content made by a human and content dreamed up by an algorithm. Google, which is pushing the AI envelope itself, is aware of this and wants to help.
The move gives the entire AI industry an easy, seemingly robust way to silently mark content as artificially generated, which could help catch deepfakes and other harmful AI content before they spread in the wild.
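To make the “silent mark” concrete, here is a toy sketch of the statistical watermark family this belongs to: a secret key pseudorandomly places candidate next tokens on a “green list,” and generation quietly up-weights them. This is the green-list idea from the research literature (Kirchenbauer et al.), not SynthID’s actual algorithm, and every name below (KEY, sample_next) is illustrative.

```python
import hashlib
import random

KEY = "demo-secret-key"  # illustrative; a real deployer keeps this private

def is_green(prev_token: str, token: str, key: str = KEY) -> bool:
    """Seeded by the secret key and the preceding token, pseudorandomly
    assign roughly half of all candidate next tokens to a 'green list'."""
    digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def sample_next(prev_token: str, candidates: dict[str, float], bias: float = 2.0) -> str:
    """Sample the next token from the model's probabilities, quietly
    up-weighting green candidates by a small factor."""
    weights = [
        prob * (bias if is_green(prev_token, tok) else 1.0)
        for tok, prob in candidates.items()
    ]
    return random.choices(list(candidates), weights=weights, k=1)[0]

# Any single draw looks ordinary; only someone holding KEY can later
# measure the excess of green tokens across a whole passage.
print(sample_next("the", {"cat": 0.4, "dog": 0.35, "fox": 0.25}))
```

The bias is small enough that each individual word choice looks normal; the signature only emerges in aggregate, which is what makes the mark invisible to readers.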
To make AI safer for all, Google wants to examine your security posture. Enter the SAIF Risk Assessment, a new tool built on SAIF, Google’s Secure AI Framework.
Britain’s competition watchdog is opening a formal investigation into Google’s partnership with artificial intelligence startup Anthropic.
Google's new "Customize" feature in NotebookLM Audio Overviews lets you create realistic podcast-style audio from any content, focused on the angle you choose. Here's how to do it.
DeepMind's creative lead Lorrain uses AI to enhance media, working on projects with Marvel and Netflix and teaching AI filmmaking at Columbia University.
Google’s SynthID Text watermarking technology, a tool the company created to make AI-generated text easier to identify, is now available as open source through the Google Responsible Generative AI Toolkit, the company announced on X.
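For developers, the open-sourced pieces plug into Hugging Face Transformers’ generation API. A minimal sketch, assuming a recent Transformers release (v4.46+, which exports SynthIDTextWatermarkingConfig) and access to a Gemma checkpoint; the model ID and key values below are placeholders to swap for your own:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, SynthIDTextWatermarkingConfig

model_id = "google/gemma-2-2b-it"  # placeholder; use whatever causal LM you serve
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The keys are arbitrary integers the deployer keeps secret; ngram_len
# sets how much preceding context seeds the watermark function.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],
    ngram_len=5,
)

inputs = tokenizer("Write a short note about tide pools.", return_tensors="pt")
out = model.generate(
    **inputs,
    watermarking_config=watermarking_config,  # watermark applied during sampling
    do_sample=True,
    max_new_tokens=64,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Note that the watermark is applied during sampling, hence do_sample=True in the call above.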
Google’s partnership with AI firm Anthropic is at risk of being derailed in the UK after the competition watchdog called for further investigation into the pact’s potential impact on competition.
Gradient Partner Eylul Kayin, Cascade AI CTO Pulak Goyal, and Cascade AI CEO Ana-Maria Constantin. (Cascade Photo) Trying to find answers to …
Developers can now use Google’s SynthID Text technology to watermark the output of their own AI models and later determine whether a given piece of text came from them.
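Detection is the mirror image, and it only works for whoever holds the key, which is why this matters specifically for developers’ own models. In the same illustrative green-list terms as the toy sketch earlier (again, not the real SynthID interface: the open-sourced toolkit at github.com/google-deepmind/synthid-text instead ships a Bayesian detector you train against your own keys), a detector simply tests whether green tokens are overrepresented:

```python
import hashlib
import math

KEY = "demo-secret-key"  # must match the key used at generation time

def is_green(prev_token: str, token: str, key: str = KEY) -> bool:
    """Same seeded green-list helper as in the generation sketch."""
    digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(text: str, key: str = KEY) -> float:
    """z-score of the green-token excess: unwatermarked text hovers near
    zero, while text generated under `key` drifts strongly positive."""
    tokens = text.split()
    n = len(tokens) - 1
    hits = sum(is_green(p, t, key) for p, t in zip(tokens, tokens[1:]))
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)

# Flag text as "ours" only above a conservative threshold (say z > 4),
# which keeps false positives on ordinary human text vanishingly rare.
print(watermark_z_score("paste the text you want to check here"))
```

Without the key, the same test reads as coin flips, so third parties cannot detect (or strip) the mark by running the check themselves.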