You might not realize it, but Google is quietly running an experiment. Look closely at the headlines in your Google Discover feed – the stream of articles you see when you unlock your phone or open a new browser tab. For some users, those headlines aren't written by journalists anymore; they're crafted by artificial intelligence.
The change was first noticed when writer Sean Hollister spotted AI-generated headlines appearing in his feed. Instead of reflecting the nuance of the original articles, these AI versions are often vague or, worse, simply incorrect. One story about Microsoft’s AI initiatives was reduced to “Microsoft developers using AI,” offering little real information.
But the inaccuracies go deeper. A headline about Valve’s upcoming Steam Machine falsely proclaimed “Steam Machine price revealed,” when the article actually discussed the *lack* of a set price. Another suggested that using a new charger could *harm* your Pixel phone, twisting the original article’s explanation of compatibility issues.
The potential for misinformation is alarming. Imagine these errors applied to critical news stories, not just tech gadgets. Previous attempts at AI-powered news summarization have stumbled the same way; Apple, for one, paused its AI notification summaries for news apps after they mangled BBC headlines. This experiment feels like a step in the same direction.
Even more troubling, the AI seems prone to misrepresenting tone, potentially leading to libel. A playful article about a quirky feature in Baldur’s Gate 3 – the ability to “recruit” child NPCs – was transformed into the inflammatory “BG3 players exploit children.” The difference is stark, and the implications are serious.
Google acknowledges the experiment, calling it a “small UI test” for a limited number of Discover users. The company says it’s designed to make “topic details easier to digest,” but it effectively places AI-generated headlines at the forefront, in the slot where readers expect the publication’s own words.
So how can you tell if a headline has been touched by AI? There are a few telltale signs. The AI consistently favors brevity, producing headlines of four words or fewer. It also rarely capitalizes any word beyond the first, a departure from the title case most headlines use.
Look for the “See more” button beneath the preview. If it’s there, and a tag reads “Generated with AI, which can make mistakes,” you know the headline isn’t what the author intended. Articles with genuine headlines won’t have this button at all.
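Taken together, those signs amount to a simple rule-based check. Here is a minimal Python sketch of that heuristic; the function name and the `has_ai_tag` flag are hypothetical, since Discover offers no public API for inspecting its cards, and the disclosure tag itself remains the only reliable signal:

```python
def looks_ai_generated(headline: str, has_ai_tag: bool = False) -> bool:
    """Heuristic check based on the telltale signs described above.

    has_ai_tag stands in for the "Generated with AI, which can make
    mistakes" label; there is no public API for inspecting Discover
    cards, so spotting the tag is left to the reader's eyes.
    """
    # The disclosure tag is the only definitive signal.
    if has_ai_tag:
        return True

    words = headline.split()

    # Sign 1: the AI favors brevity, four words or fewer.
    is_terse = len(words) <= 4

    # Sign 2: sentence case. No capitalized words after the first,
    # ignoring all-caps tokens like "AI" or "BG3". Proper nouns
    # mid-headline will also trip this, so it's a weak signal.
    later_caps = [w for w in words[1:] if w[:1].isupper() and not w.isupper()]
    is_sentence_case = not later_caps

    # Both soft signals together suggest, but don't prove, an AI rewrite.
    return is_terse and is_sentence_case


# The rewritten Microsoft headline trips both soft signals:
print(looks_ai_generated("Microsoft developers using AI"))            # True
# A typical human-written, title-case headline does not:
print(looks_ai_generated("Valve Finally Reveals the Steam Machine"))  # False
```

Treat this as a thought experiment rather than a detector: plenty of human-written headlines are short and sentence-cased, which is exactly why the disclosure tag matters.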
Unfortunately, there’s currently no way to opt out of this experiment. Google hasn’t offered one, simply reiterating the test’s limited scope. That leaves users and journalists alike exposed to misleading headlines and potential misattribution.
For those who rely on Google Discover as a news source, this means a new level of scrutiny: every headline must be questioned, every preview examined. It’s a frustrating shift, and one that undermines the trust between readers and the publications they follow.
Ultimately, this experiment raises a fundamental question: at what cost do we pursue convenience? While AI may offer efficiency, the potential for misinformation and the erosion of journalistic integrity are risks we must carefully consider. The most reliable path remains the same: read the article itself, and form your own informed opinion.