Open your phone, swipe right, and a headline greets you. Lately, though, that line may not have been written by the outlet whose image and link you see; it may be a short, punchy sentence produced by Google's AI.
Google has quietly changed the behavior of its Discover feed: AI-generated headlines and summaries, once billed as an experiment, are now a standing feature. The company tells outlets the change helps people “explore topics” by clustering coverage and producing an overview that, Google says, “performs well for user satisfaction.” For many readers that means distilled context. For many publishers it means losing control over the first thing a reader sees.
How Discover's AI headlines work
Discover groups related stories into what Google calls “trending topics.” Instead of picking one publisher’s headline, the system synthesizes across sources and stitches together its own headline and a short summary. The card often shows several publisher icons (e.g., “Outlet A +11”) and borrows a large image from the top-listed outlet. Tap the headline and you land on an AI summary; tap the image and you usually go to a single publisher’s article.
That UX choice, making the card look like a normal news item while swapping in a machine-written headline, is at the core of the problem. When the AI gets details wrong, readers often assume the publisher behind the image wrote the misleading line.
When synthesis becomes misinformation
Critics have circulated many examples: headlines that flip nuance into certainty, conflate separate stories, or graft details from one article onto another. One outlet found its piece about a slow charger recast under the AI headline “Qi2 slows older Pixels,” a claim that was simply false. Another story about a single canceled order became a broad warning about “Amazon scams.” In a few cases, Google's AI even produced headlines that contradicted the article they linked to.
Those mistakes matter. Discover has been a major referral source for publishers, and misleading headlines can damage trust, degrade engagement (readers bounce when the article doesn't deliver what the headline promised), and reassign credit for reporting. Research and publisher reports also suggest the feed is giving more space to YouTube and posts from X, with some tests showing AI summaries that mainly nudge users toward playing a video rather than reading newsroom stories.
Why publishers are alarmed — beyond individual errors
This isn't just about sloppy copy. Newsrooms care about control: headlines are deliberately crafted to balance accuracy, context and tone. When a platform replaces that editorial framing with a brief machine summary, it erodes brand signals and can shift attention (and ad dollars) toward Google-owned properties like YouTube. One analytics firm found that in certain markets AI summaries on Discover often drove clicks to YouTube instead of the original reporting — a structural shift, not a marginal UI tweak.
Publishers also warn of worse outcomes on sensitive beats. Small phrasing errors in coverage of policy, health or finance can be consequential. And when audiences increasingly encounter news via feeds, the platform’s presentation layer carries outsized responsibility for the public’s understanding.
Spotting an AI-titled card (and avoiding the bait-and-switch)
A few quick checks help when a headline smells off:
- Look for multiple outlet icons at the top of the card (e.g., "Outlet +X") rather than a single publisher credit.
- Check whether the card shows a Follow button; trending-topic cards typically do not.
- Tap the outlet list to see original headlines from individual publishers before trusting the AI’s summary.
If the card’s headline and the article don’t match, it’s likely the AI — not the newsroom — that wrote the hook.
What Google could do (without breaking the feature)
There are practical fixes that would preserve the utility of clustering coverage while respecting sources and accuracy: clearer labeling ("AI-generated headline"), side-by-side display of the top publisher’s original headline, an opt-out for publishers who don't want machine headlines applied to their pieces, and stronger provenance signals (author, outlet, publish time). Those steps would reduce misattribution and let readers judge whether to trust the synthesized line or the original reporting.
Google is already folding AI deeper into its ecosystem, from experimental summary features to a broader push for conversational and agentic tools across its apps, and that context matters. The models behind Discover's summaries come from the same push as Gemini Deep Research and newer in-product controls such as the AI Mode button in Chrome. Platforms are reshaping how people discover and consume news; transparency and guardrails will determine whether those changes help or harm trusted journalism.
The change in Discover is small on the screen but large in consequence: a few words can redirect traffic, shift responsibility, and reshape perception. If Google insists on machine-crafted hooks, publishers and readers will keep pushing for clearer labels, better attribution, and a path that respects both the craft of headlines and the promise of useful AI.