This is a take. We’re going to be wrong about pieces of it. But we’ve now watched 30+ clients run AI-content programs (ours and other agencies’) and the data we have makes us hold the unpopular position: AI-generated SEO content, as the industry currently practices it, is a slow-motion ranking liability.

Here’s why.

The pitch

The pitch for AI content is irresistible: 100x the output at a fraction of the cost, "human-edited," "high quality." The promise: scale the long tail, capture the demand, and you've solved content marketing.

For about 6–9 months in 2024, it kind of worked. Sites publishing 200–1000 AI articles a month saw real ranking gains. The gold rush was on.

Then March 2024’s Helpful Content rewrite landed. Then the August 2024 core update. Then March 2025. Then March 2026.

What we’ve actually seen

Across the 30+ clients we’ve audited or co-managed:

  • Sites that published >100 AI-drafted pages per month between 2023 and 2025 were down a median of 38% in tracked organic traffic by April 2026.
  • The drop didn’t happen all at once. It compounded across 4–6 algorithm updates.
  • The brands that recovered all did the same two things: aggressively de-indexed low-quality AI pages, and had a real human completely rewrite the top 5–10% of pages worth keeping, with measurable expertise signals (author byline, credentials, testing data).
  • The brands that didn’t recover kept publishing through the dip. Most are now at 20–30% of peak traffic. Some have started over on new domains.
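The "de-indexing" step above is less exotic than it sounds: every page flagged for removal has to serve a noindex signal, either as an `X-Robots-Tag` response header or a `<meta name="robots">` tag. A minimal sketch of the audit side of that work (the function name and URLs are hypothetical, not our actual tooling):

```python
import re

def is_noindexed(html: str, headers: dict) -> bool:
    """Return True if the page signals noindex via header or meta tag."""
    # Header check: X-Robots-Tag containing "noindex"
    if "noindex" in headers.get("X-Robots-Tag", "").lower():
        return True
    # Meta tag check: <meta name="robots" content="...noindex...">
    pattern = r'<meta[^>]+name=["\']robots["\'][^>]+content=["\'][^"\']*noindex'
    return re.search(pattern, html, re.IGNORECASE) is not None

# Audit a crawl dump of (url, html, headers) tuples and surface pages
# that are still indexable despite being flagged for removal.
flagged = [
    ("https://example.com/ai-page-1", "<html><head></head></html>", {}),
    ("https://example.com/ai-page-2",
     '<html><head><meta name="robots" content="noindex,follow"></head></html>', {}),
]
still_indexable = [url for url, html, hdrs in flagged if not is_noindexed(html, hdrs)]
print(still_indexable)
```

In practice the "aggressive" part is volume: this check runs across thousands of URLs, and anything still indexable gets fixed or 410'd before the next crawl.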

The cleanest version of this we’ve seen: a supplement client we declined to onboard in 2024 because they wanted us to run a 500-article-a-month AI content program. They went with another agency. Two years later, their primary domain is down 65%, they’re rebuilding on a new domain, and the program never ROI’d.

The "human-edited" lie

Every agency selling AI content programs claims their output is "human-edited." Almost none of it is, in the way that matters.

What "human-edited" usually means in practice:

  • A human passes the AI draft through Grammarly
  • A human checks for factual accuracy on 3–5 claims
  • A human adds a couple of "expert" pull-quotes
  • A human ships it

What "human-edited" needs to mean for the classifier to reward you:

  • A subject-matter expert restructures the piece around an original argument
  • The piece contains first-party data, testing, or photography that didn’t come from the model
  • The piece reflects genuine expertise that’s checkable against the author’s wider profile

The first version produces 80 pieces a week. The second version produces 8. Most agencies sell the first while charging for the second.
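"Checkable against the author's wider profile" usually cashes out as structured data: schema.org Article markup that ties the byline to profiles a crawler can cross-reference. A sketch of what that looks like, with hypothetical names and URLs standing in for a real author:

```python
import json

# schema.org Article markup linking the byline to verifiable profiles.
# Every name and URL here is a placeholder, not a real person.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "We Tested 12 Brands for 90 Days",
    "author": {
        "@type": "Person",
        "name": "Jane Doe",                 # hypothetical expert byline
        "jobTitle": "Registered Dietitian",
        "sameAs": [                         # profiles a crawler can cross-check
            "https://www.linkedin.com/in/janedoe",
            "https://scholar.google.com/citations?user=janedoe",
        ],
    },
}

# Emit as a JSON-LD block ready to drop into the page <head>.
print('<script type="application/ld+json">')
print(json.dumps(article_schema, indent=2))
print("</script>")
```

The markup itself isn't the expertise signal; it just makes the expertise machine-checkable. Pointing `sameAs` at a thin or fabricated profile does nothing.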

What we do instead

We ship 1–3 long-form posts a month per client, all written by humans with subject-matter expertise (or co-written, with the expert taking the byline). We pair this with the off-page work that does most of the actual ranking: backlinks, engagement signals, and AEO (answer-engine optimization) seeding.

The output looks low compared to the AI agencies’ decks. The ranking outcomes don’t.

The bet: 1 piece of expert-led content paired with 30 well-placed external mentions outranks 50 AI drafts every time. We’ve tested it.

If we’re proven wrong in two years, we’ll write a follow-up. But for now, the data we have says the AI-content era is rolling back hard. Don’t be the brand learning that lesson at the bottom of the cycle.