The trouble with slop
In what some are calling the "Slop-pocalypse," a podcast startup is cranking out 3,000 podcasts a week. What should Apple and Spotify do?

The term "AI slop" gets thrown around a lot, and I made an attempt to better define it earlier this year. Briefly: Not all AI-generated content is slop, but virtually all slop today comes from AI. Quality—or rather the lack of it—is a factor, but that's subjective, so how do you properly gauge what’s slop and what isn’t?
There's a solid case to be made that if content can find an audience, one that doesn't just consume it but does so over time and with some degree of enthusiasm, that's a strong indicator the content isn't slop. And that would seem to be what Inception Point AI is arguing with respect to its AI-powered podcast platform, which, as first reported by The Hollywood Reporter, uses more than 150 different tools to crank out 3,000 AI-generated podcasts every week at a cost of less than $1 an episode.
A quick recap: Inception Point conceives, produces, and publishes its podcasts end-to-end with AI, and the company doesn't hide the nature of the content or how it's made. The synthetic hosts of its programs, including "Claire Delish" of the show BBQ and "Nigel Thistledown" of Gardening, all identify themselves as AI-generated.
As for why anyone would do this, the idea is somewhat similar to the audio overviews in Google's NotebookLM, which can whip up an AI-voiced podcast from whatever material you give it, even if it's just the travel log from your family vacation. Inception Point is conceptually the same, but the podcasts are public. It selects niche topics that would likely never be financially successful (or even sustainable) given their tiny audiences and the normal costs of creating a podcast.
While some have expressed respect for the company's novel and ambitious approach to creating content with AI, most in the media have reacted with slack-jawed disbelief and scorn. SmarterX CEO Paul Roetzer didn't hold back on his popular AI podcast when he said "I hate this." Media commentator Nick Hilton wrote on Medium that he believes what Inception Point is doing will wound whatever trust the company might build with listeners and advertisers, ultimately torpedoing the effort. And responses on X were brutal, with one user declaring, "the AI slop-pocalypse is here."
The stunned revulsion is palpable, but hold on a second. As I said at the outset, AI-generated content isn't necessarily bad content. ESPN, Fortune, and Time have all published content generated via AI, and the world kept turning. While those moves didn't entirely escape criticism, they're generally seen as forward-looking experiments or logical first steps toward an AI future. Inception Point AI may be further down the timeline of AI content, but is it really conceptually any different?
Still, I think the pushback is more than just the default knee-jerk antagonism that creatives tend to have toward anything synthetic. This is what "the cost of content going to zero" looks like, and it's challenging on a lot of levels. Inception Point has opened a Pandora's box. Even if we assume Inception publishes only content that passes some kind of quality bar, you can bet others will follow in its footsteps who won't be as discerning. Just as the low cost of publishing to the internet gave us content farms, reducing podcast production to a button-push makes audio slop cesspools inevitable: operations generating content en masse to flood discovery algorithms with no regard for whether anyone wants it.
The challenge, then, falls to the distributors (Apple and Spotify, in the podcast space) to filter the content appropriately. And that doesn't necessarily mean tossing out everything AI-generated. If a niche podcast about pollen reports hosted by Honey B. Hives actually serves a tiny group of listeners well, why should it be downranked or made invisible? The audience and its engagement should have the final say over whether any AI-made production is worthy of inclusion.
I find it highly unlikely that any Inception Point podcast will ever enjoy the success of, say, Serial. Most podcasts aren't just conveying information; the human connection between host and listener is a big part of a program's value. Inception Point AI is about to test that assumption, or rather whether there's room in the ecosystem for functional, informational podcasts where that connection isn't present. In a world where anyone can push a button and publish a podcast, the real question isn't how it's made; it's whether anyone sticks around to listen.
Learn Practical Ways of Using AI in Journalism and PR
AI isn’t a fad. For most journalists and PR professionals now, it’s simply the reality. Yet many reporters and executives are still stuck dabbling: using ChatGPT for story ideas or a few headlines and calling it a day.
It’s time to upgrade.
Coming up, The Media Copilot is offering a one-hour power session on how to actually use AI for real media work. Research smarter. Pitch sharper. Write faster—without losing your voice to the algorithm.
Here’s what you’ll walk away with:
The real state of AI in media—no hype, just how it’s changing the game.
A simple way to approach prompting to get what you actually want.
Pro-level use cases: Find better sources. Focus your stories and campaigns. Protect your data.
📅 Live: Sept. 21, 1 p.m. ET
💸 $49: The cheapest upskill you’ll make all year.
🎁 Extras: Slides, prompt templates, and a quickstart AI guide to keep you sharp.
👉 Don’t just survive AI. Use it to do better work.
The Chatbox
All the AI news that matters to media*
Publishers finally strike back against Google
Penske Media's lawsuit against Google marks a pivotal moment in the fight over AI's impact on media economics. The publisher behind Rolling Stone and Variety alleges Google's AI Overviews feature creates an impossible choice: allow the tech giant to use your content for AI summaries or disappear from search results entirely. This isn't just another copyright dispute—it's a direct challenge to Google's practice of tying search indexing to AI usage rights, effectively forcing publishers to subsidize their own replacement. While other publishers have sued AI companies over training data, Penske is targeting the specific mechanism that lets Google leverage its search monopoly to extract value from journalism without compensation. The outcome could determine whether publishers have any leverage against tech platforms that harvest their content to answer user questions without sending them to news sites. (AI-assisted)
AI makes fake PR campaigns scalable
A scheme exposed by Press Gazette reveals how AI has made fake news manufacturing disturbingly easy and scalable. Three connected PR agencies are flooding British newsrooms with hundreds of AI-generated releases featuring phantom experts like "Pete Nelson" and "Daniel Harris" who have no verifiable existence. Major outlets including the Mirror and New York Post have already been duped into publishing some of the agencies' fabricated stories, which exist solely to boost client SEO rankings through earned media coverage. The operation demonstrates a new threat to journalism's credibility: bad actors can now mass-produce convincing press materials faster than newsrooms can verify them. (AI-assisted)
Publishers get payment framework minus the power
IAB Europe's new technical framework for AI platform payments is a serious attempt to create standardized compensation for content scraping, but the challenge lies in enforcement. The framework proposes cost-per-crawl APIs and real-time content pricing through bid-response exchanges. While the technical infrastructure sounds promising, the system's success hinges entirely on voluntary participation from AI operators who currently face no legal obligation to pay for content access. However, the framework may help legitimize what's already happening organically, as publishers increasingly negotiate individual deals rather than rely on industry-wide standards that lack regulatory backing. (AI-assisted)
USA Today's chatbot bets on homegrown answers
With DeeperDive, which embeds an answer engine directly on USA Today's homepage, Gannett is essentially betting that readers will prefer getting AI-generated answers from a trusted source rather than bouncing to external AI tools like Perplexity or ChatGPT. The strategy: instead of losing traffic to Google's AI Overviews, Gannett keeps users on-site while mining its own journalism archive. What's telling is that DeeperDive only draws from Gannett's content and avoids generating responses when sources conflict—a conservative approach that prioritizes accuracy over comprehensiveness. The real test will be whether this defensive move actually retains readers who might otherwise head to more comprehensive AI search engines, or if it simply delays the inevitable shift toward external AI tools that can synthesize information from across the entire web. (AI-assisted)
*AI-assisted Chatbox items are created with AI and then carefully edited by Media Copilot editors.