What happens when AI fact-checks itself?
As the presence of AI-generated content grows, so does the need for quality control. The answer might be more AI.
Remember when Apple’s notification summaries garbled the news into completely made-up headlines? Those erroneous summaries sure could have used a fact-checker. Humans obviously can’t keep up with the scale of machine-generated content, but is it possible to fight AI lies with AI verification? As newsrooms push deeper into AI tools, AI fact-checking might be inevitable—as darkly ironic as the whole idea sounds.
I dive into that subject in today’s column, but, fact-checking aside, there are plenty of other ways AI can help you as a journalist. AI can speed up all kinds of tasks, from research to analyzing troves of data. If you’d like to learn how, you’re in luck: In partnership with my friends at The Upgrade, The Media Copilot is offering a special two-week hands-on course, the AI Upgrade for Journalists, starting June 24.
It’s focused, practical training designed to make AI work for your beat, your voice, and your deadlines. And we’ve still got a few spots left! You can register here. Got questions? Just reply to this newsletter.
Now a quick word about fixing your terrible site search from today’s sponsor, Direqt, then on with the show.
A MESSAGE FROM DIREQT
Boost loyalty with smarter onsite search
Site search has long been a weak spot for publishers, even though the readers who use it are often their most valuable. They spend 287% longer on-site per session, view 397% more pages per visit, and generate 511% more repeat visits than other readers.
But if the experience is bad? The users bounce.
Direqt gives publishers the AI-powered search experience audiences now expect: instant summaries, cited sources, hyper-relevant answers, and natural follow-up prompts. It’s quick to implement and built to keep readers engaged.
Ready to level up site search? Reach out to the Direqt team here for a custom demo.
The future of fact-checking might be automated
When it comes to AI car crashes, the Chicago Sun-Times' recent gaffe involving a fabricated summer reading list quickly escalated into a multi-vehicle pile-up. When a freelance writer lazily used AI to create a list of book recommendations, most of the titles turned out to be completely made up. The article breezed through a spotty editorial process—not only at the Sun-Times but reportedly at another paper as well—and ultimately reached thousands of readers. Eventually, the CEO issued a lengthy apology.
The incident’s most obvious effect was to sound a badly needed alarm about the dangers of AI becoming too embedded in our information ecosystem. Yet CEO Melissa Bell resisted the impulse to place the blame solely on AI. Instead, she emphasized accountability among the humans deploying these tools at multiple points in the process, including herself. She explained how she had approved publishing special inserts like the one the list appeared in, assuming at the time there would be adequate editorial review (there clearly wasn’t).
The paper has since implemented changes to patch this particular hole, but the episode shines a light on a gap in the media landscape that can only get worse: As AI-generated content—authorized or not—continues to grow, the need for editorial guardrails also increases. And given the industry's ongoing quest to do “more with less,” it’s clear that human oversight alone won’t be enough. The uncomfortable answer: AI will need to fact-check AI.
Turning the tools on themselves
I know, the idea sounds terrible—like letting the fox guard the henhouse or sending Stormtroopers to mediate on Endor. But AI fact-checking isn’t a novel concept. When Google first launched Gemini (then known as Bard), it offered an optional fact-check step to cross-check its own claims. Eventually, this became part of the standard approach in AI-powered search, broadly improving results, though they remain far from perfect.
Naturally, news organizations hold themselves to a higher standard, and they should. Running a news outlet carries the duty of ensuring published material is accurate. The blanket caveat that “AI can make mistakes” may work for ChatGPT, but it doesn’t cut it in journalism. That's why for most, if not all, AI-generated outputs (such as ESPN's automated sports recaps), humans check the work.
But as AI authorship proliferates, a real question emerges: Can AI do that job? Put aside the strange optics for a minute and see it as math, the key number being how often it gets things wrong. If AI can reduce errors at least as effectively as a human editor, shouldn't it get the job?
If you haven’t yet tried AI for fact-checking, a new tool called isitcap.com offers a snapshot of what’s possible. It doesn’t merely label statements as true or false; it evaluates entire articles for context, source reliability, and even bias. It also compares results from multiple AI search engines to cross-check itself.
You can easily envision a newsroom setup where an AI fact-checker flags questionable claims, routing feedback to the writer. If the “writer” happens to be an AI as well, the revisions could happen near-instantaneously, and at scale. Articles might ping-pong back and forth until they hit a certain credibility benchmark—anything below that would be kicked to human review.
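If you’re curious what that ping-pong loop might look like in practice, here’s a rough sketch in Python. To be clear, this is purely illustrative: the fact_check and revise functions, the 0.9 credibility cutoff, and the three-pass limit are all placeholders for whatever models and thresholds a newsroom would actually choose, not any real system’s API.

```python
# Hypothetical sketch of the AI-writes, AI-checks loop described above.
# All names, thresholds, and limits are illustrative placeholders.

from typing import Callable

CREDIBILITY_THRESHOLD = 0.9  # example cutoff for "good enough to publish"
MAX_REVISIONS = 3            # escalate to a human after this many passes


def review_loop(
    draft: str,
    fact_check: Callable[[str], tuple[float, list[str]]],
    revise: Callable[[str, list[str]], str],
) -> tuple[str, bool]:
    """Return (article, needs_human_review)."""
    for _ in range(MAX_REVISIONS):
        score, flagged_claims = fact_check(draft)    # AI checker scores the draft
        if score >= CREDIBILITY_THRESHOLD:
            return draft, False                      # clears the bar: ready to publish
        draft = revise(draft, flagged_claims)        # AI writer addresses flagged claims
    return draft, True                               # still below the bar: human review
```

In a real deployment, the checker and the reviser would wrap whichever models the newsroom trusts, and the threshold would be tuned against its own tolerance for error.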
It’s a compelling concept, and one that fits neatly into what some outlets are already doing with AI-generated summaries. Nieman Lab recently highlighted how Bloomberg, Yahoo News, and The Wall Street Journal are using AI to draft bullet points and highlight takeaways from articles. Yahoo and the Journal maintain a human-in-the-loop system, according to the report (Bloomberg’s oversight is less clear). These practices sit at the bleeding edge—juggling speed and reach with reader trust. One false bullet might seem minor, but when trust is already eroding, even a small error can undercut the entire strategy.
Yes, human oversight bolsters accuracy, but it also demands more staff—something few newsrooms can afford. AI fact-checking could offer those outlets a lifeline for managing their AI content responsibly. Politico’s union recently criticized its publisher for releasing AI-generated reports for subscribers based on journalists’ work—reports that occasionally misfired. A built-in fact-checking layer might prevent at least some embarrassing mistakes, like attributing political stances to groups that don't exist.
Why one mistake could undermine it all
Even if AI-on-AI review reduces hallucinations, there's another problem that stems from increasing reliance on machines: trust. The Sun-Times debacle was far from the first AI content scandal, and it won’t be the last. Some outlets are preemptively banning AI tools from editorial workflows entirely.
Because of AI's well-documented problems, public tolerance for machine error is lower than for human error. It’s the same phenomenon seen with self-driving cars—if an autonomous car gets into an accident, the scrutiny is much greater than if a person were driving. Call it automation fallout bias, and whether you think it's fair or not, it's undoubtedly true. A single well-publicized hallucination could quickly sink any AI initiative, even if such errors are statistically rare.
Then there’s the cost. Running layered AI processes for writing and fact-checking could be expensive in compute resources—and leave a bigger carbon footprint. All that, just to polish machine-written text that still falls short of the investigative, source-based journalism that only humans can deliver. Sure, it might ease the burden on editors, but is that a price worth paying?
Despite all that, AI checking AI feels inevitable. Hallucinations are a baked-in flaw of generative models, and newer "thinking" models appear to hallucinate even more than their less sophisticated predecessors. If implemented properly, AI fact-checking could be more than a newsroom tool; it could evolve into critical infrastructure, a foundational element of the internet itself. The challenge is creating systems that earn our confidence—not just automate it.
AI content isn’t going anywhere. Its volume will only grow, and we’ll need tools that can match that expansion. AI fact-checkers can help—but only if we’re willing to accept their imperfections. We may not trust AI to always speak the truth, but at least it might be able to catch itself in a lie.
A version of this column first appeared in Fast Company.