Why ignoring AI may be the bigger risk for journalists
Chatbots aren’t perfect, but neither are journalists. Working together, they can get the story straight and better serve readers, listeners, and viewers.
For today's column, I asked my friend, longtime tech journalist Sean Captain, to talk about how he uses AI, and as someone who teaches courses on how journalists can effectively use tech, I read his piece with rapt attention. He goes into great detail on how to use AI not just as a fact-checker, but as an on-call expert to greatly improve the accuracy of explanatory journalism.
Sean's piece touches on one of the best aspects of treating AI as a collaborator: It passes no judgment. Journalists often need to calibrate their questions to the person they're speaking to, anticipating how wording or tone might alter how a source perceives them. But with AI, there's no baggage, no politics, no reputation to be damaged or enhanced. It will answer all your questions with infinite patience, on as many topics as you want.
Leveraging that aspect of AI to create better journalism is a big unlock, one that Sean has mastered. Of course, there are plenty of other ways to level-up your research, reporting, and editing with AI, and many of them are featured in my upcoming course, AI for Journalists, which begins Sept. 10. Over six weeks, the course dives into several use cases for AI in journalism—from deep research to social copy to beat monitoring to agents—all with the aim of making you a more thorough and effective journalist.
You can sign up for the course below, and if you're a newsroom looking to send a group, deep discounts are available—just reach out to me or reply to this email. I'm also hosting a LinkedIn Live on Sept. 3 at 2 p.m. ET with my course partners at The Upgrade. Tune in to see a live lesson from the course, get your own 25% discount code, and have a little fun with agents. —Pete
JOURNALISTS: LEVEL UP YOUR AI SKILLS THIS FALL
Journos, let’s be real: dabbling in ChatGPT isn’t going to get you very far. It’s 2025. If you’re not integrating AI into your research, writing, and workflow, you’re already behind.
This isn’t about chasing the tech trend. It’s about keeping your edge in a media landscape that’s changing fast: jobs are being redefined or eliminated, tools are constantly evolving, and you’re often not sure where your human attention is best placed.
That’s why I, alongside my friends at The Upgrade, have built AI Upgrade for Journalists. It’s a six-week live course designed to make AI your actual assistant, not just a toy. You’ll learn to move faster, write smarter, and report with more precision than ever.
📅 Starts September 10
🧠 Weekly live sessions + 1-on-1 coaching
📈 Workflows for smarter research, faster writing, monitoring beats
🎯 What you’ll master:
• Real-time story sourcing and beat monitoring
• AI-powered outlining, editing, and data analysis
• How to keep your voice (and your ethics) intact
AI is changing the rules. Make sure you know how to play the new game.
What reporters gain from asking AI dumb questions
Sure, ChatGPT is fine to settle an argument or get travel suggestions. But hallucinations and murkily sourced training data render it too risky for reporters to trust.
That’s the common wisdom, and there is certainly truth behind it. But used with those limitations in mind, the generative AI models from Anthropic, Google, OpenAI, and others actually improve the chances of getting 100% accurate reporting—even if their raw results are not 100% accurate.
For one, they offer unlimited time and judgment-free patience to explain complex topics. They can also surface gaps in manual fact-checking. Today, not using generative AI could be the riskier course for a journalist’s professional reputation.
An on-call, sorta-expert
Employing generative AI as a teacher and fact-checker is simply an extension of what some journalists already do when they use large language models to summarize long, complex documents such as government reports or research papers. Those who go even this far employ a simple error-correction strategy: Use the summary as an initial roadmap to interesting points, then check the source to confirm them.
That provides a framework for a far more powerful application: using gen AI as a tutor to conserve the limited time you have with experts for interviews and follow-up questions. I became a quick convert to this practice last summer when writing a Fast Company article probing the dangers of lithium-ion battery fires in bikes and scooters.
For years, I’d been writing around how battery technology actually works. I might vaguely refer to how lithium ions migrate through a battery and somehow release electrons. It was hard to go further, because explanations I found were either comically simple or technically impenetrable (at least to me). What’s more, experts with any media savvy have learned to avoid going too deep with journalists who are floundering on technical details. The same challenges could apply to advanced knowledge of many topics: government, economics, finance, medicine. The result is often mangling the facts to fit childish, worn-out analogies.