AI is pretty great at writing. Now what?
What AI's improving writing skills means for journalism. Plus: Amazon's Alexa+ reads the news, the UK backtracks on copyright, and more.
It's finally happened — I've gone viral. When I shared my post about The New York Times formally adopting AI tools in the newsroom, the LinkedIn version garnered 420,000 views, inspiring 174 reshares and 171 comments. I'm still sifting through them all. I wouldn't normally take a victory lap, but the discussion in the comments was fantastic, with several people asking great questions about the news.
One of the most pertinent: how will the Times enforce its policies, in particular its prohibition on drafting articles with AI? Today I explore the implications of keeping AI copy out of stories and ask whether that policy can endure, given how quickly large language models (LLMs) are advancing.
Before that, though, I wanted to make sure you have this on your radar: I'm speaking on Friday (Feb. 28) at 1 p.m. Eastern Time on a virtual panel entitled Unlocking AI Potential for Media Publishers, part of the State of Digital Publishing's WordPress Success Week. Alongside Matt Karolian at The Boston Globe, I'll be talking about strategy and tactics that newsrooms can use to get more mileage out of their investment in AI — and the pitfalls to avoid. Check out the event site to register for the panel.
And I can't believe I haven't mentioned this until now: This afternoon (Feb. 27) I'm teaching a one-shot workshop, AI for PR and Communications, for Section School! It starts at 3 p.m. Eastern, and it's seriously going to be sick — 2 hours of hands-on demos on using AI for media training, campaign analysis, deep research, and lots more. There's still time to sign up right here, and if you like what you see, you can go way deeper with The Media Copilot course (details below).
A MESSAGE FROM THE MEDIA COPILOT AND THE UPGRADE
🚀 Master AI for PR & Media – Starts March 18! 🚀
AI is reshaping PR—are you using it to your advantage? Our six-week live course, AI for PR & Media Professionals, is designed to help you automate workflows, enhance media strategy, and create compelling content with cutting-edge AI tools.
Taught by Pete Pachal, Peter Bittner, and Kris Krüg, this course moves beyond basic prompting—helping PR pros work smarter, not harder with AI-driven strategies.
🎯 Spotlight Lesson: Media Monitoring & Crisis Management – Learn how to use AI to track public sentiment in real-time and create custom training bots that guide your crisis response.
💡 What you get:
✅ Live instruction & hands-on workshops
✅ Personalized coaching sessions
✅ Real-world applications for PR & media
🕒 Starts March 18 – Spots are limited! Use the code ADVANCE15 at checkout for a 15% discount
Human bylines in the age of machine eloquence
If you've taken one of my AI Fundamentals courses, you know that after some level-setting I get straight into writing with AI. There are myriad ways to approach this and get useful prose (for a really good one that keeps your writing 100% human, check out my podcast with Alexandra Samuel), but one thing is table stakes: incorporating into your prompt some instructions to avoid the annoying corporate speak that's come to characterize AI writing.
The Media Copilot actually published the exact prompt for this some time ago (I've since refined it). But now, as we get deep into model update season, I’m starting to wonder how much longer it's going to be necessary.
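For readers who work with models through an API rather than a chat window, the same idea applies: put the style guardrails in the system message so they ride along with every request. Here's a minimal sketch using OpenAI's Python client; the guardrail text is a stand-in of my own invention for illustration, not the refined prompt mentioned above.

```python
# A minimal sketch: baking style guardrails into every request via the
# system message. The guardrail text is a hypothetical stand-in, not the
# refined prompt from The Media Copilot; swap in your own.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

STYLE_GUARDRAILS = (
    "Write in a plainspoken, conversational voice. Avoid corporate "
    "buzzwords ('leverage,' 'synergy,' 'robust'), avoid the 'It isn't "
    "X, it's Y' construction, and favor short, concrete sentences over "
    "wordy brochure prose."
)

def draft(prompt: str) -> str:
    """Generate a draft with the style guardrails applied."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any recent chat model works here
        messages=[
            {"role": "system", "content": STYLE_GUARDRAILS},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

print(draft("Summarize this week's AI news for media professionals."))
```

The pattern is the same with any chat-style API: the system message persists across requests, so you write the de-corporate-speak instructions once instead of pasting them into every prompt.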
In the past week we've seen the release of Grok 3 and Claude 3.7 Sonnet, and based on my initial use, they both produce excellent writing with minimal prompting. The rumor mill suggests GPT-4.5 is imminent, and I'm sure it will test just as well, if not better. Even if it doesn't, GPT-4o is mostly there; OpenAI CEO Sam Altman said on X a couple of weeks ago that the company had improved the model, and my sense is that it's a better writer than before (although it still puts sentences in the form of "It isn't [X], it's [Y]" a little too often).
The phenomenon of AI talking like a first-year marketing student by default is something I chalk up to ingesting the entire internet as training data. My theory is that much of the public-facing writing on the internet — especially at high-ranking domains of major corporations — is the anodyne, wordy prose you see in your typical company brochure. If AI is truly serving up the "average" of what's out there, the style of output is going to hew closer to a Coca-Cola annual report than a cultural essay in The New Yorker.
While a room full of English majors may not agree on the precise definition of "good" writing, steering an AI toward higher-quality writing isn't conceptually difficult. Frontier model creators have had two years to do that, and we're now reaping the results. AI models were always superb at spelling and grammar, and now they're slowly but surely mastering the nuances of style and voice.
When quality writing becomes a commodity
I thought about this in the context of The New York Times' newsroom rules for AI, one of the big ones being the prohibition on drafting or significantly altering an article with AI. When I shared the story on LinkedIn, more than a few people wondered, "How would they know?" AI detectors are notoriously unreliable, and given the advancements I just mentioned, they're probably even less reliable today.
I've worked as an editor for most of my journalism career, and I can say, with zero doubt, that AI now has better mastery of English than half the writers I've worked with (I'm including freelancers, so calm down, former colleagues). I've heard similar observations from other editorial leaders, who usually point to Anthropic's Claude, which set the bar last year for out-of-the-box writing among the frontier models.
All of this moves us toward an uncomfortable question: what justifies the "no AI writing" rule (which almost all serious publications have, not just the Times)? The obvious answer is hallucinations, which appear to be inherent to generative AI. The models may be getting better at weeding out errors and untruths, but it's unlikely we'll ever reach 100% accuracy, at least for raw outputs.
Consider this: say you designed an AI system, and an editorial process around it, that guaranteed no hallucinations. What then? In that case, forbidding AI writing starts to feel more like a bias than a safeguard of quality, especially as the models improve and their writing surpasses the skills of an ever-greater proportion of human writers.
To be clear, none of this is to say journalism isn't valuable; much of it remains exclusively in the domain of humans. To name just a few things AI cannot do (and shows few signs of ever being able to do):
do on-the-ground reporting
build relationships
earn the trust of sources
conduct in-depth investigations
ask questions based on human experience
In addition, writing is integral to the overall process of communication, and in a fundamental way can't be distilled out of it — writing is thinking, as a wise editor once told me. But it's fair to think about writing, the product, as separate from the process when we're talking about audiences: If a reader must choose between two articles conveying the same set of ideas, one human-written and one machine-written, and the quality is the same, do they care which is which?
Creating a 'human' label for writing
I love a great Norman Mailer essay as much as the next guy, but most writing in journalism is more functional. AI already has superhuman access to information; as it gets better and better at the actual craft of writing, why shouldn't AI prose be allowed to compete in the marketplace of information?
To clarify, I'm not talking about AI replacing a human writer outright. AI has no spark, no ideas of its own, no intent to communicate. This is about whether humans, who do have those things, should use AI writing as a tool in that communication.
I recognize this makes people deeply uncomfortable, and it should. As models become ever-better wordsmiths, I can't help but imagine a future where the "human" label for writing starts to become similar to the "organic" label in supermarkets: Something marketable and sought after by a not-insignificant part of the audience, but ultimately not provably better for anyone's information diet. In the end most consumers of that information will simply ask, what's cheaper?
The Chatbox
All the AI news that matters to media*
Have you heard the news? Alexa has.
Amazon unveiled its new, AI-powered version of Alexa on Wednesday, enabling it to behave more like, well, ChatGPT. The Verge has a pretty good rundown of all the new features of "Alexa+," and a big one is the voice assistant's ability to act as a kind of lite news aggregator. From the announcement: "The system’s deep knowledge foundation is supported by partnerships with major news organizations including Associated Press, Reuters, Time, USA Today, Politico and many others. This enables Alexa+ to provide quick, accurate responses across countless topics, from financial markets to sports statistics."
The update doesn't come to Alexa users until March, so the jury's still out on "countless," but this is a big change for Alexa, whose news abilities were previously limited to basic weather reports. Business Insider reported that Amazon was inking licensing deals with publishers until the last minute to ensure it had content for Alexa, though I'm curious how those deals compare to agreements for online search. Alexa+ doesn't have ads; instead, users pay $19.99 a month for it. However, Prime subscribers get it free even though Prime itself costs just $14.99 a month. Presuming publishers get some tiny slice of that, it sounds like Amazon might actually end up losing money on this feature.
UK government blinks first in AI copyright showdown
While publishers in the U.S. anxiously await the outcome of various court cases over AI's use of copyrighted material, the UK seemed to be marching forward with legislation that would exempt tech companies from British copyright law when training AI models, putting the burden on content creators to opt out if they wished. That march was halted after a massive outcry from publishers and artists alike, and the government is reportedly walking the plan back, according to The Guardian.
It was definitely an unforced error on the part of the UK government, and an uncharacteristic one considering Europe's relatively hostile stance toward Big Tech. Graham Lovelace writes in Press Gazette that a strange species of copyright denier is rampant in the British parliament, intent on "clarifying" copyright to spur more AI investment. You'd think there would be ways to do that without incensing every content creator in the country.
Pentagon's new AI can read the enemy's newspapers
The Pentagon is now using AI to read other countries' news, and it might transform how military intelligence operates. The Chief Digital and AI Office's deployment of BigBear.ai's Virtual Anticipation Network (VANE) creates an automated system that processes media content within an adversary's borders to extract strategic insights, DefenseScoop reports. While the DoD remains tight-lipped about specifics (declining to disclose the $1.3 million contract's technical details), the move represents the growing militarization of media analysis. A fair question: what constitutes "adversarial media" in an era when every publication is technically global?
Missouri newspaper shows AI done right for local news
While most media outlets cautiously experiment with AI, a family-owned newspaper network in small-town Missouri is proving local journalism can lead the way. Rust Communications has deployed an integrated AI system across its 15 newspapers that goes well beyond a prototype, Editor & Publisher reports, generating an impressive 30-45% increase in online traffic and significant subscription growth. The company's AI assistant, "Eddie," doesn't replace journalists; it enhances their work by spotting overlooked angles and pulling contextual information from archives. Most impressively, even veteran reporters who've spent decades in the business find value in the system.
*Some items are AI-assisted. For more on what this means, see this note.