Does AI make you dumber?
By outsourcing parts of our decision-making to a machine, we may be seeding a massive, long-term cognitive problem.
As more companies adopt AI, there's naturally a lot of concern over jobs. But there's a subtler concern about AI's effect on work beyond the obvious workforce reductions: Will our increasing reliance on machines to make decisions lead to our own cognitive decline? Is there a way for media professionals to embrace AI without taking a hit to critical thinking?
I'll ponder that question in a moment, but first, a quick reminder that I'll be at the PRSA conference in New York City on Thursday, where I'll be leading a very hands-on session on the use of AI in PR at 3 p.m. If you're at the event, please swing by and say hi!
Also, because you demanded it, my live intro class, AI Quick Start, is back for July. It's happening on Thursday, July 17, at 1 p.m. ET. Are you struggling to get good outputs from ChatGPT? Not sure how to use it to become a better writer, reporter, or editor? This class will get you on the path to using AI for real journalism and communications workflows. And the class is just $49! You can sign up here.
Now a quick word from our sponsor, Direqt. They’re keeping this post free, so please check out their AI-powered site search if you’re in the market. Then let's talk about how AI may be softening our brains.
A MESSAGE FROM DIREQT
Boost loyalty with smarter onsite search
Site search has long been a weak spot for publishers despite the fact that readers who use it are often their most valuable. They spend 287% longer on-site per session, view 397% more pages per visit, and generate 511% more repeat visits than other readers.
But if the experience is bad? Users bounce.
Direqt gives publishers the AI-powered search experience audiences now expect: instant summaries, cited sources, hyper-relevant answers, and natural follow-up prompts. It’s quick to implement and built to keep readers engaged.
Ready to level up site search? Reach out to the Direqt team here for a custom demo.

Can media pros rely on AI without eroding judgment?
As amazing as AI can be, most people who use it, even in advanced ways, will admit it doesn't actually come up with anything original. Sure, it can spot patterns in data and connect disparate pieces of information to create content that's never been written before, but when you examine the actual ideas, they're not really breaking new ground.
This is related to another truth about AI: that, out of the box, it has more to offer a novice who wants something passable than a professional who wants something good. That's not to say AI has nothing to offer professionals, of course—you just need to use it more deliberately, more collaboratively, and with better context.
That requires discipline and training (call me!), but play out that last paragraph over time and another issue with AI use emerges: using AI to create passable stuff can sabotage the very skills you need to acquire professional judgment. The more you lean on machines to complete cognitive tasks you once might have done on your own, the weaker the mental muscle for those tasks gets. The broad phenomenon of GPS weakening people's sense of direction is evidence of this. But in case that's not convincing, MIT just published a study showing that people who used ChatGPT to write essays exhibited significantly lower brain activity than those who worked unaided.
Why AI is different
We can, as a society, survive offloading navigation to GPS, and probably many other isolated mental tasks. It starts to become a serious problem when the mental muscles you develop by doing those tasks support more complex thinking and strategic decision-making. Projected over the long term and in the aggregate, that could have big consequences for the workforce and for the quality of work in general.
That seems to be the point of this Wall Street Journal piece highlighting Amazon CEO Andy Jassy's comments about AI. Jassy advanced a fairly common view: that AI will free many employees from rote work, enabling them to "think strategically" and refocus on tasks that only humans can do. It's not exactly a radical take, but Jassy may have said the quiet part out loud when he predicted Amazon's workforce would shrink as a result.
Amazon's employee count aside, the big concern is that relying on AI for all our rote work will make workers intellectually less nimble, reducing their ability to think critically and deadening the quality of work overall. That's because developing the skill to think strategically in any particular field is usually informed by experience—i.e., building the mental muscle around rote tasks.
This gets at why AI, on its own, isn't a very good artist. In a viral essay published last year, Ted Chiang criticized the notion that you could separate the idea of doing something from the act of doing it, a stance implied by Sam Altman's recent declaration that AI is empowering "idea guys." It's in all those so-called rote tasks of kneading the clay, painting the brushstrokes, or choosing specific words, in all the tiny, individual decisions, that art is created.
When I first responded to Chiang's essay, I wondered whether you could categorize tasks so that some could be offloaded to AI while other, more central ones remained human. That might be a starting point, but now I believe the person's experience level matters even more. In most areas of expertise, mastery is about graduating to more challenging tasks. The reps you did early on were important, but you don't keep repeating the same routine forever.
Earning the right to use AI
Still, if business were all about skill-building, we wouldn't use AI for anything. The use of AI is driven by economics, and finding ways for a workforce to use AI without succumbing to long-term brain rot will be a challenge for every industry, including the media. A tiered system might be the best option, one where usage guidelines for AI vary not just by the task, but also by the experience level of the person using it.
In a newsroom, an example would be requiring that AI-written copy be edited by subject matter experts rather than general copy editors. It might also mean interns and newbie reporters complete a number of "rewrite this press release" assignments by hand before they're allowed to integrate AI into the process. The point is that AI use needs to be considered across more than one dimension: not just how much efficiency is gained, but also whether the team is maintaining the skills it needs to audit the outputs.
I took a stab at creating a flowchart to map out a process for deciding whether to use AI for a particular task. I'm not advancing this as some kind of template, and it's certainly not comprehensive (it leaves out considerations around cost, data privacy, and other factors), but it may be a helpful starting point when thinking about how AI tool use might vary by experience.
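If it helps to see that logic spelled out, here's a minimal sketch in Python of the kind of tiered decision process I'm describing. To be clear, the specific tiers, task categories, and recommendations below are hypothetical stand-ins I made up for illustration, not a reproduction of the chart or anyone's actual policy:

```python
# Illustrative sketch of a tiered AI-usage policy.
# The experience tiers, task categories, and recommendations
# are hypothetical examples, not an official guideline.

def ai_usage_guideline(experience: str, task_complexity: str) -> str:
    """Suggest how (or whether) to use AI for a task.

    experience: "novice", "intermediate", or "expert"
    task_complexity: "rote" or "complex"
    """
    if experience == "novice":
        # Novices still need the reps: doing rote work by hand builds
        # the judgment they'll later need to audit AI outputs.
        return "Do it by hand; compare against an AI draft only afterward."
    if task_complexity == "rote":
        # Experienced staff can offload rote tasks,
        # as long as someone qualified audits the result.
        return "Prompt for a draft, then audit it yourself."
    if experience == "expert":
        # The caveat: complex work calls for iterative collaboration,
        # not a single prompt-and-accept pass.
        return "Collaborate with AI iteratively; don't just prompt for an output."
    # Intermediate staff on complex work: don't go it alone with AI.
    return "Pair with a subject matter expert before bringing AI in."


print(ai_usage_guideline("novice", "rote"))
print(ai_usage_guideline("expert", "complex"))
```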
Note the one caveat for experts using AI on a complex task: it should be used in a collaborative way. This is where charts and tables can't capture the nuance of applying AI to every workflow. AI, by its nature, is very open-ended software, and there are any number of ways to use it, even for the same deliverable. Someone with experience would need to collaborate with AI iteratively to reach a conclusion, rather than just prompting for an output, as would often be fine for rote tasks.
So how you use AI can be as important as the decision to use it at all. AI is an incredible accelerant, but if we're not careful it could accelerate intellectual decay along with productivity. For employees to build and maintain the skills they need to think critically, employers need to have a point of view on what disciplined use of AI looks like. Yes, using AI does outsource some decision-making. But the decision to let it soften our minds is still ours.