Redrawing the AI copyright battle map
Recent courtroom wins for Meta and Anthropic seem like setbacks, but they might point to a winning strategy for the media.
For the media’s relationship with AI, sometimes I think we’re all just waiting for the courts to rule definitively on whether it’s OK (well, legally justifiable) for AI to consume and repurpose “publicly available” content. Of course, it’s never as simple as a single answer, but a couple of recent courtroom decisions have started to better define the edges of the issue. Although they broadly appear to favor AI companies, they also reveal weak spots in their “fair use” defense that content creators and publishers might exploit.
More on that in a minute, but first I’d like to talk about what I’ve been up to these past couple of months. If you’ve read the newsletter for a while, you might remember that I held a “prototype” course on AI for Journalists last month. That was a step toward perfecting the material for a comprehensive, six-week course that I’m launching in the fall with my partners at The Upgrade. Far from just a bunch of demos, AI for Journalists will walk you through multiple use cases, complete with one-on-one coaching, plus a capstone project, to ensure you don’t just see what AI can do—you actually do it.
The course is a natural complement to our existing six-week course, AI for PR and Communications Professionals. Both courses begin the week of Sept. 8, and if you book now, you can save big. We’re offering a 25% discount on both programs until July 31. If your newsroom or agency has a group of three or more, we can offer a group rate that’s even better—feel free to reach out (or just reply) for details on that, or if you have any questions about the courses.
If you need a program that’s customized for your team, we can do that, too.
Here’s a little more detail (and those discount codes), and then let’s get into the legal drama (no Chatbox this week—it’ll be back next) 👇
A MESSAGE FROM THE MEDIA COPILOT
Most people are still treating AI like it’s a clever party trick. Ask it to write a tweet, generate a blog outline, maybe crank out a press release, and boom, done. But if that’s the limit of your AI game, you’re falling behind.
It’s not about using AI. It’s about using it well.
That’s why we’ve built new courses—one for journalists, one for PR pros—designed to really transform the way you work. This isn’t just demos. This is the strategy layer. The workflow layer. This is the part where you start really unlocking those productivity gains you hear about, all with your voice, your insight, your value intact.
Both courses kick off the week of Sept. 8. Both last six weeks, are 100% live (with recordings), and are very hands-on. And both come with 1:1 coaching and a capstone project guaranteed to level up your personal workflows.
💥 Limited offer: Get 25% off either course if you enroll by July 31.
🔹 Journalists: Use code AIJO25OFF-SEP (checkout link)
🔹 PR & Comms: Use code AIPR25OFF-FL1 (checkout link)
👉 If you’ve been waiting for a signal that it’s time to upgrade the way you work with AI, this is it.
Recent copyright rulings on AI might actually be good news for publishers
The courtroom battles over AI and copyright are intensifying.
Two rulings handed down recently have shifted the course of the legal fight between AI companies and content creators, and on the surface, they appear to benefit the former. First, Anthropic secured a favorable result in a case probing whether it could cite “fair use” for ingesting large archives of books to train its Claude AI models. Then, a federal judge sided with Meta, concluding the company hadn’t infringed on the copyrights of several prominent authors who sued over their works being used to train its Llama models.
At first blush, this appears grim for authors and content creators. Though neither decision sets a binding precedent (the Meta judge even took care to stress the case's narrow focus), two rapid-fire rulings falling clearly in favor of AI firms send a strong message: that “fair use” may be a reliable legal shield for AI companies—potentially even in high-profile cases like those involving The New York Times and News Corp.
But, as always, the picture is more nuanced. The judgments in both cases were more mixed than they appear, and, importantly, they’re instructive. Far from slamming the door on copyright holders, they spotlight the cracks where legal challenges could break through.
What the Anthropic case clarifies about AI inputs and outputs
First, a quick disclaimer: I’m not a lawyer. What follows is my take as a journalist and media executive who’s spent two years closely tracking these issues. If you or your organization is considering litigation, or is actively involved in one of these disputes, consult a qualified copyright attorney.
That said, a little background: U.S. copyright law clearly defines “fair use” as a defense to infringement claims. Pretty much all major AI developers rely on it. Whether the defense holds up depends on four factors:
The purpose and character of the use, including whether it’s commercial or noncommercial. Noncommercial uses get more leeway, but there’s no ambiguity here: building AI models is very much a commercial endeavor. Courts also weigh whether the use is “transformative,” meaning it adds something new rather than merely copying.
The nature of the original work. Creative works tend to get more protection than factual ones. AI models train on both.
The amount and substantiality of what was copied. Brief excerpts are usually fine, but training often involves ingesting entire works. Courts have occasionally accepted this, provided the outputs don’t echo the full work or lift large chunks verbatim.
Whether the copying caused harm to the market for the original work, a key issue in many of these cases.
The Anthropic ruling drew some lines between what was OK and what wasn’t. For one, if the books were lawfully purchased, the judge concluded that training an AI model on them fell under fair use. However, if any of those books were pirated, that would constitute infringement. Given that some almost certainly were, Anthropic could still face consequences for using illegally obtained materials (the company is appealing this part of the ruling).
A critical takeaway: this case focused on inputs, not outputs. It addressed whether simply copying a vast number of books is itself a violation—regardless of what the AI eventually produces. The answer, at least in this case, was “no.”
The judge leaned heavily on the landmark 2015 Authors Guild v. Google decision, which held that Google’s digitization of books for its searchable database fell under fair use. The Anthropic ruling extends that thinking into AI, though it’s worth noting that Google Books only ever displayed excerpts—not full books.
That matters. A superficial read of the case might lead one to think that simply purchasing a digital product—say, an annual subscription to The Information—grants a license to do whatever you want with the content. But that’s not the case: buying a subscription permits access and reading, not wholesale copying for repurposing. Courts have not ruled that subscribing grants AI training rights, even if some might assume so.
The missing link in Meta’s case: market harm
The Meta ruling has more to say about that, specifically around the fourth factor: market harm. The judge ruled in Meta’s favor because the plaintiffs—including Sarah Silverman and Ta-Nehisi Coates—couldn’t demonstrate a drop in book sales.
This effectively gives AI developers the green light to train on copyrighted material as long as there’s no provable commercial damage. But the flip side is equally true: showing market harm is a viable legal path for content creators.
That’s exactly what happened earlier this year. In February, Thomson Reuters notched a victory over Ross Intelligence, a now-defunct AI firm. The court rejected Ross’s fair-use defense for using content derived from Reuters’ legal research service, Westlaw. Because Ross’s product competed directly with Westlaw, the court saw clear market harm.
Taken together, these three cases start to form a legal roadmap for publishers looking to challenge AI companies on copyright:
Focus on the outputs. It’s not enough to show your content was scraped. Plaintiffs need to prove the resulting AI outputs resemble their original work. No court has definitively ruled on whether AI-generated content is “transformative,” but courts have held in the past that a copyright violation can occur even when only small parts of a work are copied—if those parts represent the “heart” of the original.
Show economic damage. With mounting data on how AI tools like search engines and chatbots affect news consumption, the case for market harm is more compelling than ever. The licensing deals already struck between media outlets and AI companies also bolster the argument: if a market for licensing content exists, generating outputs without such a deal arguably harms it.
Scrutinize the source. Was the material legally obtained? The Anthropic ruling suggests that if a company scraped content from behind a paywall—without a subscription—that alone may be enough for a claim, even if the output never sees the light of day.
Turning bad news into a better legal strategy
Copyright law in the age of AI is evolving fast, and other pending cases may chart new territory. And there’s always the chance of intervention from regulators or Congress. The Trump administration has been anything but silent: it recently fired the U.S. Copyright Office head, reportedly over the agency’s evolving AI stance, and solicited public feedback on AI policy. Unsurprisingly, OpenAI and Google used that moment to advocate for embedding their interpretation of fair use into law.
For now, though, creators have a clearer sense of how to mount a winning legal argument. These rulings don’t shut the door on copyright lawsuits. But they do show that sweeping claims like “AI steals everything” won’t be enough. Plaintiffs need to show that AI outputs substitute for their work in the market, that financial loss is demonstrable, or that pirated content was used in training. The courts aren’t saying “don’t sue”—they’re outlining how to sue effectively.
A version of this article first appeared in Fast Company.