What happens when AI trains on fake news?
A report from NewsGuard exposes how far Russia will go to brainwash Western AI—and what the media can do about it.
It always feels like it's been a minute in AI. Didn't we just get a whole bunch of next-generation models, like Claude Sonnet 3.7, Grok 3, and GPT-4.5? That seems like months ago. Much more pressing this week, though, is the long game of AI, which it turns out has an expert player: Russia.
More on Russia's disinformation shenanigans in a minute, but today I'm thrilled to announce the best deal The Media Copilot has offered on its courses since Black Friday. For our upcoming AI for PR & Media Professionals course, you can now buy the whole thing—all six weeks, both coaching sessions, the LinkedIn badge, and capstone project—for 30% off. Just use the code FLASH30 at checkout. But you've gotta move fast—the offer ends Wednesday night.
More detail on the course right here 👇
PR Pros: Master AI and Transform Your Career
A major AI skills gap is forming in PR—75% of PR professionals regularly use generative AI, yet only 38% of companies have clear usage policies. Stand out by mastering AI strategically with AI for PR & Media Professionals, a six-week live course starting March 18.
You'll learn to:
Create custom AI assistants for workflow automation.
Use Perplexity’s Deep Research for rapid media list creation.
Develop AI-powered systems for crisis management and trend monitoring.
Led by industry experts Pete Pachal (The Media Copilot), Peter Bittner (UC Berkeley), and Kris Krüg (The Upgrade AI).
🔥 FLASH SALE: Save 30% until Wednesday night. Use code FLASH30 to enroll for just $1,049 (regularly $1,499).
Don't miss your chance—offer ends soon.
Russia’s invisible campaign to contaminate AI answers with propaganda
One of the more disturbing stories in AI and media from the past little while is this report from NewsGuard, which highlights just how much a determined and well-funded actor can influence the information coming out of AI search engines and chatbots. According to the study, a network of websites under the Pravda banner (Pravda is the state-sanctioned media in Russia) has been deliberately flooding the internet with disinformation in an effort to ensure pro-Russia viewpoints gain prominence in AI services like Perplexity and Microsoft Copilot.
This problem, called LLM grooming, is a particularly vexing one. General-purpose large language models benefit from ingesting vast amounts of data—these days, essentially the entire internet. The Pravda-spearheaded campaign, known as Portal Kombat, shows how a determined actor can corrupt that data to the point where large language models respond to queries by presenting fake news and propaganda as fact and context.
I often warn in the AI classes I teach that AI doesn't index for the truth—it just tries to satisfy your prompt. But as a practical matter, most applications of generative AI depend on the accuracy of their outputs. The entire AI community has an interest in weeding out this kind of informational hijacking, but preventing it will require sustained cooperation between AI providers and the media.
Fake news built to mislead AI
The reason this kind of campaign is so effective, and preventing it so difficult, is that Portal Kombat goes way beyond posting stories favorable to Russia on Pravda-controlled websites. First launched around the start of the war in Ukraine, the campaign has spread across many different domains (not just ".ru" addresses) in dozens of countries.