The Media Copilot

How to Set up a Local LLM on your Laptop — and Why You Should

Running AI models locally guarantees privacy for sensitive information and protects your sources.

Christopher Allbritton
Dec 18, 2024
Running an LLM locally will let you double down on data privacy, and it isn’t as hard as it sounds (Credit: Midjourney)

There’s little doubt that generative AI built on Large Language Models (LLMs) has changed how journalists and other media professionals work. Suddenly, we have writing assistants, a way to sift through huge amounts of data for insights and story angles, a chat buddy to bounce ideas off of, and an expert proofreader.

But most people interact with AI via an online chatbot: ChatGPT, Gemini, Claude.ai, Perplexity, you name it. The downside of this choice is that, well, they’re online. What if there are times when you don’t want your work stored in the cloud for some AI company to chew on?

I’m going to tell you how to set up your own LLM on your local machine and, most importantly, why and when you should want to. 

Why You Should Consider Using AI Locally

Whenever you interact with an online LLM, you’re sending data to its servers, where it may be stored and used to improve the model. And while it’s exceedingly rare, data leaks have occurred, with personally identifiable information exposed to unauthorized third parties.

The data you upload could identify sources, or be private information that you don’t want officials to know you have until you’re ready to publish. The risk of any of that being exposed could not only spoil your exclusive — it might put you or your sources in jeopardy of criminal prosecution.

That’s why some newsrooms and government agencies explicitly forbid using cloud-based LLMs like ChatGPT or Gemini with sensitive data. The risk is high and safeguards for the data are opaque. But by running an LLM locally, all your data stays on your machine. This ensures that sensitive information never leaves your control, which can be critical for investigative journalism. 
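As a concrete illustration of what "stays on your machine" means in practice, here is a minimal sketch that queries a model served by Ollama, one popular tool for running LLMs locally, which by default listens only on localhost. The model name "llama3.2" and the endpoint behavior are assumptions based on Ollama's defaults; any local runner with an HTTP API would work the same way.

```python
import json
from urllib import request

# Ollama's default local endpoint; the server runs entirely on your machine.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model: str, prompt: str) -> dict:
    """Assemble the JSON body Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask_local_llm(prompt: str, model: str = "llama3.2") -> str:
    """POST the prompt to the local server; nothing is sent to a vendor's cloud."""
    body = json.dumps(build_payload(model, prompt)).encode()
    req = request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the request goes to localhost rather than an AI company's servers, the prompt, and anything sensitive inside it, never leaves your laptop.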

Another reason to run an LLM locally is if you don’t have reliable internet or the internet is monitored or restricted. Let’s say you’re investigating human rights abuses in Myanmar, and a source hands you spreadsheets with records of war crimes. This is an excellent use of the power of an LLM to tease out insights and find patterns. But there’s a good chance the Myanmar junta is monitoring the internet or outright blocking access to ChatGPT. Running an LLM locally, even if it’s not as high-powered as an online AI, can still let you manipulate the data with generative AI. 
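To make that spreadsheet scenario concrete, here is a hedged sketch of the same idea: flatten the first rows of a CSV into a plain-text prompt and ask a locally running model to look for patterns. It again assumes an Ollama server on its default port and a placeholder model name; the helper names are mine, not part of any library.

```python
import csv
import io
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # local-only endpoint


def rows_to_text(csv_text: str, max_rows: int = 50) -> str:
    """Flatten the first max_rows of a CSV into prompt-friendly lines."""
    reader = csv.reader(io.StringIO(csv_text))
    return "\n".join(", ".join(row) for i, row in enumerate(reader) if i < max_rows)


def analyze_locally(csv_text: str, question: str, model: str = "llama3.2") -> str:
    """Send the data plus a question to the local model; nothing goes online."""
    prompt = f"{question}\n\nData:\n{rows_to_text(csv_text)}"
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Even with the internet blocked or monitored, this loop runs entirely offline once the model weights are downloaded.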

With a little programming experience, a local LLM can be customized to perform only the functions you need. Don’t need image generation? No problem; save valuable storage space and computing power for trawling through those PDFs instead.

© 2025 AnyWho Media LLC