The Terror of Hallucinations
Even if generative text gets reliable, is the stigma too great for newsrooms to use it?
In almost any conversation about applying generative AI in the newsroom, you run into the issue of hallucinations. It seems that no matter how sophisticated your model is, how extensive — or small — the training data may be, or how short the actual output ends up being, there's always a risk your AI will make something up.