Today’s Chicago Sun-Times gaffe shows the world how AI chatbots actually work
The “Summer reading list” published in today’s Chicago Sun-Times contained books that don’t exist.
May 20, 2025 — Here at TCAI we often get back to basics when talking about artificial intelligence and large language model (LLM) chatbots like ChatGPT, Grok, and Copilot.
AI chatbots are not sources of accuracy or truth. They are highly sophisticated pattern-matching machines, trained to predict the next word based on the regularities, trends, and recurring structures in their training data.
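To make that concrete, here is a deliberately tiny sketch of the core loop inside an LLM: score every possible next word, convert the scores into probabilities, and sample one. This is an illustrative toy, not any vendor's actual code; the vocabulary and scores below are invented for the example.

```python
import math
import random

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy "model": hand-picked scores for what might follow
# "Canada's population is approximately ..."
# In a real LLM these scores come from billions of learned weights.
vocab  = ["40.1 million", "38 million", "400 million", "purple"]
scores = [4.0, 3.2, 0.5, -3.0]

probs = softmax(scores)
next_word = random.choices(vocab, weights=probs, k=1)[0]

for word, p in zip(vocab, probs):
    print(f"{word:>12}: {p:.3f}")
print("sampled continuation:", next_word)
```

Note that the output is sampled, not looked up: the machine picks a statistically likely continuation, and nothing in this loop checks whether the chosen words are true.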
Usually an AI chatbot will answer a prompt with something reasonably close to an accurate fact. (Q: What is the population of Canada? A: “As of May 19, 2025, Canada's population is estimated at approximately 40.1 million people.”) But it's not uncommon for a chatbot to produce an answer that closely resembles the patterns it finds in its data yet is not, in fact, accurate at all.
In some cases an AI chatbot will just make stuff up. These instances are called hallucinations. The problem is, there’s no way to know when a chatbot is hallucinating and when it’s providing a factually accurate response.
Oh, dear: today’s Chicago Sun-Times
Editors at the Chicago Sun-Times were taught this hard truth today.
The May 20, 2025 edition of the venerable Windy City daily contained a “Summer reading list for 2025” as part of an innocuous summer preview insert. It’s the kind of thing most writers can knock out in their sleep. Unfortunately, it appears (though we’re not certain) that the creator of this list may have relied on an AI chatbot to produce it. The resulting list contained:
Tidewater Dreams by Isabel Allende (does not exist)
The Last Algorithm by Andy Weir (does not exist)
Hurricane Season by Brit Bennett (does not exist)
And those were just the top three titles. The list continues…
Not every title was hallucinated. Dandelion Wine is a real Ray Bradbury title. Atonement is one of Ian McEwan’s best.
And herein lies the challenge with large language models. They probabilistically mix factually accurate outputs with made-up ones, because to the machine the patterns make equal sense in both cases.
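A hypothetical illustration of the point (the scoring function and word set below are invented for this sketch): a system that judges text only by how well it matches familiar patterns can rate a fabricated title just as highly as a real one.

```python
# Toy plausibility scorer: counts how many words in a candidate
# title appear in a tiny, invented set of words the "model" has
# seen associated with an author. Real LLMs learn far richer
# patterns, but the failure mode is the same: pattern fit is
# not the same thing as truth.
ALLENDE_PATTERNS = {"house", "spirits", "island", "sea", "dreams",
                    "tidewater", "eva", "luna", "beneath"}

def plausibility(title):
    words = title.lower().split()
    return sum(w in ALLENDE_PATTERNS for w in words) / len(words)

real = "The House of the Spirits"   # actual Allende novel
fake = "Tidewater Dreams"           # hallucinated title

print(f"real title score: {plausibility(real):.2f}")  # 0.40
print(f"fake title score: {plausibility(fake):.2f}")  # 1.00
# Nothing in the score signals that the second book doesn't exist.
```

In this toy, the fabricated title actually scores higher than the real one, because every word in it fits the pattern. That is the hallucination problem in miniature: pattern plausibility and factual existence are independent properties.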
editors say: not a ‘newsroom’ product
Sun-Times editors spent their morning on clean-up duty. A Bluesky post at 7:19am from the official Chicago Sun-Times account said the list was not editorial content and was not created or approved by the Sun-Times newsroom.
transparency and disclosure would have helped
Our mission here at the Transparency Coalition is to inform the public about how artificial intelligence works, and to champion policies that ensure AI technologies are developed and used in ways that prioritize safety, transparency, and the public good.
Today’s Sun-Times error illustrates why we continue to push for increased transparency in the training data used to develop generative AI models.
That work is ongoing right now in state legislatures across America. With the level of transparency called for in bills such as California’s AB 412, generative AI model developers will have to exercise an appropriate duty of care to curate and catalog the content, especially copyrighted content, used to train their models. Under the bill’s provisions, copyright owners will be able to verify that use and ask either for the removal of their content or for appropriate licensing. This will inevitably lead to smaller, more curated models, which have been shown to hallucinate less.
We’re not hoping that AI fails. We’re not celebrating today’s Sun-Times mess. We’re taking it as a moment to widen the discussion about how AI operates, what it can and can’t do, and how it can be better harnessed as a tool to improve the human condition.