Factoblunderism: Understanding why Google's AI recommended eating rocks
Toddlers can teach us a key aspect of AI literacy: the risks of using LLMs such as ChatGPT in the same way as a traditional Google search
ICYMI, Google’s AI Overview recently recommended that we eat at least one rock per day. It was a great AI goof and a perfect opportunity to teach the Russian Roulette-esque content risk of using an LLM such as ChatGPT the same way we use Google Search to find information. Put more simply, I call this risk factoblunderism.
Yet truly teaching factoblunderism, which starts with the distinction between large language models (LLMs) and search, is complicated and involves terms like autoregressive language modeling, transformer architecture, next-token prediction and veridistortionics.
Actually no, veridistortionics is a fake word I asked ChatGPT to create to explain its challenges with language. A computer scientist probably caught me right away.
However, if your mental model does not include technical Generative AI terms, you are more likely to accept my guess, based on what I associate with a complicated AI term, as the truth, especially if: 1) the content seems “close enough” to what you think it would be; 2) you don’t have a particular motivation (nor should you in this case) to fact-check; and 3) you consider me a friendly, authoritative, trustworthy source.
Similarly, factoblunderism is a term I asked ChatGPT to create to explain Gen AI’s confident expression of factual errors, which occur because LLMs use association and prediction to generate content. In other words, ChatGPT does not “know stuff”; it’s guessing based on associations and predictions.
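For readers who want to see the mechanics, here is a minimal, deliberately oversimplified sketch in Python of what “guessing by association” looks like. To be clear: this toy bigram counter is nothing like the transformer architecture behind ChatGPT, and the tiny corpus, the follower_counts table and the predict_next function are invented purely for illustration. What it shows is the habit that matters: the model always returns its strongest association, with the same confidence, whether or not that association reflects reality.

```python
from collections import defaultdict

# Toy "association" model: count which word follows which in a tiny corpus,
# then always predict the most frequent follower. This is a deliberately
# simplified stand-in for next-token prediction. Real LLMs use transformer
# networks trained on enormous corpora, but the spirit of the failure mode
# is the same: the model returns its strongest association, not a checked fact.

corpus = (
    "we ate dinner before vacation . "
    "we ate dinner before vacation . "
    "we ate dinner at the restaurant ."
).split()

# Count how often each word follows each other word,
# e.g. follower_counts["dinner"]["before"] == 2.
follower_counts = defaultdict(lambda: defaultdict(int))
for current_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most strongly associated next word, stated with full
    confidence even when the association is thin or misleading."""
    followers = follower_counts.get(word)
    if not followers:
        return "<no association>"
    return max(followers, key=followers.get)

print(predict_next("before"))  # -> "vacation": the strongest association wins
print(predict_next("dinner"))  # -> "before": association, not knowledge
```

The toy model answers instantly and never says “I’m not sure”; it just surfaces whatever association is strongest in its data, which is the core habit factoblunderism names.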
Zia Hassan, an AI and education expert and dynamic keynote speaker, taught me a smart, simple, audience-centered analogy for beginning to explain this risk at the recent 2024 Ragan News Media conference. Hassan explained that toddlers are great examples of individuals who come up with wrong (but cute) answers because they predict language via association.
For example, when my son Andrew was three, he learned to call his favorite toy a “truck” (correct), his favorite fruit “apple” (correct), and his favorite restaurant “vacation.”
No, we never spent the night at the old Landmark in West Chester borough. However, we did eat there one night before we went on vacation. Andrew thus associated his favorite cheeseburger-and-fries dinner with “vacation,” which, for him, became the name of the restaurant. So, if someone overheard him say, “Mommy, can we go to vacation?” they may have thought, “Awww…he wants to go on a trip.” But I knew that he was hungry.
Toddlers, Hassan explained, eventually grow out of this phase. LLMs do not. Yet.*
To be fair, LLMs deliver good guesses most of the time, and I’m not super worried about people eating rocks because Google said it was cool. We can catch that one because, hopefully, “rocks aren’t food” is well established in our human mental roadmap.
I’m more worried about the flood of incorrect nuanced information, especially the Gen AI guesses that subtly exacerbate stigma or bias, which can lead to real-world consequences.
As such, I hereby coin and contribute the definition of factoblunderism to the AI literacy and communication media literature.
Factoblunderism:
Builds on the great scholar Stephen Colbert’s concept of truthiness but adds the interplay of meaning creation between humans and LLMs.
Occurs when a well-intentioned but only somewhat informed human attempts to find more information on a topic via an LLM in the same manner that he or she would use Google.
The content returned by the LLM is mostly correct, with a few flaws based on incorrect predictions and associations. These are not “eat rocks”-level, easy-to-catch flaws but rather subtleties with real-world “creation of meaning” consequences. However, because the content generated via association (like Andrew and “vacation”) fits the mental model of the human seeking the answer, the returned content is quickly accepted and perceived as truth. Furthermore, since the information is presented with a tone that inspires confidence, friendliness and authority, the human is more likely to trust the response.
The human is not fully motivated to fact-check all of the content…and perhaps for an understandable reason. For example, a college student working two jobs is trying to understand a complicated topic on an issue outside of her major, or a comms professional juggling an unrealistic workload is up against a tight deadline and is looking for a new way to explain an unfamiliar topic.
The humans in question have been exposed to a barrage of marketing and sales language from tech companies insisting that these platforms are “revolutionary,” “groundbreaking,” “disruptive,” “easy-to-use” and will “make life easier.”
The human did not purposefully set out to create or spread mis-, dis- or malinformation.
Unfortunately, the consequences of factoblunderism range from minor, embarrassing but short-lived mistakes to the types of entrenched ideas that change society for the worse.
Hopefully, factoblunderism is a good starting point, but the topic is more complicated because Generative AI also has tremendous potential to change society for the better. But, for that change to occur, we need more AI literacy.
As Dr. Ayanna Howard, the Dean of Engineering at The Ohio State University, told Senators this week:
We can no longer sit by and not have an unprecedented investment in expanding AI training and literacy, starting from early education through upskilling of the current workforce.
But, until we get there…we have factoblunderism (Travis, 2024) and the ongoing association mystery of why three-year-old Andrew called the bright red kitchen tool that quells kitchen flames the fire “McTinguisher.”
*”Yet” is an important word for anyone who writes about Generative AI. From what I can tell — and please remember I am a comms/media person, not a tech person — the idea is that these LLMs will get better at search…and lots of people think Perplexity can be a more trusted AI search…but it seems like we aren’t there yet. Ask me next week, as this moves fast.