The risks of oversimplifying AI education
Unchecked analogies about AI and content creation can lead to bad solutions.
You’ve probably heard the Buddhist parable of the blind men and the elephant: each sightless man feels a different part of the elephant, swears his limited, experience-based definition is correct, and suspects that alternative explanations are dishonest.
The story, depicted in my crude yet hopefully cute ChatGPT-generated image, illustrates how subjective experiences don’t always capture the full truth. Stories and analogies are starting points for explaining complicated topics. True analytical depth demands effort, exposure, active listening, and embracing discomfort.
In other words, AI is an elephant of uncertainty, yet most of us are walking around blindfolded, defining the technology by our limited experience. I’m not surprised, because humans abhor the unknown and use communication to reduce those fears. Creating analogies is one way to simplify complicated information, and it provides some “certainty relief.”
But these analogies must be treated as starting points, not end points, because (in the words of all parents ever, “if I’ve said it once, I’ve said it a million times”) inaccurate frames can lead decision-makers to deliver misguided solutions.
During the past year, I’ve heard several analogies and stories used to explain the technology and its use which, like the blind robots feeling the elephant, both help and hurt our understanding. Communicators and educators must pay attention to and challenge these inaccurate frames lest they spur unintended or incomplete solutions.
Here are a few… along with the good and limiting aspects of each.
AI is a Tool
“It’s just a tool,” someone will say to me… frequently with a look of relief on their face, prompted by that uncertainty reduction, because surely we can all agree with this statement. And I understand the relief, because thinking about the potential job loss for content creators is absolutely terrifying.
Good: Positioning AI as a tool combats the idea that humans should rely on AI for creativity. The word “tool” also communicates that humans should have power over generative AI. We are still needed to use these tools.
Limiting: What type of tool? A hammer is a tool, but I wouldn’t use it on glass. Can anyone use these tools? For example, SPSS is a statistics tool I learned during my doctoral program, but trust me, no one wants to rely on me to calculate probability.
Is AI one type of tool or more like a toolbox? If so, should we train students and employees on these tools? What conditions are required for these tools to work effectively? Should everyone have access to these tools?
Did these tools steal from other tools? If I use this tool, will another tool have my data? Will there be a preference among employers for new graduates who already use these tools? (The answer, as of April 2024, is yes.) And if it’s a tool, who is responsible for its use? The content creator? The tech company that programmed the application?
We just use AI for brainstorming
Good: The speaker wants to reassure me that the content they create is generated by a human being, and that I should feel comfortable knowing they would never let an AI dictate content, though ideas from AI might spark creativity.
Limiting: I’m in the “Generative AI Forward” camp, but if a writer tells me they use GenAI for brainstorming, I quickly become concerned. Human-sparked idea generation keeps content fresh and memorable and will likely result in content that stands out in a sea of ChatGPT gobbledygook. For example, I’ve asked students to use generative AI to come up with attention-getters for speeches. At least half of those attempts returned similar, blah “imagine a world” sentences.
I’d rather have a writer tell me they seek inspiration from conversations, their senses, the historical plaques they stop to read, the artwork that caught their eye, the wonder drawn from watching other humans at play, that vibrant dream last night, or observations gathered from stillness.
I feel better if a writer tells me they bring in generative AI after idea generation to speed their ideas into reality. To be clear, I’d have to see the writer’s process, but a writer’s powerful sense of observation is a superpower unmatched by computers. Why give it away?
We use AI to create outlines
Good: I appreciate the reinforcement that generative AI’s role in content creation should be limited. And, starting in middle school, almost everyone (except me and other eager young writer nerds) hates outlining, so it makes emotional sense to assign AI a task that few relish. I also appreciate, and have used, AI to speed responses to established templates.
Limiting: On the other hand… are you kidding me? Outlines deliver the organizational oomph that changes attitudes and behaviors, and a sound organizational pattern makes a message stick. Outlines depend on a review of ye olde rhetorical situation: audience, message, and context. Hint: if you only want to perform, focus on delivery. If you want change, focus on the message and its delivery.
Generative AI could help by prompting variations of outlines, provided a human expert has knowledge of specific patterns for specific audience dispositions in specific contexts at specific times. Furthermore, I find content creators assigning AI to roles they themselves don’t perform; a writer, for example, might say it’s good for video production. I don’t like this game of AI hot potato.
Learning AI Should Be Like Driver’s Ed
Good: Emphasizing that everyone who wants to use this technology requires training is important. Creating a comprehensive and measurable standard through which this learning takes place is also great. Starting generative AI education in the teen years is smart, too, though I’d argue we should pursue age-appropriate education in middle school and the early grades, just as we teach little ones to look both ways before crossing the street.
Comparing AI education to driver’s ed also acknowledges that this technology will be a part of modern life. Unless you’d prefer a horse and buggy, at some point you will have to learn to drive (or hire someone to drive you) to get somewhere inaccessible by public transportation or too far to walk. Similarly, many experts have noted that generative AI will become commonplace in our daily lives and could be a societal equalizer. So, earning your “AI license” could open opportunities.
Limiting: Perhaps comparing AI to driver’s ed is a good introduction to the environmental impact of AI; after all, both cars and generative AI come with a carbon cost. In addition, we generally know how a car works, and in many U.S. cultures, driver’s ed usually comes as a positive rite of passage. Most of us can trust our driver’s ed knowledge, and I can find accurate information about my car online from its manufacturer. But we don’t know much about how tech companies have trained their AI models. Therefore, the information we receive risks algorithmic bias and just plain wrong information, known as hallucinations.
So, what’s the right answer? There isn’t one. Analogies are helpful, but they are a starting point, not an ending point. To draw from another cliché, think of it as a “yes, and” scenario. Yes, AI is a tool, and we need to understand it more. Yes, AI can help with content creation, provided the effort is led by a human. Yes, we need to teach AI, but we need to carefully think out our lesson plan.
After all, the last thing we want is to solidify an analogy in the human consciousness that makes an employer think, “Hmmm… do I really need my human writers?” Let’s take off our blindfolds and accept that this is one complicated elephant with tremendous potential for good and harm.
Need help solving a communication problem? Want to shine during your next presentation? Need a keynote on generative AI? Contact me at eryn@travisnco.com or visit travisnco.com.
"Driver's Ed" is a great analogy. It's here, it's growing and your current mode is becoming increasingly obsolete. If only "AI Ed" was a simple as reading a book and taking a test.
The blindfold on the elephant is the first thing I saw. What a great example of how AI sometimes can't be taught! It reminds me of a child whose parent has told them to do something and they refuse.