Teaching AI Literacy

People keep talking about AI literacy, but I don’t think most of us know what that actually means. On the surface, it sounds like another tech skill: explain what AI is, how it works, what it can do. But AI isn’t just another tool like a spreadsheet. It’s not even like the web, which mostly followed rules you could learn. AI is more like an organism that keeps mutating. Teaching someone to understand it is like trying to teach them to swim while the water keeps changing into new shapes.
The core problem is that AI doesn’t just do what you tell it. It generates things you didn’t ask for, or didn’t expect. Sometimes it’s creative; sometimes it’s weirdly wrong. So you can’t just memorize a list of features and call it a day. Teaching AI literacy isn’t about facts, because the facts keep changing. It’s about preparing people for a moving target.
There’s another complication: AI literacy isn’t just technical. Sure, you need to know what a neural network is, or what “training data” means. But that’s just the scaffolding. The real issue is how these systems reflect the world—sometimes amplifying its biases, sometimes changing how we work or create. If you’re teaching a high schooler about language models, it’s not enough to say, “They predict text.” You have to talk about how they might reinforce stereotypes, or what it means when a machine can write a poem or answer an essay question. It’s less like teaching someone to use a tool and more like teaching them to read critically. Not just what the words say, but what they mean, who wrote them, and why.
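One way to make that concrete in a classroom, if you'll forgive a sketch: a toy next-word predictor in a few lines of Python. This is not how real language models work internally, and the tiny corpus below is invented for illustration, but it shows the core idea of predicting text from patterns in training data, and how a skewed corpus comes straight back out as a skewed prediction.

```python
# A toy next-word predictor: count which word follows each two-word
# context in a tiny training corpus, then predict by picking the most
# common one. (Deliberately simplified; the corpus is invented.)
from collections import Counter, defaultdict

corpus = (
    "the doctor said he was busy . "
    "the doctor said he would call back . "
    "the nurse said she was busy . "
    "the nurse said she would help ."
).split()

# Count how often each word follows each two-word context.
follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1

def predict(a, b):
    """Return the most likely next word after the context (a, b)."""
    return follows[(a, b)].most_common(1)[0][0]

print(predict("doctor", "said"))  # -> 'he': the skew in the training data...
print(predict("nurse", "said"))   # -> 'she': ...comes straight back out
```

Students can edit the corpus, rerun it, and watch the "model's" worldview change with its training data. That's the lesson in miniature: the output tells you as much about who wrote the text as about the world.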
There’s a paradox here. The easier AI gets to use, the less people understand about how it works. Calculators used to be magic; now they’re invisible. Most people don’t know how they work and don’t care. That’s fine for arithmetic, but AI is different. If you don’t know what AI can and can’t do, you can end up trusting it when you shouldn’t, or missing opportunities you could have taken. But if you focus only on the dangers, you just make people afraid. The real trick is to teach both skepticism and curiosity. Neither blind faith nor knee-jerk cynicism.
Some teachers already let students use chatbots to brainstorm or write drafts. That's probably a good thing, as long as you also talk about what the model is actually doing. It turns the AI from a black box into something you can poke at. But it also raises questions: Are students just outsourcing their thinking, or are they learning new ways to think? It's like the old debates about calculators, but the stakes are higher, because the boundary between tool and thinker is blurrier.
Then there’s the issue of access. Not every school can afford the latest tech, or teachers who understand it. If only some kids learn how AI works, the gap between the haves and have-nots gets wider. And if we only teach AI from one perspective—usually Silicon Valley’s—we risk missing how it affects different cultures or communities. True literacy means not just knowing how to use AI, but knowing who benefits, who doesn’t, and why.
Getting this right matters. If we teach people well, they’ll be able to adapt as AI changes, instead of getting left behind. If we get it wrong, we’ll have a generation of people who either trust AI too much or are afraid of it for the wrong reasons. The only way I see to keep up is to teach people how to ask good questions, how to adapt, and how to learn as the technology changes. That’s the real skill: meta-literacy, not just about today’s AI, but about learning itself.
So the problem of AI literacy is really the old problem of education: how do you prepare people for a world you can’t predict? The answer isn’t a fixed curriculum. It’s teaching people to be curious, skeptical, adaptable, and above all, willing to keep learning. AI will keep changing, but that part won’t.