Wittgenstein in 2025

There’s a question I keep circling around: What’s really happening when a large language model, like GPT, talks to us? It can quote poetry or diagnose code bugs, but there’s no mind behind the words. It’s just a program predicting the next token. But it feels like more. Why?
Wittgenstein had a way of thinking about language that cuts to the heart of this. For him, meaning wasn’t in the words themselves, but in how we use them—in the games we play with language. When a child learns the word “dog,” it’s not just about the sound or the spelling. It’s about pointing, petting, being licked, being scared, being comforted. The word gets its meaning from the mess of life around it.
Wittgenstein’s idea of a language-game in Philosophical Investigations isn’t just about words—it’s about what we’re doing when we use them. Language isn’t a code we translate in our heads; it’s more like a toolset we use in the middle of some activity. He gives examples: giving orders, describing something, reporting a fact. You don’t understand the word until you see the game it’s part of. Take “Water!”—depending on the situation, it might mean “bring some,” “look out,” or “I’m thirsty.” The point is, meaning doesn’t live inside the word. It lives in what you’re doing with it.
But when an LLM spits out “dog,” it’s just picking the word that statistically fits best. There’s no pointing, no petting, no world. The model is playing the game in a very narrow sense: it can produce the right move, but it doesn’t know what the board looks like.
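To make that concrete, here is a toy sketch of what "picking the word that statistically fits best" amounts to. The probability table and function names below are invented for illustration; a real model computes these numbers with a neural network over a vocabulary of tens of thousands of tokens, but the shape of the decision is the same: a probability distribution over possible next words, and nothing beyond it.

# A toy illustration of next-token prediction. The hand-written table
# stands in for a trained model; only the numbers drive the choice.

# Hypothetical bigram probabilities, invented for this example.
NEXT_WORD_PROBS = {
    "the": {"dog": 0.4, "cat": 0.3, "water": 0.3},
    "dog": {"barked": 0.6, "ran": 0.4},
}

def predict_next(word: str) -> str:
    """Return the statistically most likely next word given the previous one."""
    candidates = NEXT_WORD_PROBS.get(word, {})
    # The "choice" is nothing more than an argmax over probabilities:
    # no pointing, no petting, no world behind the word.
    return max(candidates, key=candidates.get) if candidates else "<end>"

print(predict_next("the"))  # -> "dog", chosen only because the numbers say so

The point of the sketch is what is missing from it: nothing in the table knows what a dog is, only which words tend to follow which.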
You can see this most clearly when the model tries to explain a joke. It can lay out the mechanics—the pun, the twist—but does it get the laugh? Or in a conversation about grief, it can generate the right phrases, but there’s no feeling behind them. It’s like watching a puppet move its mouth in perfect sync to a song it’s never heard.
Some people argue that if the imitation is good enough, it doesn’t matter. Most social interactions are formulaic anyway. Maybe that’s true for some things. But Wittgenstein’s point was that language games are anchored in life. When a doctor comforts a patient, or friends share an inside joke, it isn’t just the words; there’s a whole history and intention behind them. The AI has none of this. It’s a map with no territory.
You might say meaning is about effects. If an LLM can comfort, inform, amuse, is that enough? Maybe. But there’s a difference between a child learning words and an LLM mimicking them. The child is in the world, learning by living. The LLM is outside, forever guessing the next word.
This isn’t just a philosophical curiosity. It matters for how we think about AI. Is it enough for a model to generate plausible outputs, or does real understanding require a connection to the world? If LLMs are just surface echoes, can we trust them, or even interpret their responses in human terms?
Maybe future AI will close the gap by connecting language to perception, action, or even feeling. Or maybe imitation will always be just that: mimicry without meaning. Wittgenstein reminds us that language is more than rules and outputs—it’s a living, human activity. Until AI shares in that, it can play the game, but it isn’t really a player. The difference between performing the pattern and grasping its purpose may be where the real frontier lies.