The Seasons of AI

If you’ve been around tech for a while, AI’s history starts to feel like a rerun. Every decade or so, there’s a new wave of excitement: This time, machines are really going to be smart. Funding pours in. Journalists write breathless stories. Then, almost on cue, reality sets in. The money dries up. Researchers go back to their labs. The cycle repeats.

It’s tempting to write this off as investor gullibility or media hype. But the regularity of these cycles suggests something deeper. The trouble isn’t just with the people funding AI; it’s with the nature of intelligence itself.

With most technologies, progress is obvious. You make a rocket go faster, you see it go faster. But intelligence is slippery. In the early days, getting a computer to play chess or do algebra seemed like a clear sign of intelligence. But as soon as those problems were solved, people moved the goalposts. “That’s not real intelligence,” they’d say. This is the “AI effect”—once a machine can do something, we decide it doesn’t count. Intelligence is always just out of reach, like a horizon that recedes as you approach.

This is why AI progress feels so uneven. Each new technical trick—symbolic logic, expert systems, neural nets—sparks a round of optimism. The ideas are real, but the expectations are inflated. When the limits appear, disappointment sets in. The field cools off until the next breakthrough.

If you look at the history, you see this pattern over and over. The 1950s and ’60s had symbolic AI: just write enough rules and the machine will think. But real-world problems proved too messy. In the 1980s, expert systems promised to capture human expertise, but they turned out to be brittle and hard to maintain. Neural networks had a brief moment, then faded when computers weren’t fast enough and data was scarce. Each time, the underlying insight was sound, but people overestimated how quickly it would lead to general intelligence.

Now we have deep learning and large language models. They’re undeniably useful—translation, code, even poetry. But you can already see the old questions resurfacing. Are these systems actually understanding, or just remixing patterns? Will the rapid progress continue, or will we hit another wall—reasoning, common sense, something else? Will these technologies start to take on a life of their own? History suggests that every boom contains the seeds of its own bust.

But the busts aren’t just failures. In the quiet years, important things happen. During the “AI winter” of the 1990s, work on probabilistic reasoning and neural nets laid the groundwork for today’s breakthroughs. The hype comes and goes, but the underlying research accumulates. The real problem is that funding and attention follow the hype cycle, not the slow, steady pace of real discovery.

There’s a human cost to this. Researchers see their careers rise and fall with the seasons. Some become stars during the booms; others labor in obscurity, only to be proven right decades later. For students, it’s not just a question of which problem to work on, but whether they can stick it out when the spotlight moves on.

So can we break the cycle? Maybe not. Part of the problem is that intelligence is such a loaded concept. Progress in AI feels like progress toward understanding ourselves. That makes people excited, and excitement breeds overreach. Maybe we need the booms to get anything done at all. Or maybe we’d be better off with less drama and more patience.

Either way, the cycle seems built in. Each summer brings new hope and new mistakes. Each winter brings reflection and slow progress. Over time, the field advances—not in a straight line, but in fits and starts. That’s not a flaw, just how exploration works when the territory is unknown. The important thing is to keep going, even when the crowd moves on.
