The Summer AI Was Born

There’s a story people like to tell about the birth of artificial intelligence: in the summer of 1956, a dozen researchers met for a few weeks at Dartmouth, and somehow, an entire field was born. It sounds suspiciously tidy—a handful of mathematicians walk into a classroom and walk out having invented a new science. But that’s not how things usually happen. Most fields don’t have a birthday. They grow slowly, through arguments and accidents, until one day you look up and realize there’s a new discipline where before there was only a collection of problems.

So what really happened at Dartmouth? The participants didn’t solve intelligence. They didn’t even get very far. But they did something more powerful: they named the problem. John McCarthy, Marvin Minsky, Claude Shannon, and the others took a vague hope—that machines might someday act intelligently—and declared it a mission. They called it “artificial intelligence.” That act of naming was like planting a flag on an unexplored continent. Suddenly, what had been a scattered set of questions—about reasoning, learning, perception—became a field. You could apply for funding. You could start a lab. You could say, “I work on AI,” and people would know what you meant, or at least pretend to.

It’s easy to forget how strange this sounded in the 1950s. Computers were the size of rooms. Most people thought of them as fast calculators, not potential minds. Alan Turing had asked, “Can machines think?” in 1950, but it was still a fringe idea. The Dartmouth group was betting that intelligence was just computation, and that if you programmed a computer the right way, it would behave intelligently. They thought they could make real progress in one summer.

They were wrong, of course. The early prototypes—programs that played checkers or solved puzzles—were trivial compared to real intelligence. But the point wasn’t what they built. The point was that they’d made it official: this was a thing you could work on. The name “artificial intelligence” both inspired people and confused them. Was this really about building minds, or just about making useful programs? Was AI an engineering problem, or a philosophical one? That ambiguity has haunted the field ever since.

If you look at the history of AI, you see cycles of excitement and disappointment: big promises, then hard problems, then decades of bouncing between optimism and disillusion. The original founders underestimated just how much of intelligence comes from the messy business of being human—bodies, culture, common sense. And yet their optimism was necessary. Without it, nobody would have tried.

The real lesson from Dartmouth isn’t that you can invent a field in a summer. It’s that progress often starts when someone has the nerve to ask a big question and give it a name. The founders’ confidence was both naive and essential. Their flag-planting created a home for people who wanted to work on these problems, even if the problems turned out to be far harder than anyone imagined.

The puzzles they posed—what is understanding, really? Can a machine have it, or just fake it?—still haven’t been solved. But that’s not a failure. That’s what makes the field interesting. It’s easy to dismiss the founding myths as exaggeration, but they serve a purpose: they get people moving. The work is always slower and stranger than anyone expects. If you want to start something new, don’t wait for the answers. Just have the guts to name the question.
