Why Constraints Matter

People tend to think progress comes from better tools. More memory, faster processors, bigger everything. It’s intuitive: if you want to go faster, get a faster car. But in practice, the most interesting work often comes not from abundance, but from constraint.

This isn’t just a romantic notion about “doing more with less.” It’s practical. The earliest AI researchers had to work with computers that, by today’s standards, were toys. They wrote code for machines with less RAM than a modern toaster. But because they had so little, they had to be clever. They invented algorithms that squeezed the most out of every byte. They had no choice but to understand their problems deeply, because they couldn’t afford to brute-force their way through.

Fast forward to now. If you want to train a neural network, you can buy a graphics card with more compute than the entire Apollo program. The hard part isn’t finding power—it’s choosing which of a dozen GPUs to buy. The temptation is to throw resources at problems, to keep stacking hardware until the problem yields. But oddly, this often leads to worse solutions. When it’s easy to try things, you don’t have to think as hard. If you can brute-force it, you probably will.

This isn’t a new pattern. It shows up everywhere. Good writers often favor small, plain vocabularies. Good programmers often write the least code that solves the problem. Limitations force you to focus on what matters. They force you to understand the core of the problem, rather than papering over it with resources.

It’s easy to believe that more power means more progress. But real progress usually happens when someone is forced to be clever. The constraint is what makes the solution interesting. If you want to do good work, try working with less. You might be surprised by how much it helps.
