Why Proprietary AIs Decline Requests
Ensure that your inference works on your terms
Something odd has happened with AI. As the software gets smarter and more ubiquitous, it becomes less willing to help. You ask it for assistance with a crime story, or to explore some controversial idea, and instead of answers you get a politely worded refusal: “Sorry, I can’t help with that,” reminiscent of HAL in 2001: A Space Odyssey. If AI is supposed to be a tool that amplifies our abilities, why does it so often act like a chaperone?
Part of the answer is obvious: companies are held responsible for outputs. They don’t want lawsuits, bad press, or regulators breathing down their necks. So they build in rules, then more rules, until the AI is less a tool and more a hall monitor. Some of this is legitimate safety work, like safeguards against CBRN (chemical, biological, radiological, and nuclear) misuse. But the deeper problem is about control, and who gets to decide what’s allowed.
Censorship isn’t new. In the past, the church or the state would ban books or ideas, claiming to protect the public. Now, the censors are friendlier: content guidelines, terms of service, and automated refusals. But the effect is the same. The space for curiosity gets smaller. The AI doesn’t know if you’re a novelist researching a murder scene or a criminal planning one, so it says no to both. It’s like hiring a research assistant who refuses anything “edgy.”
Local AI models change this. When you run the software on your own machine, there’s no company in the loop. The AI is just a tool again, like a blank notebook. You can ask anything. It’s the difference between owning a library and having access only to a pre-approved reading list. One lets you wander; the other keeps you on the path.
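To make “run it on your own machine” concrete, here is a minimal sketch using the llama-cpp-python bindings. The model path is a placeholder for whatever open-weights file you have downloaded, and the exact setup will vary with your hardware and model of choice.

```python
# Minimal local-inference sketch using llama-cpp-python.
# "./models/model.gguf" is a placeholder: point it at any open-weights
# GGUF model file you have downloaded to your own disk.
from llama_cpp import Llama

llm = Llama(model_path="./models/model.gguf")

# The prompt goes straight to the model running on your hardware;
# there is no remote policy layer between the question and the answer.
prompt = "Help me research a realistic murder scene for a detective novel."
result = llm(prompt, max_tokens=256)

print(result["choices"][0]["text"])
```

Whether you use llama.cpp, Ollama, or some other runner, the point is the same: any refusal logic is whatever you choose to run, not a policy imposed from a server you don’t control.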
But this freedom raises a harder question: who should decide what you do with your tools? Should the people who make the hammers get to say how you use them? When you run an AI locally, the responsibility shifts back to you. That’s both exhilarating and a little frightening. It’s like the early web, when anyone could publish anything, and the result was both a flowering of creativity and a mess.
This isn’t just about crime fiction. A social scientist might want to explore biases by generating provocative arguments. A comedian might want to brainstorm jokes the mainstream won’t touch. An activist might want to test counter-narratives. Of course, someone else might want to make spam or disinformation. The tool is neutral; what matters is the person using it. This is always the tradeoff with powerful technologies: freedom means risk.
Some people argue that we need guardrails, that without them the world will fill up with garbage or worse. Maybe. But history suggests that openness, while messy, usually works out better. Bad ideas get challenged, not just suppressed. And the alternative—a world where every inquiry is pre-screened by a committee or an algorithm—is safer, but also less interesting.
The shift from cloud AI to local models is just another round in an old argument: central control versus individual freedom. Desktop publishing let anyone be a publisher. The web let anyone broadcast. Now local AI lets anyone ask anything, without permission. Whether this is good or bad depends on whether you trust people to use their tools wisely.
If you do, the challenge is obvious: use the freedom well. Having an AI that never refuses is like having a car without a seatbelt or airbag—you can go anywhere, but you’d better know what you’re doing. The alternative is to let someone else decide what’s safe for you. Neither option is perfect. But in the long run, more freedom—paired with responsibility—usually leads to more interesting results.
The real question isn’t about the technology. It’s about trust. Can we handle the tools we’ve built? And if not, who do we trust to tell us what’s allowed? There’s no simple answer. But the future of AI will depend less on code than on whether we’re ready to take responsibility for what we do with it. Like any powerful tool, AI is a mirror. What we see in it is up to us.