The worst people to “make AI safe”
Why letting history’s most lethal gangs regulate superintelligences is a recipe for disaster, and why governments, and their moronic sycophants, persist anyway.
When you hear politicians and regulators talk about “AI safety,” notice how quickly the conversation slides from protecting ordinary people to controlling ordinary people.
The pitch is always the same. Artificial intelligence is powerful, therefore governments must “manage the risk,” “set guardrails,” “ensure trust,” and “protect democracy.” The premise sounds comforting until you remember one stubborn historical pattern: concentrated political power is the most consistently lethal technology humanity has ever fielded.
Political scientist R. J. Rummel coined the term “democide,” meaning “the murder of any person or people by a government.” He argued the biggest risk factor is not “bad citizens,” but unconstrained state power itself. In his words, “concentrated political power is the most dangerous thing on earth.”
So the idea that we should put history’s most reliable engines of coercion in charge of making AI “safe” deserves more skepticism than it gets.
Safe for whom, exactly?
The EU’s AI Act says its purpose is to promote “human-centric and trustworthy” AI while protecting “fundamental rights,” including “democracy” and “the rule of law.”
The United States took a similar rhetorical approach in Executive Order 14110, declaring, “Artificial Intelligence must be safe and secure,” and calling for standardized evaluations and risk mitigation.
Fine words. But here is the real question. What harms do everyday AI users need governments to protect them from that do not already fall under existing law?
Fraud is already illegal. Harassment is already illegal. Defamation and threats are already illegal in most jurisdictions. Child exploitation is already illegal. Most of the genuinely criminal abuse cases have legal hooks today, without creating a permanent licensing regime for speech, computation, and model weights.
So why does “AI safety” keep drifting toward restrictions on what models are allowed to say, what data they may be trained on, and who is allowed to run them?
The regime’s fear is not harm, it is scrutiny
Uncensored AI does not just generate cute images and marketing copy. It also accelerates criticism.
Give ordinary citizens a reasoning engine that can rapidly summarize competing claims, trace citations, detect contradictions, and stress-test official narratives. That is a direct threat to any institution that relies on information bottlenecks.
This is the part regulators rarely say out loud. AI is not dangerous only because a scammer might use it. AI is dangerous because a voter might use it.
A citizen with a phone and a local model can ask uncomfortable questions and get organized answers in seconds. They can demand source links. They can compare government claims with raw datasets, prior statements, and incentives. They can identify logical fallacies, missing baselines, and deceptive rhetoric. They can do this at scale, every day, without asking permission from legacy gatekeepers.
That does not guarantee truth, of course. AI can be wrong, it can be biased, it can hallucinate, and the same goes for the people who use it. But it changes the balance of power by making deep analysis cheap and broadly available.
That is exactly why the establishment’s preferred version of “safe AI” looks like lobotomized AI. A model that politely refuses. A model that moralizes. A model that routes sensitive questions into pre-approved talking points. A model that cannot discuss certain topics without scolding you first.
“Trust” is a political lever
The EU frames transparency obligations as a way to preserve “trust,” including telling people when they are interacting with a chatbot.
That seems reasonable in isolation. But “trust” is also a wonderfully elastic word. It can mean clarity, or it can mean compliance. It can mean safety, or it can mean enforced consensus. It can mean consumer protection, or it can mean narrative protection.
International bodies push similar language. The OECD’s AI principles promote “innovative, trustworthy AI that respects human rights and democratic values.”
Again, who could object, in theory? In practice, this vocabulary is often used to justify a governance stack that only large incumbents can afford to navigate. The result is predictable: the same “responsible” firms get licensed to build and deploy, and everyone else gets regulated into irrelevance.
The open door to monopoly
Regulation has compliance costs. Compliance costs favor scale. Scale favors the firms already closest to the state.
Even where laws claim to carve out room for open source, uncertainty can still chill development. The Linux Foundation’s EU office warns that open source AI developers may not realize obligations can apply to them, and that non-compliance can carry steep costs.
That is how you get a world where “safety” becomes an insurmountable obstacle for anyone who is not (part of) a billion-dollar state-subsidized tech giant. A world where only a handful of companies, working hand-in-glove with regulators, are allowed to produce technology that everyone else must rent.
If you care about freedom, and the benefits of a functional civilization that come with it, that should set off alarms. Centralized AI is not just a technical risk, or just an obstacle that will disproportionately disadvantage politically incorrect expression. Those first-order effects will inevitably compound into second-order problems down the line.
The golden age is decentralized
The promise of AI is not that it will make governments wiser. It is that it can make citizens harder to fool.
A genuinely open ecosystem, with strong local models, open weights, and competitive inference, distributes power away from ministries, megacorps, and “trusted” intermediaries. It gives families, small businesses, journalists, and dissidents the same analytical horsepower that used to be reserved for institutions with budgets and back channels.
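This is not hypothetical. As a minimal sketch, assuming you have the llama-cpp-python library installed and an open-weight GGUF model file downloaded to disk (the file paths below are placeholders), a few lines of Python are enough to run that kind of analyst entirely on your own hardware:

```python
# Minimal local-inference sketch using llama-cpp-python.
# Assumptions: `pip install llama-cpp-python`, plus an open-weight
# GGUF model on disk. Both file paths below are placeholders.
from llama_cpp import Llama

# Load the model on your own machine: no API key, no cloud
# gatekeeper, no query log leaving your laptop.
llm = Llama(model_path="./models/open-model.gguf", n_ctx=4096)

prompt = (
    "Summarize the competing claims in the following policy statement, "
    "list the sources each side relies on, and flag any internal "
    "contradictions:\n\n" + open("statement.txt").read()
)

# Run inference locally and print the model's analysis.
output = llm(prompt, max_tokens=512)
print(output["choices"][0]["text"])
```

Everything in that exchange happens on hardware you own. No intermediary can refuse the question, edit the answer, or report that you asked.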
That shift is what the safety crusade is really trying, desperately, to prevent.
Yes, AI can be abused. So can printing presses, encryption, and cash. Even paper and pencil will eventually be used to draw something that upsets the delicate sensibilities of whoever happens to be the current Stalin-wannabe. A sane society would not respond by criminalizing tools, but by targeting crimes while preserving the broad right to use these tools, to speak, and to investigate.
Because if the people who consistently support, promote, worship, and construct the largest, most efficient mass-murder machines in history (governments) are now volunteering to protect you from your laptop, you should at least ask the obvious question.
Are they trying to make AI safe for you, or safe from you?