
A note before you read: This is the most personal thing I’ve written since “The World Is on Fire.” It connects my childhood in Iran, my career building AI for banks, and a book that forced me to see the thread between them. If you’re here for a technical architecture post, I’ll have one of those next week. This one is about something I can’t stop thinking about.
The Pattern I Recognized Too Late
I was six years old the first time I understood how control actually works.
It wasn’t a dramatic moment. No soldiers at the door. No tanks in the street. It was a textbook (I was homeschooled, but we used textbooks often). A school textbook that had been rewritten — quietly, without announcement — to remove a chapter about pre-Islamic Persian history. One semester it was there. The next, it wasn’t. Nobody explained. Nobody protested. The chapter simply ceased to exist, and we were expected not to notice.
That was my first lesson in how systems control people. Not through violence — that comes later, and only when necessary. Through the quiet elimination of alternatives. Through making the approved version of reality so complete, so seamless, so default, that the idea of questioning it doesn’t even occur to you.
I grew up inside that system. I left Iran early in my childhood. I built a career in technology. I became the Head of AI at a global banking platform company. I’ve spent the last two decades building systems designed to serve people.
And somewhere in the last year — reading Thomas Sowell’s Intellectuals and Society while watching the Islamic Republic finally collapse under the weight of Operation Epic Fury — I recognized a pattern that made me deeply, physically uncomfortable.
The pattern was familiar. Not because I’d read about it. Because I’d lived inside it.
The AI industry is running the dictator’s playbook.
Not deliberately. Not maliciously. Not with tanks and secret police. But structurally — in the way that matters — the mechanics are identical. And because I’ve seen what those mechanics produce when they run to completion, I feel an obligation to say it clearly, before it’s too late to say it politely.
Sowell’s Knife
Let me start with the book, because the book is what gave me the vocabulary.
Thomas Sowell’s Intellectuals and Society makes one argument, and it makes it with the precision of a scalpel: the people whose primary output is ideas operate in a unique and dangerous economic position. They are structurally insulated from the consequences of being wrong.
A surgeon who is wrong loses a patient. A bridge engineer who is wrong watches concrete fall. But an intellectual who champions a catastrophic idea? They write another paper. They give another keynote. They move on to the next framework, the next theory, the next confident prediction — while the wreckage of the previous one is cleaned up by someone else, somewhere else, usually in silence.
Sowell calls this the “vision of the anointed” — the self-reinforcing belief among a class of thinkers that their superior insight entitles them to make decisions for others. Not because they’ve earned it through results, but because they’ve credentialed themselves through consensus. The anointed don’t need to be right. They just need to agree with each other.
When I read that, I didn’t think about academia. I didn’t think about media pundits or policy think tanks.
I thought about the AI industry.
And then I thought about the Islamic Republic.
And I realized, with a clarity that troubled me for days, that the structural architecture was the same.
The Four Stages, Revisited
In “The World Is on Fire,” I described the four stages by which a dictator consolidates power:
- The Savior Narrative — a crisis, real or manufactured, that creates demand for a singular solution.
- The Silencing of Alternatives — the quiet, incremental elimination of competing voices, always justified by urgency.
- The Ideological Moat — wrapping authority in something untouchable, so that criticism becomes blasphemy.
- The Economy of Loyalty — building a class of beneficiaries whose wealth, status, and survival depend on the system’s continuation.
I wrote those stages about Khamenei. About Stalin. About Mao. About political dictators, not CEOs.
I now believe they describe, with uncomfortable accuracy, the structural evolution of the AI industry.
Let me show you what I mean.
Stage One: The Savior Narrative
Every dictator begins with a real problem.
Khomeini didn’t invent Iranian suffering — the Shah’s regime was genuinely repressive, genuinely corrupt, genuinely propped up by foreign powers. The revolution had legitimate grievances at its core. What made it dangerous was not the grievance. It was the claim that one framework, controlled by one class of people, was the only possible answer.
Now look at AI.
The problems AI claims to solve are real. Inefficiency is real. Information overload is real. The inability of institutions to personalize at scale — I’ve spent my career fighting that exact problem in banking. The grievance is legitimate.
But the savior narrative that has been constructed around AI is not a description of what the technology can do. It is an ideology — a comprehensive worldview that says: AI will solve everything, AI is inevitable, and the only question is how fast you adopt it.
This narrative has been so thoroughly absorbed that questioning it — not questioning specific applications, but questioning the totality of the claim — marks you as a Luddite, a dinosaur, someone who “doesn’t get it.”
When the savior narrative becomes untouchable, you are already in Stage One.
I’ve sat in boardrooms across Asia and Europe where the mandate is not “evaluate whether AI solves this problem” but “put AI into this process.” The question has been pre-answered. The only discussion is implementation. That is not technology adoption. That is faith.
Stage Two: The Silencing of Alternatives
In Iran, alternatives weren’t banned overnight. They were made irrelevant.
The newspapers that asked hard questions didn’t get raided on Day One. They got starved — of advertising, of access, of the oxygen of official engagement. The professors who dissented didn’t get arrested immediately. They got passed over, marginalized, made to understand that their career advancement was inversely proportional to their willingness to challenge the narrative.
The AI industry does this with extraordinary efficiency, and almost entirely without malice.
It does it through funding. The venture capital ecosystem does not fund AI skepticism. It does not fund research into what AI should not do. It funds acceleration. It funds scale. It funds the next model, the next capability, the next leap. The incentive structure of the entire industry is oriented toward more — and anyone who argues for less, or slower, or differently, is simply not in the conversation. Not because they’ve been silenced. Because the room was built without a chair for them.
It does it through hiring. The talent pipeline in AI selects for believers. Not cultists — most of them are thoughtful, serious people. But the filter is real. If you walk into an AI lab and say “I think we should slow down and understand what we’ve built before we build the next thing,” you are not fired. You are simply not interesting. The energy, the funding, the status, the opportunities — they flow toward the builders, not the questioners.
It does it through language. The vocabulary of the AI industry has been carefully — if unconsciously — constructed to make dissent sound foolish. “AI safety” is real and important, but it has also become the only approved channel for criticism. If your concern doesn’t fit inside the safety framework — if you’re worried about economic displacement, about institutional dependency, about the concentration of cognitive authority in a handful of companies — you don’t have a word for what you’re worried about. And without the word, you don’t have the conversation.
Sowell would recognize this instantly. He described how intellectual establishments don’t suppress dissent through force. They suppress it through consensus — through making the approved position so dominant that alternatives become socially expensive to hold. You don’t need censorship when you have a culture that treats the dissenter as simply uninformed.
The most effective silencing doesn’t feel like silencing. It feels like being left behind.
Stage Three: The Ideological Moat
Khamenei wrapped himself in Islam. Not faith — institutional Islam. A version of religion so thoroughly fused with political power that criticizing the government was indistinguishable from criticizing God.
The AI industry has its own sacred doctrine: progress.
To question AI is to question progress. To resist adoption is to resist the future. To argue that some problems should not be solved by AI — or that some solutions are worse than the problems they address — is to position yourself on the wrong side of history.
This is not a metaphor. This is the actual language used in board meetings, investor pitches, and industry conferences. I have heard, with my own ears, a senior executive describe a cautious approach to AI deployment as “choosing to be Kodak.” Not as analysis — as warning. As moral judgment. As a way of saying: the future has a direction, and you are either with it or you are roadkill.
When questioning a technology becomes morally equivalent to failure, you have built an ideological moat.
And here is where Sowell’s framework becomes essential. The ideological moat persists because the people who built it — the researchers, the thought leaders, the founders, the venture capitalists — do not bear the cost of being wrong.
If AI displaces ten million jobs, the people who championed the displacement will not be among the displaced. If AI creates a generation of institutions that cannot function without their vendor’s model, the architects of that dependency will have moved on to the next thing. If a bank’s AI-driven lending algorithm systematically disadvantages a community, the engineer who built it will not live in that community.
The feedback loop is broken. And when the feedback loop is broken, the ideology becomes self-reinforcing — because there is no mechanism to update it with reality.
Sowell saw this pattern in every intellectual movement he studied: the people who pay the least for being wrong are the people most confident in their rightness.
I saw the same pattern in Iran. The clerics who designed the economic system — the sanctions-busting, Revolutionary Guard-enriching, citizen-impoverishing system — lived behind walls. Their children studied abroad. Their wealth was denominated in currencies they publicly denounced. They never experienced the system they imposed.
The architect who never lives in the building has no incentive to fix the plumbing.
Stage Four: The Economy of Loyalty
This is the stage that most people miss entirely, and it’s the one that matters most.
Khamenei didn’t survive 35 years because Iranians believed in him. He survived because the Revolutionary Guards, the bonyads, the clerical establishment — the entire economic superstructure of the Islamic Republic — had their wealth, their status, and their survival tied to his continuation. They didn’t support the regime out of faith. They supported it because the regime made them rich.
Now look at the AI ecosystem.
Cloud providers are building an economy of dependency that would make the Revolutionary Guards nod with recognition. The architecture is elegant: start with free tiers, build integration depth, create switching costs, and then — once the institution cannot function without you — monetize. Every API call, every token, every inference is a toll on a road you didn’t build but can no longer avoid.
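To make the toll concrete, here is a back-of-envelope sketch in Python. Every number in it is hypothetical; the point is not the figures but the shape. The spend that was a rounding error during the pilot becomes structural once the workflow is load-bearing and the pricing lever sits with the vendor.

```python
# Back-of-envelope sketch of the metered-inference toll, using entirely
# hypothetical volumes and prices. Only the shape of the curve matters.

def annual_toll(calls_per_day: int, tokens_per_call: int,
                usd_per_1k_tokens: float) -> float:
    """Yearly spend once a workflow is routed through a metered model API."""
    daily = calls_per_day * tokens_per_call / 1_000 * usd_per_1k_tokens
    return daily * 365


# Year one: a pilot on a discounted tier.
print(f"pilot:     ${annual_toll(10_000, 2_000, 0.0005):>12,.0f} / year")

# Year three: the workflow is load-bearing and the discount is gone.
print(f"dependent: ${annual_toll(500_000, 2_000, 0.0020):>12,.0f} / year")
```

Nothing in that arithmetic is a scandal on its own. The trap is that by year three, the cost of leaving (re-integration, re-validation, retraining people) usually exceeds the toll, which is exactly what a switching cost is.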
I’ve written about this before as the “Cloud Tax.” But Sowell’s framework helps me see it more clearly. It’s not just a pricing problem. It’s a loyalty economy. The cloud providers, the model vendors, the consulting firms, the integration partners — they form a class of beneficiaries whose entire business model depends on the continued expansion of AI dependency. They are not going to tell you to slow down. They are not going to tell you that your problem doesn’t need AI. They are not going to recommend the simpler, cheaper, less AI-intensive solution.
Not because they are evil. Because they are rational. Their incentives are aligned with expansion, not with your optimization.
The economy of loyalty doesn’t require conspiracy. It only requires aligned incentives among the beneficiaries (like a good ESOP).
And the CEOs who signed the contracts, the boards who approved the budgets, the CIOs who staked their careers on the transformation — they become part of the loyalty economy too. They cannot afford for AI to underperform, because their own credibility is invested. So they report success selectively, they measure what flatters, and they quietly suppress the metrics that don’t.
I have seen this. Personally. In banking. In the very boardrooms where I present.
It is not malice. It is architecture. And it is exactly how Khamenei’s system worked for 35 years.
What My Country Taught Me That Silicon Valley Hasn’t Learned
I want to be careful here. I am not saying that Sam Altman is Khamenei. I am not saying that Google is the Revolutionary Guard. I am not making a moral equivalence between building a large language model and running a theocratic dictatorship.
What I am saying is that the structural mechanics — the way power consolidates, the way dissent is marginalized, the way dependency is manufactured, the way an ideology becomes unfalsifiable — are the same mechanics. They operate at different scales, with different consequences, and with different levels of human suffering. But the architecture is identical.
And the reason I can see it is that I grew up inside one version of it and spent my career building inside another.
Iran taught me three things that I carry into every AI project, every board meeting, every architectural decision:
First: the system that cannot be questioned is the system that will eventually fail catastrophically. The Islamic Republic could not be questioned — and so it couldn’t adapt. It couldn’t update. It couldn’t incorporate feedback from the 85 million people living inside it. It ran on ideology instead of information, and when reality finally diverged far enough from the ideology, the whole thing shattered. If your AI strategy cannot be questioned — if the decision to adopt is treated as settled, if the only discussion is how much and how fast — you are building the same brittle architecture.
Second: the class of people who design the system must live inside the system. This is Sowell’s lesson, but I learned it in Ilam (my Kurdish village) long before I read it in English. The clerics lived behind walls. The engineers who build AI live in a different economic reality than the people whose jobs, credit scores, and financial lives are shaped by the models they ship. If the builders don’t bear the consequences, the building will eventually collapse on the people inside it.
Third: institutional redundancy is the only protection against capture. I wrote in “The World Is on Fire” that the antidote to dictatorship is not a better opposition — it’s institutional redundancy. Courts that no single actor controls. Media that no single actor owns. The same principle applies to AI: the antidote to AI capture is not better AI. It is architectural diversity — multiple models, multiple vendors, multiple approaches, edge computing that keeps intelligence distributed rather than concentrated, data governance that prevents any single system from becoming the only version of truth.
This is why I’ve spent the last year building edge-first architectures, hybrid AI systems, and decentralized data meshes. Not because I’m anti-cloud or anti-AI. Because I’m anti-concentration. Because I’ve seen what concentration produces when it runs long enough without accountability.
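If architectural diversity sounds abstract, here is a minimal Python sketch of what it looks like in practice. The provider names and routing order are hypothetical illustrations, not any real API; the shape is the point. No single vendor sits between the institution and an answer.

```python
# A minimal sketch of architectural diversity: interchangeable model
# providers behind one interface, so no single vendor is the only
# version of truth. Provider names and behavior are hypothetical stubs.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Provider:
    name: str
    complete: Callable[[str], str]  # prompt -> completion


def redundant_complete(providers: List[Provider], prompt: str) -> str:
    """Try each provider in turn; any single outage or lock-in is survivable."""
    failures = []
    for provider in providers:
        try:
            return provider.complete(prompt)
        except Exception as exc:  # a real system would catch narrower errors
            failures.append(f"{provider.name}: {exc}")
    raise RuntimeError("every provider failed: " + "; ".join(failures))


def flaky(prompt: str) -> str:
    raise TimeoutError("vendor outage")


# Stubs standing in for two cloud vendors and a local edge model.
providers = [
    Provider("cloud-a", flaky),
    Provider("cloud-b", lambda p: f"[cloud-b] {p}"),
    Provider("edge-local", lambda p: f"[edge] {p}"),
]

print(redundant_complete(providers, "summarize this loan file"))
```

The fallback order is a policy decision; what matters is that switching vendors becomes an argument to a function rather than a year-long migration.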
The Lesson for CEOs, Founders, and AI Workers
I’ll keep this section short, because I’ve learned that the people who need these lessons the most are the ones with the least patience for long essays.
If you’re a CEO: Ask yourself the question Sowell taught me to ask — what happens in my organization to the person who says AI is the wrong solution? If the answer is “nothing good,” you’ve built an ideological moat. Not around a dictator. Around a technology. The structural effect is the same: you’ve eliminated the feedback mechanism that keeps your strategy honest.
If you’re a founder: You are almost certainly inside Stage One without knowing it. You have a genuine problem. You have a genuine solution. And you have surrounded yourself with people who agree that your solution is the right one — because the funding, the hiring, and the entire ecosystem selects for agreement. Build the adversarial function into your team before you need it. The company that can challenge its own thesis is the company that survives contact with reality.
If you’re an AI engineer, researcher, or product manager: You are, whether you like it or not, a member of Sowell’s intellectual class. Your primary output is ideas — models, architectures, alignment philosophies. And you are structurally insulated from the consequences of those ideas in a way that should keep you awake at night. The model ships. The paper gets cited. The startup gets funded. Whether the technology actually improves or damages the lives it touches — that feedback arrives slowly, ambiguously, and usually to someone else’s desk. You owe it to yourself and to the people downstream of your work to actively seek that feedback, to go find the consequences, to refuse the comfort of not knowing.
If you’re anyone: The most dangerous sentence in any language is “This time it’s different.” It is never different. The mechanics of power, dependency, and ideological capture are as old as civilization. They don’t require bad people. They only require good people who stop asking hard questions because asking is expensive and compliance is free.
The Through Line
I started this piece with a textbook.
A homeschooled boy, six years old, noticing that a chapter had been quietly removed. Not banned. Not burned. Just… absent. Made irrelevant by the simple act of not including it.
Thirty-five years later, I sit in meetings where entire categories of questions have been quietly removed from the agenda. Not banned. Not forbidden. Just… absent. Made irrelevant by the simple act of not including them.
“Should we use AI for this?” is no longer asked. Only how. “Who bears the cost if this fails?” is no longer asked. Only how fast can we scale. “What are we not seeing?” is no longer asked. Because the people who would ask it are no longer in the room.
The Islamic Republic lasted 46 years because it was architecturally brilliant at eliminating alternatives, manufacturing dependency, and making its ideology sacred. It fell, in the end, because reality is not ideological. Reality doesn’t care about your narrative. It only cares about what works.
Every system that insulates itself from consequences eventually meets a consequence it cannot survive.
I grew up inside one such system. I will not build another.
The dictators of the next century will not look like Khamenei or Stalin. They will look like dashboards. And the people who built them will be genuinely surprised when someone calls them tyrants — because they never intended to be. They just forgot to ask who was living inside the system they designed.
Thank you for reading something different. Again.
If this resonated, you might also want to read “The World Is on Fire. Here’s What I’ve Been Thinking.” — the essay that started this thread.


