Since World War II, the professional managerial class has dominated American institutions by asserting that organizational complexity is too great for any one person to fully comprehend. This peaked with the Biden presidency: a leaderless government run by intelligent, experienced, and highly credentialed wonks. It didn’t matter if the chief executive had “bad days,” had to be hidden from the public, and mumbled his way through the speeches he couldn’t fob off.
Institutions were set up to allow a sprawling network of competent experts to get tiny slivers of detail right, while remaining inscrutable to outsiders. No one in this system can see the whole picture. No one is expected to. The job of leadership is not to micromanage but to design the rules and incentives that allow the bureaucratic organism to function effectively despite its vastness.
This wave appears to have broken. A more radical view, equally technocratic and turbocharged by AI hype and a disgruntled electorate,[1] is getting its moment.
The Great Men Are Back
Great Men Elon Musk and Donald Trump believe that complexity is not an inevitability—it’s a racket. The so-called experts are, at best, overcomplicating things to justify their own existence and, at worst, bourgeois parasites who thrive on opacity. Musk, the world’s richest man, has made his fortune by refusing to defer to experts, slashing white-collar jobs deemed critical, and setting targets that conventional wisdom considers impossible. Trump, for his part, has never been comfortable running an organization larger than a family business and doesn't trust anyone he can’t personally control.
This belief has a name: DOGE (Department of Government Efficiency), the meme-inspired ideological core of Trump’s second-term plan to dismantle large swaths of the federal bureaucracy. The central premise is simple: if something is too complex to be understood by a small group of brilliant individuals, then it is too complex to be governed at all. Rather than letting an organic network of career bureaucrats shape policy and execution, the DOGE model demands direct control—simple, hierarchical, and ruthless.
For a long time, this was a fringe view. The sheer complexity of modern institutions made the expertise model seem inevitable. But artificial intelligence is shifting the balance of power. Elon Musk, in particular, is betting that AI is the technological revolution that makes DOGE - and 21st-century government itself - viable.
Clearcutting Bureaucracy
Historically, complexity necessitated middle management—the connective tissue that translated executive directives into action and filtered granular insights back up. This is why large corporations, governments, and particularly universities have ballooned with layers of supervisors, coordinators, and specialists.[2]
But GenAI may change that. For the last decade, companies have replaced middle management with APIs for simple logistical tasks—taxi dispatching, warehouse picking, and courier routing. Now, we are watching AI creep into more sophisticated domains: summarizing reports, parsing legalese, identifying inefficiencies, and even surfacing key strategic decisions.
Our research finds the biggest barrier to scaling is not employees—who are ready—but leaders, who are not steering fast enough.
~McKinsey Consulting, “AI in the Workplace”
Musk envisions a government - right now - where every civil servant reports not to a human bureaucrat but to an AI system. That system assesses a weekly five-bullet-point status report to determine the value of their work.[3] Instead of a chaotic network of semi-autonomous operators, DOGE imagines a streamlined hierarchy where an AI surfaces only the most critical decisions for a small cadre of ultra-competent decision-makers. The implication is staggering: the elimination of bureaucratic inertia, the obliteration of roles whose main function is maintenance rather than output, and the realization of a longstanding dream of autocrats and efficiency obsessives alike—the ability to command and control the machine with surgical precision.
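To make the imagined machinery concrete, here is a minimal, purely hypothetical sketch of what such a triage loop could look like. Everything in it is invented for illustration: the WeeklyReport structure, the keyword heuristic standing in for an AI's judgment, and the escalation threshold describe no actual DOGE, OPM, or xAI system.

```python
from dataclasses import dataclass


@dataclass
class WeeklyReport:
    """One civil servant's weekly five-bullet-point status report."""
    employee: str
    bullets: list[str]


def score_bullet(bullet: str) -> float:
    """Crude stand-in for an AI judgment of how consequential a bullet is.

    A real system would presumably send the text to a language model; the
    keyword list here is purely illustrative.
    """
    high_signal = ("decision", "risk", "blocked", "contract", "deadline")
    return sum(word in bullet.lower() for word in high_signal) / len(high_signal)


def triage(reports: list[WeeklyReport], threshold: float = 0.2) -> list[tuple[str, str]]:
    """Surface only the bullets judged critical enough for human leadership."""
    escalations = []
    for report in reports:
        for bullet in report.bullets[:5]:  # the model reads at most five bullets
            if score_bullet(bullet) >= threshold:
                escalations.append((report.employee, bullet))
    return escalations


if __name__ == "__main__":
    reports = [
        WeeklyReport("analyst_a", ["Filed routine paperwork",
                                   "Awaiting a decision on the vendor contract"]),
        WeeklyReport("analyst_b", ["Attended training", "Updated the intranet page"]),
    ]
    for employee, bullet in triage(reports):
        print(f"ESCALATE {employee}: {bullet}")
```

The entire scheme rests on the scoring step: a toy keyword match in this sketch, and, in any real-world version, whatever model gets to decide what counts as "critical."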
The Richest LARPer in History

The problem is that, despite his public image, Musk is not an engineer and seldom a founder, though he plays one on TV.[4] His obsession with earning nerd-cred has repeatedly forced competent Software Engineers, Automotive Designers, Cave Divers, and Online Gamers to endure his cringe-worthy misunderstandings of basic technical concepts.[5] For all his bluster about AI supremacy, his own ventures in the space lag behind OpenAI and Google DeepMind, and his predictions about technology timelines—whether self-driving cars or Mars colonization—have repeatedly proven wildly optimistic.
DOGE assumes that complexity is just a managerial smokescreen, an illusion perpetuated by entrenched interests. But what if Musk’s own understanding of organizations and technology is the illusion? Is he rich because he deeply understands the companies he acquires? Or because he is the world’s most extreme contrarian at a time when inflated equity valuations have disproportionately rewarded contrarians?
At Least We’ll Learn Something

The very premise that a single brilliant individual can replace the function of a vast, adaptive system is the kind of thinking that leads to spectacular failures, not revolutionary successes. In the equity markets, the rewards for being right when others are wrong are orders of magnitude higher than the costs of being wrong when others are right. The Boring Company, Neuralink, and SolarCity can all go belly-up without cancelling out Tesla and SpaceX’s contributions to Musk’s net worth. We may be about to discover whether the US government can survive the collapse of multiple cabinet-level departments.
One possibility is that the over-thinkers were right all along—that complexity is not a choice but a fundamental reality, and any attempt to impose simplification will result in catastrophe.[6] Another is that Musk and Trump are right, and the institutions we thought were indispensable were merely self-justifying fictions. The answer, as always, will be decided by what happens when theory meets reality.
But one thing is certain: the age of sprawling, opaque bureaucracies is on notice. AI is turning organizations into glass boxes, whether their stewards are ready or not. History may show that Musk was too early on AI progress, and too credulous of “just run government like a business” memes, but others will follow.
The business community is watching and ready to learn. Soon, armies of McKinsey-trained management consultants wielding HBS-published case studies will provide hilariously unironic AI offerings that promise to unlock profits by clearing away the deadwood of fake expertise. I recommend getting in on the ground floor.
[1] And, oddly enough, the neo-monarchist views of a guy called “Mencius Moldbug” who wants to establish a new Sun King whose Versailles is packed with Bay-Area Billionaires.
[2] The Pentagon long ago shot past the critical size where each additional billet necessitates more coordination than a single employee can accomplish. This means each new hire requires a new hire, leading to exponential bloat of paper-pushers responsible for coordinating other paper-pushers. A surprisingly small number of officers are responsible for all actual decision making.
[3] Believing that bureaucrats will be brought to their knees by being forced to write braggadocious status reports shines some light on how much Musk understands about government operations.
[4] In much the same way that Trump himself played a successful businessman on TV.
[5] Believing in Musk’s technical expertise at this point requires a form of Michael Crichton’s Gell-Mann amnesia:
“Briefly stated, the Gell-Mann Amnesia effect works as follows. You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them.
In any case, you read with exasperation or amusement the multiple errors in a story—and then turn the page to national or international affairs, and read with renewed interest as if the rest of the newspaper was somehow more accurate about far-off Palestine than it was about the story you just read. You turn the page, and forget what you know.
That is the Gell-Mann Amnesia effect. I'd point out it does not operate in other arenas of life. In ordinary life, if somebody consistently exaggerates or lies to you, you soon discount everything they say. In court, there is the legal doctrine of falsus in uno, falsus in omnibus, which means untruthful in one part, untruthful in all.”
[6] As Venkatesh Rao puts it, “The deep failure in thinking lies in the mistaken assumption that thriving, successful and functional realities must necessarily be legible.”