The End of Middle Management
AI and the Managerial Counterrevolution
TL;DR
LLMs and agents are the first coordination technology that eliminates the need for more managers: they make all data legible and collapse the process layer, breaking the Managerial Class’s monopoly on information, coordination, and judgment.
If you’re a manager or mid-career professional, move to the edges: set goals, catch misalignment, hold contextual knowledge agents can’t access, and be an editor accountable for output quality, not a coordinator assembling deliverables.
The Old Loop
A company operates in a feedback loop: set goals, make activity legible, sense, plan, allocate, execute, learn, adjust, repeat. The Managerial Class exists because planning is separate from execution, work is decomposed into measurable tasks, and judgment stays with management. James Burnham observed that managers capture control from owners because they run this coordination infrastructure, the effective means of production; their hold on the coordination layer itself becomes their power base.
Every coordination technology since the telegraph has expanded managerial reach: information speed, remote oversight, logistics, async coordination, cross-system integration. Each wave made larger, more complex organizations possible, and required more managers to run them.
LLMs and agents are the first coordination technology that lets principals operate the loop without expanding the Managerial Class.
We Shape Excel And Thereafter Excel Shapes Us
The spreadsheet abstracted evaluation from domain expertise: the performance and prospects of a meatpacking company, retailer, or data center could be understood with standardized metrics and modeled in minutes. ERPs replicated this internally, standardizing measurement, planning, and allocation into a single system. SaaS unbundled it into specialized modules, each useful in isolation but siloed overall.
Abstraction means discarding information until what remains is modelable: recordable, comparable, and administrable. James C. Scott called the state’s version of this legibility: simplified schemas imposed to make complex realities governable. Corporate tools do the same thing from the inside.
This used to be a bottleneck: manual codification depends on managers who decide what gets entered, what becomes visible, what stays dark. LLMs render almost all data legible: they read call transcripts, Slack threads, images, and emails directly. Nobody has to decide what gets codified or codify it by hand, which makes the manager's role as codifier redundant.
The New Loop
While LLMs expand what principals can see, agents collapse the process layer built to act on the old, narrower legibility. Every SaaS workflow automation is an explicit specification of how process work should happen. These specifications exist because processes need standardization to scale and humans need training.
Agents don’t need any of that. They need a goal (the objective function), access to tools, and placement in the environment. They don’t need the organization to make itself legible to a tool. If your value proposition is knowing the process, translating between systems, or routing work to the right person, that work will be automated: agents can do this faster and without the overhead of onboarding, retention, or management. When exactly this becomes widespread depends on enterprise adoption rates and will vary by sector, but the direction is not in question. Process is the easier part, judgment looks harder to automate, but that’s already underway too.
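The contrast above can be sketched in a few lines. Everything here is illustrative: the tool names, the goal format, and the stubbed model standing in for an LLM call are all hypothetical, not a real API. The point is structural: no workflow is hard-coded anywhere; the model picks the next tool call from the goal and what it has observed so far.

```python
# Minimal agent loop: a goal, a set of tools, an environment. No process spec.
# `stub_model` is a deterministic stand-in for an LLM; in practice the model
# would choose tool calls from the goal and observation history.

def lookup_invoice(invoice_id: str) -> dict:
    """Illustrative tool: read from a (stubbed) system of record."""
    return {"id": invoice_id, "status": "overdue", "amount": 1200}

def send_reminder(invoice: dict) -> str:
    """Illustrative tool: act on the environment."""
    return f"reminder sent for {invoice['id']} (${invoice['amount']})"

TOOLS = {"lookup_invoice": lookup_invoice, "send_reminder": send_reminder}

def stub_model(goal: str, observations: list) -> tuple:
    """Stand-in for the LLM: decide the next tool call from goal + history."""
    if not observations:
        return ("lookup_invoice", "INV-42")
    last = observations[-1]
    if isinstance(last, dict) and last["status"] == "overdue":
        return ("send_reminder", last)
    return ("done", None)

def run_agent(goal: str, max_steps: int = 5) -> list:
    """The whole coordination layer: call the model, call the chosen tool."""
    observations = []
    for _ in range(max_steps):
        action, arg = stub_model(goal, observations)
        if action == "done":
            break
        observations.append(TOOLS[action](arg))
    return observations

result = run_agent("chase overdue invoices")
```

Note what is absent: there is no step that encodes "first look up, then remind." That sequencing lives in the model's choices, which is exactly why the explicit process specifications that SaaS workflows (and the managers who maintain them) exist to enforce become unnecessary.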
The Codifier’s Curse
The AI research labs, via data labeling companies, are paying doctors, investment bankers, lawyers, etc. $200 to $600 per hour to decompose their expert judgment into discrete evaluation tasks. This is the Taylorization of higher-order cognitive work: an expert externalizes their tacit judgment into training data, each evaluation makes the agent incrementally better at the expert's job, and eventually the expert has automated themselves. This is the Codifier's Curse.
Today’s labeling pipelines capture what the expert decided but not whether the decision led to a good outcome, so the resulting model optimizes the procedure rather than the result. Closed-loop vertical platforms close this gap: a legal AI tool that sees the full cycle from clause analysis through resolution stops mimicking expert procedure and starts learning what actually works. The labeling phase encodes the judgment; the closed loop refines it against outcomes.
Even as the need for expert judgment and evaluation shrinks, demand will persist at two boundaries: distributional shifts (the world changes and historical patterns stop holding) and genuine novelty (no precedent to learn from). This is where pattern matching hits a wall. But it's plausible that agents will conduct research and build their own knowledge to make sense of novelty without needing prior data.
What This Means
The Managerial Class is losing its monopoly on the three things that justified its existence: information, coordination, and evaluation. Agentic AI reduces the costs of the principal-agent problem by replacing human agents with machine agents, even if the Alignment Problem remains. Principals can go full Founder Mode: see without the managers' filtering, act without their process, judge without their expertise. Org charts will flatten.
The ratio of principals to managers will shift dramatically, but it's hard to say what will happen to the absolute number of managers needed, because the number of principals could expand. Managerial work will move to the edges: navigating distributional shifts, handling novelty, managing the political substrate agents can't access, and setting goals. A misspecified goal given to an increasingly capable agent destroys value faster than a bad manager ever could, like a trickster genie who gives you what you asked for but not what you wanted.
For professionals: move to the edges, specify the goals, catch misalignment, hold the contextual knowledge agents can't access. Be an editor accountable for outcome quality, not a coordinator assembling deliverables. Your survival, or at least its prolongation, depends on objective setting and catching misalignment, not execution. For now, expertise and execution skill still matter for knowing when the agent is wrong, so staying hands-on with the latest models and tools as an individual contributor will help. "Human in the loop" is cope unless regulations mandate it (auditors, doctors, etc.). The most famous example of man-machine symbiosis is outdated: human+machine chess teams beat standalone engines until 2008, but by 2017 machines were winning on their own; human input had become a drag. Alternatively, work in a regulated industry where laws mandate a human in the loop and use that as your moat, or get your industry regulated.
For founders and executives: audit your org and automate roles that exist primarily to operate the old loop. The human jobs that create value are the ones where the person sets the objective function and handles the exceptions that agents surface but can’t resolve.
For investors: at the application layer, closing the goal-action-outcome loop is more defensible than vertical SaaS alone. Closed-loop vertical platforms compound learning advantages and sit at the frontier where novelty and distributional shifts first appear.