The Judgment Gap:
Why the AI Economy Still Needs People Who Understand the Work
“Efficiency is doing things right; effectiveness is doing the right things.” — Peter Drucker
By Bryan J. Kaus
A lot of the discussion around AI is too shallow.
Not because the technology is unimpressive.
It isn’t.
Not because the disruption is imaginary.
It isn’t.
But because too much of the conversation begins with automation and ends with labor reduction, as though the central question is simply how many tasks can be compressed, how many people can be removed, and how quickly management can turn capability into cost savings.
That is not the only question.
And in many cases, it is not even the right one.
The more serious question is this:
What happens when an institution starts outsourcing not merely work, but understanding?
That, to me, is where the real issue begins.
The Principle Being Ignored
Drucker’s distinction matters here because AI is a wonderful efficiency machine.
It can summarize, draft, synthesize, classify, compare, search, and model. It can take friction out of workflows that have been clumsy for years. It can help smaller teams operate with more reach. It can remove low-value repetition. It can make organizations faster.
But speed is not strategy.
And capability is not the same thing as throughput.
What many leaders seem to be missing is that AI can help an institution do things right while also making it easier to stop asking whether it is doing the right things at all.
That is where trouble begins.
McKinsey has argued that by 2030, activities accounting for up to 30% of hours worked in the U.S. economy could be automated, though it also notes that many higher-skill roles will be enhanced rather than simply eliminated.
Anthropic’s own labor-market work tells a similar but more textured story. It finds real exposure in white-collar occupations, but no systematic increase in unemployment for highly exposed workers since late 2022, even as hiring for younger workers appears to have slowed in some exposed fields.
In other words: the capability story is real, but the labor story is more uneven and more contingent than the loudest predictions imply.
That should change what we are actually debating.
Not whether AI is powerful.
It is.
But whether the people implementing it are thinking deeply enough about judgment, resilience, and institutional competence.
Where This Became Real to Me
Years ago, I started building models to automate the economics behind fuel deals I was doing. I wanted to be precise. To post-audit. To respond dynamically to the market. To eliminate inefficiency and missed value.
To me, the instinct was obvious.
Pull in the variables. Structure the logic. Improve consistency. Reduce noise. Get to a better answer faster.
That is still how I think.
If a system can help a business see more clearly and decide more intelligently, it should be built.
But someone senior who had been doing the job for 30 years asked a question that stayed with me:
What happens if the people using it do not understand what it is telling them?
That was not theoretical.
We saw it firsthand.
In our branded business, automation made the economics look cleaner and more standardized. But in too many cases, the people closest to the deals did not really understand the underlying math (worse still, the math in the automated model didn’t work right). Meanwhile, economics in the branded channel were eroding. I ended up building models for many of our branded reps because some of them did not understand the economics well enough to protect value. More than once, that kept us from signing ten-year deals with returns that were roughly negative 35%… sometimes worse.
The system could produce an answer.
The field could produce activity.
But if the people doing the deals did not understand the commercial engine underneath them, the company was not becoming smarter.
It was automating value leakage.
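To make that failure mode concrete, here is a minimal sketch with invented numbers (not the actual deal model) of the kind of check a system should surface and a rep should be able to interpret: whether a long-dated deal actually earns a positive return.

```python
# Illustrative only: invented numbers, not the real branded-deal model.
# The point is that a ten-year deal can look like steady cash flow
# while its internal rate of return (IRR) is deeply negative.

def npv(rate, cashflows):
    """Net present value; cashflows[0] is the year-0 flow."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=1.0, tol=1e-7):
    """IRR by bisection; assumes one sign change in the cashflows."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid  # NPV still positive, so the true IRR is higher
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical deal: $500k upfront incentive, $30k/year of margin
# for ten years -> only $300k ever comes back.
deal = [-500_000] + [30_000] * 10
print(f"IRR: {irr(deal):.1%}")  # negative: the deal destroys value
```

A model can compute this in a millisecond. The judgment question is whether the person signing the deal knows what the number means, and what assumptions it rests on.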
That, to me, is the caution.
A useful system should not merely generate output. It should strengthen the capability of the people using it. It should sharpen judgment, not tempt leaders to bypass it.
The Seduction of the Clean Answer
One reason AI is so compelling is that it flatters a very modern managerial instinct.
It promises speed without friction.
Scale without proportional labor.
Output without delay.
Apparent certainty without the full burden of wrestling with ambiguity.
And in many cases, the gains are real. That is why this is not an anti-AI argument. It is an anti-naiveté argument.
The risk is not that AI produces nothing of value.
The risk is that leaders start mistaking a cleaner answer for a better one.
A sharper dashboard is not the same thing as institutional understanding.
A faster recommendation is not the same thing as sound judgment.
A polished output is not the same thing as knowing what to do when the assumptions underneath it fail.
That is the part I think many people — especially those saturated in the service economy, in software, in SaaS, in workflow consulting, in the abstraction layer of modern business — still underestimate.
Fukushima and the Limits of the Automated Layer
I was recently watching the new Fukushima documentary, and one of the details that stayed with me was not just the scale of the failure, but the nature of the response.
When Fukushima Daiichi lost AC and DC power, operators lost critical instrumentation and control capability. TEPCO’s own account says batteries were carried from employees’ cars into the control room and connected so operators could open safety relief valves and depressurize the reactor pressure vessel. A National Academies review likewise noted that the loss of AC and DC power shut down key monitoring instrumentation, and emphasized that robust, diverse monitoring and loss-of-power response capability are essential.
That is an extreme case, obviously.
But the lesson is not confined to nuclear plants.
When the automated layer fails, only underlying competence gives you options.
People had to know what they were looking at.
They had to know how the system worked.
They had to improvise under conditions the normal architecture was not designed to handle.
That is not a romantic argument for heroic operators. It is a practical argument for competence. And a lesson we cannot afford to ignore.
Because if your model of management is that the system runs itself, the moment you lose visibility, power, connectivity, or automation, you are no longer managing a business.
You are watching one drift from the outside (much as the operators at Fukushima watched their reactors once the instruments went dark).
That is why I worry when AI is framed as though it reduces the need for deep operational knowledge. In reality, it raises the premium on it.
The Physical World Still Matters
This is another place where the AI conversation often drifts into fantasy.
AI does not hover above reality.
It sits on power grids, transmission constraints, substations, data centers, cooling systems, semiconductor supply chains, fiber networks, and cloud infrastructure. The International Energy Agency projects that electricity generation to supply data centers will grow from 460 TWh in 2024 to over 1,000 TWh by 2030 in its base case. Reuters recently reported that U.S. power consumption is expected to hit record highs again in 2026 and 2027 as demand rises from AI, crypto, and broader electrification.
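For a rough sense of scale, the IEA base-case figures cited above imply a growth rate you can check on the back of an envelope:

```python
# Back-of-the-envelope check on the IEA base-case figures:
# data-center electricity demand growing from 460 TWh (2024) to
# roughly 1,000 TWh (2030) implies this compound annual growth rate.
start, end, years = 460.0, 1_000.0, 6  # TWh, TWh, 2024 -> 2030
cagr = (end / start) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year")  # roughly 14% per year
```

Sustaining double-digit annual growth in physical generation, transmission, and cooling capacity is an infrastructure problem, not a software problem.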
So no, the rollout will not be frictionless.
There will be infrastructure constraints.
There will be execution bottlenecks.
There will be outages, cyber risks, and geopolitical interruptions.
There will be sectors that move quickly and others that do not.
And leaders who build their institutions as though the software layer is the whole system are setting themselves up for disappointment.
Or worse.
The View From Tech Is Not the Whole Economy
Another distortion is that many of the loudest voices on AI come from the very sectors most predisposed to see the world as software.
If you live in consulting, SaaS, enterprise workflow, digital services, or code-heavy knowledge work, then yes — much of the economy can look radically automatable.
But that is not the whole economy.
McKinsey’s own work suggests that office support, customer service, and some routine knowledge-work categories are likely to see the biggest automation effects, while many other categories are more likely to be augmented than erased. Anthropic’s latest data similarly shows much higher exposure in computing, administrative, and customer-facing information roles than in physical occupations.
That tracks with common sense.
You are not going to automate away every café, landscaping company, maintenance contractor, industrial field operation, hospitality venue, or hands-on service business because a language model improved.
In fact, some of the more resilient sectors may prove to be the so-called boring ones: practical, physical, operationally grounded businesses that still require real-world execution and human judgment in context.
Ironically, parts of the technology and services economy may be the ones most exposed to compression precisely because the product is closer to the medium being disrupted.
The Knowledge-Capture Trap
There is another angle here that deserves more attention.
The Wall Street Journal recently noted that enterprise AI systems are increasingly being used to capture employee know-how and work processes, embedding expertise into systems in ways that can make workers more replaceable over time. In separate reporting, the Journal also described a widening gap between executive claims about AI-driven efficiency and what many employees say they are actually experiencing on the ground.
That matters.
Because AI is not only a labor-saving tool.
It is also a knowledge-capture tool.
That can be useful. It can preserve process memory, reduce dependency on single points of failure, and scale better practices.
But it can also become a quiet mechanism for deskilling if leaders are not careful.
If younger professionals never learn the underlying work because the first pass is always outsourced to a system, then the organization may become more efficient in the short term while becoming less capable in the long term.
What you get is not resilience.
It is cannibalization.
What Serious Leaders Should Do
The leaders who win in this environment will not be the ones who simply chase the shiny object fastest.
They will be the ones who actually know their business.
And that requirement gets more important over time, not less.
They will ask:
Where does AI genuinely improve throughput?
Where does it reduce low-value repetition?
Where must human review remain close to the decision?
What commercial, technical, or operating knowledge must stay inside the firm?
How do we build systems that enhance people rather than merely replace them?
How do we ensure that fewer people does not mean less competence?
Because that is the real design challenge.
I do think many organizations will end up with leaner teams.
But leaner cannot mean hollower.
The winning model is not “humans or AI.”
It is a hybrid system where automation lifts productivity, highly capable people remain close to the work, and the institution still knows how its own engine functions when the software layer is under stress.
That is resilience.
That is stability.
That is what separates serious management from fashion.
The Point Taken:
The biggest risk in the AI economy is not that machines suddenly do everything.
It is that leaders become so enamored with speed, automation, and cost reduction that they start hollowing out the judgment their institutions still depend on.
AI can absolutely improve productivity.
It can absolutely make organizations faster.
It can absolutely help smaller teams do more.
But the winners will not be the ones that outsource understanding.
They will be the ones that build hybrid systems, preserve real competence, and use technology to enhance human capability rather than quietly replace the very judgment needed to run the business when reality refuses to behave.
Because the further you move into a world of intelligent systems, the more dangerous it becomes to have leaders and organizations that do not actually know how their own business works.
And the more valuable it becomes to have fewer, more capable people who do.