Everything Everywhere All at Once
Earnings season, AI capex, and the capital governance test every company now faces
By Bryan J. Kaus
"Show me the incentive and I will show you the outcome." — Charlie Munger
Earnings season has offered plenty of insight across the spectrum. Looking through last week's results, I found a few things that made me want to dig deeper.
For instance, Amazon’s Q1 2026 8-K disclosed something worth pausing on.
The company reported $16.8 billion in pre-tax gains in non-operating income from its investments in Anthropic. That is a paper markup, reflecting Anthropic’s most recent funding round at a $380 billion valuation. In the same window, Amazon agreed to invest up to $25 billion more in Anthropic, on top of $8 billion already committed, while Anthropic committed to spend more than $100 billion on Amazon Web Services (AWS) technology over the next 10 years, including up to five gigawatts of capacity on Amazon’s custom Trainium chips.
None of that is improper. Mark-to-market accounting on equity investments is standard. The strategic logic may be sound. Anthropic is a real company with real revenue.
But sit with the architecture for a moment. A strategic investor takes a larger stake in a customer. The customer commits to enormous future spend back to the strategic investor. The customer’s valuation rises. The strategic investor reports paper gains on its existing stake. Those gains contribute to near-term reported income.
That is the AI capital cycle in miniature. It is real. It is strategic. It is possibly excellent. And it is also a closed loop that is harder to evaluate cleanly than current valuations suggest.
The lesson here is not really about Amazon, or Anthropic, or AI specifically. It is about capital governance. How organizations behave when the upside feels unlimited and the urgency to act is intense. How investors price overlapping claims on the same future profit pool. How boards and management teams allocate capital when every initiative carries the rhetorical weight of inevitability.
AI is the current case study. Capital discipline is the lesson.
What the Earnings Cycle Actually Showed
The numbers from the past two weeks were not subtle.
Alphabet Inc. reported Q1 2026 revenue of $109.9 billion, with Google Cloud growing 63% to $20 billion and backlog nearly doubling sequentially to over $460 billion. Capex in the single quarter was $35.7 billion. Management raised full-year 2026 capex guidance to $180 billion to $190 billion and signaled that 2027 capex will “significantly increase” compared to 2026.
Amazon reported AWS growth of 28% and confirmed approximately $200 billion in capex for 2026.
Microsoft posted fiscal Q3 capex of $30.88 billion, up 84% year over year, with AI revenue surpassing a $37 billion annual run rate.
Meta raised its 2026 capex range to $125 billion to $145 billion, up $10 billion at both ends, and tapped the bond market for up to $25 billion in long-dated investment-grade debt to help fund the spend.
Aggregated, the four major U.S. hyperscalers are now expected to deploy close to $725 billion in capital expenditure in 2026, roughly double the prior year’s already-record level. Goldman Sachs noted that hyperscaler capex would need to reach $700 billion in 2026 to be in line with the peak of spending during the late 1990s telecom investment cycle.
AI has moved beyond product narrative. It is now a capital-intensity cycle, and once a technology becomes a capital-intensity cycle, the questions change. Not whether the technology is real, but who governs the spend. Who earns the return. Who owns the bottleneck. Who owns the customer. Who is converting capability into durable advantage. Who is simply spending to stay in the story.
When Everything Is Possible
The deeper lesson is not limited to AI. It applies any time the upside appears large enough that organizations begin treating possibility as priority. That happens more often than people admit.
Every operator has seen some version of it. A digital transformation that swallows the IT roadmap. A new commercial platform that becomes a five-year program nobody can shut down. A refinery optimization project that grows into an enterprise initiative. An ERP implementation that turns into a generational capital outlay. A pricing system rebuild. An acquisition integration that loses its scope. A supply-chain redesign that keeps adding workstreams.
The pattern is almost always the same. The strategic logic is sound. The opportunity is real. The business case is compelling. The urgency is high. The organization mobilizes.
Then the scope expands.
More stakeholders want in. More use cases get added. More vendors appear. More pilots launch. More capital is requested. More dependencies emerge. The project becomes too broad to manage cleanly and too important to challenge directly.
At that point, the risk is no longer the idea. The risk is governance. Capital projects fail less often because the original ambition was irrational. They fail more often because the organization loses control of scope, sequencing, accountability, and return discipline.
The question is not whether AI deserves capital. It does. The question is whether companies are governing AI capital like real capital, because that is what it is now.
Data centers, chips, power, cooling, cloud capacity, software architecture, model development, enterprise workflow redesign, training, cybersecurity, and change management are not innovation expenses dressed up as capex. They are real capital commitments with real opportunity cost. Every dollar allocated to AI is a dollar not allocated to safety, maintenance, customer service, R&D, balance-sheet flexibility, leadership development, or core execution.
When everything is possible, governance becomes strategy. Without it, possibility becomes sprawl, and sprawl is where capital goes to disappear.
Booking the Future
The architecture in that opening disclosure follows an old pattern.
Enron’s distortion was not optimism. It was the conversion of expected future contract economics into reported present value before the cash had arrived. Future profit was pulled forward.
The current AI cycle is not doing that in the same accounting sense. The hyperscalers are real companies with real cash flow, real assets, and real customers. This is not Enron in any literal sense.
But valuation can produce a similar psychological effect. When markets reward companies today for future AI optionality, they are discounting a future that has not yet been competitively, operationally, or financially settled. That is normal in equity markets. The risk comes when that future gets counted too confidently, too broadly, or multiple times across overlapping participants.
Optionality is not cash flow. Partnerships are not profits. Capex is not return. Exposure is not advantage.
The investor’s job is to ask the basic governance questions without flinching. Is the demand organic? Is the customer economically independent? Would the customer buy the same capacity absent the investment? Is the contract profitable after the capex required to serve it? How concentrated is the exposure? What happens if funding markets tighten?
These are quality-of-earnings questions. They are also quality-of-governance questions.
The risk is not that any single arrangement is wrong. The risk is that markets begin counting the same future value multiple times, in multiple companies, on the assumption that the future has already arrived.
Capex Is the Test
The spend is now too large to treat casually. When a single hyperscaler raises 2026 capex guidance to $190 billion, the analysis changes. When the four largest hyperscalers approach $725 billion in aggregate, it changes again.
Pivotal Research projects Alphabet’s free cash flow to plummet almost 90% this year to $8.2 billion from $73.3 billion in 2025. Morgan Stanley analysts project Amazon free cash flow turning negative by almost $17 billion in 2026, while Bank of America analysts see a deficit of $28 billion. Barclays estimates Microsoft free cash flow will slide by 28% this year before recovering in 2027. Meta, which raised the upper end of its capex range by $10 billion, just placed up to $25 billion in 40-year debt to help fund the spend. S&P assigned the new debt investment-grade and maintained its stable outlook, while noting that Meta’s massive investment in AI was “starting to affect credit metrics.”
That last phrase is worth re-reading.
This is no longer purely an equity-growth story. It is a balance-sheet story. A credit story. An infrastructure story. A capital-allocation story. That changes what the analysis has to do.
Is capex growing faster than incremental gross profit? Is AI revenue growing faster than depreciation? Are gross margins expanding or compressing? Is free cash flow improving or weakening? Are useful-life assumptions realistic? Are companies funding AI from operating cash flow or from debt?
These are not anti-innovation questions. They are capital-discipline questions. They are the questions any responsible operator asks before committing capital that cannot easily be redeployed if the original thesis is wrong.
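The ratios behind those questions can be reduced to a simple screen. The sketch below is illustrative only; the function name and every dollar figure are hypothetical, not drawn from any company's filings.

```python
# Illustrative capital-discipline screen. All amounts are hypothetical
# (in billions); each argument is a (prior_year, current_year) pair.

def discipline_screen(capex, gross_profit, ai_revenue, depreciation, fcf):
    """Return True/False flags for the basic capital-discipline checks."""
    def growth(pair):
        prior, curr = pair
        return (curr - prior) / prior

    return {
        # Is capex growing faster than incremental gross profit?
        "capex_outpacing_gross_profit": growth(capex) > growth(gross_profit),
        # Is AI revenue growing faster than depreciation?
        "ai_revenue_outpacing_depreciation": growth(ai_revenue) > growth(depreciation),
        # Is free cash flow improving or weakening?
        "fcf_improving": fcf[1] > fcf[0],
    }

# Hypothetical company: capex doubles, gross profit grows 15%,
# AI revenue grows 80%, depreciation grows 50%, and FCF falls.
flags = discipline_screen(
    capex=(100, 200),
    gross_profit=(120, 138),
    ai_revenue=(20, 36),
    depreciation=(30, 45),
    fcf=(60, 25),
)
print(flags)
```

In this made-up case the AI revenue check passes while the capex and free-cash-flow checks flag, which is exactly the mixed picture the questions above are designed to surface.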
GPUs Are Not Dams
Not all capex is created equal.
A refinery is a 40-year asset. A hydroelectric dam can run for a century. A bridge built well today is still moving traffic when our grandchildren are working. The depreciation curve on long-lived industrial capital is generous because the asset earns over decades.
AI capex does not work that way.
Two-thirds of Microsoft’s capex this quarter went to short-lived assets, primarily GPUs and CPUs. These are not 20-year assets on a steady depreciation curve. They are short-lived hardware on a generation cycle, where today’s leading-edge chip becomes tomorrow’s stranded inventory faster than most balance sheets are designed to absorb.
That changes the math.
When you fund a $190 billion capex budget partly through long-dated debt, you are not just betting on the technology. You are betting that the return will materialize faster than the asset becomes obsolete. You are betting that compute pricing holds while supply expands. You are betting that customer demand keeps pace with your depreciation schedule.
The hyperscalers may well win that bet. Their cash flow, customer base, and pricing power give them real ability to absorb shorter useful lives. But the bet exists, it is large, and it deserves to be named.
Capital that depreciates fast must earn fast.
That is not a critique of AI. It is an accounting observation. And it is one of the cleanest tests of whether the spend is being governed like real capital.
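The arithmetic can be made concrete with a toy annuity calculation. The asset costs, useful lives, and hurdle rate below are assumptions chosen for illustration, not figures from any filing.

```python
# Toy illustration of why short-lived capital must earn fast.
# All numbers are hypothetical.

def required_annual_return(cost, useful_life_years, hurdle_rate):
    """Level annual cash flow needed to recover `cost` over its useful
    life while earning `hurdle_rate` -- an ordinary annuity payment."""
    r = hurdle_rate
    n = useful_life_years
    return cost * r / (1 - (1 + r) ** -n)

# A $1B dam-like asset over 40 years vs. a $1B GPU fleet over 4 years,
# both at an assumed 10% hurdle rate.
long_lived = required_annual_return(1_000, 40, 0.10)   # ~$102M per year
short_lived = required_annual_return(1_000, 4, 0.10)   # ~$315M per year

print(f"40-year asset: ${long_lived:.0f}M per year")
print(f" 4-year asset: ${short_lived:.0f}M per year")
```

Same capital outlay, roughly triple the annual cash flow required before the asset has paid for itself. That is the sense in which a GPU is not a dam.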
The Operator’s Concern
There is also a management failure mode that maps cleanly onto this cycle.
If management teams come to believe the highest-return incremental dollar is always an AI dollar, capital and attention flow toward AI by default. In some places that is right. In many it is not. Without discipline, AI becomes the answer before the question is clear, and exposure becomes a substitute for strategy.
That is the governance version of the productivity-and-headcount mistake I have written about before in The AI Dividend. Reducing labor cost is not the same as expanding capability. Activity is not the same as advantage. The market often cannot tell the difference in the first quarter. It tells the difference eventually.
The Same Test, at Smaller Scale
The hyperscalers are the visible case because the numbers are public, the capex is enormous, and the earnings cycle forces transparency. But the more dangerous version of the same test is happening one ring out.
Every company in every sector is now running some version of this. The bank with an enterprise AI strategy. The manufacturer integrating AI into maintenance and quality. The retailer rebuilding its pricing engine. The insurer overhauling underwriting. The healthcare system deploying clinical decision support. The professional services firm trying to figure out what AI does to its hourly model.
These are real initiatives, and many will create real value. But they face the same governance risk as the hyperscaler buildout, with one important difference. The hyperscalers can absorb the spend. They have the balance sheets, the customer base, the cash flow, and the optionality to fund a multi-year capex cycle and survive substantial write-downs.
Most companies cannot.
For companies in non-tech sectors, AI investment is real capital allocation against a tighter budget, with less margin for error, in environments where the use cases are less mature, the vendors less vetted, and the internal capability to evaluate the spend still being built.
The savings story is usually the entry point. AI promises to reduce headcount, accelerate cycles, lower error rates, and improve margins. Some of that is real. But the savings in the back half of the implementation often come with frontloaded costs that do not get fully accounted for. Consulting fees. Integration work. Software licenses. Change management. Training. Severance. Productivity loss during transition. And the lost institutional capability that walked out the door.
The mistake is treating cost shift as value creation. Cutting headcount to pay for software licenses is not a productivity gain; it is a rearrangement of the P&L. Reducing labor cost is not the same as growing the business, and replacing one expense line with another is not capital efficiency. It is cost shifting dressed up as transformation.
The discipline is to insist on the harder version of the question. What measurable improvement did we earn? Did cycle time actually fall? Did customer experience actually improve? Did margin expansion compound? Did the freed capacity get redeployed to growth, or did it simply leave? Are we delivering on the promise, or only promising it?
That is what manifesting the promise means. Real returns. Realized. Measured. Defended. Not announced. Not modeled. Not assumed.
Every company is now running its own AI cycle. The ones that come out of it stronger will be those that govern the spend like real capital, prove the returns, and resist the temptation to confuse activity with progress.
That discipline is not unique to AI. It never was. It is the discipline of effective management. Capital allocation. Resource stewardship. The honesty to do hard accounting on your own initiatives, and the willingness to admit when an experiment did not earn its keep.
AI is the test of the moment. The discipline is the test of the operator.
What Good Capital Governance Actually Does
The mature phase of this cycle will require more than excitement. It will require governance. Not bureaucracy. Not slow committees. Not paralysis dressed up as prudence. Speed is not the enemy of discipline. Lack of structure is.
Good capital governance does five things.
It frames investment as a portfolio. AI initiatives are not isolated experiments. They are capital projects that compete with one another and with everything else the company could fund. Treating them as a portfolio forces honest tradeoffs.
It uses real stage gates. Not bureaucratic checkpoints. Real ones. Define the problem. Prove the use case. Scale the deployment. Measure the return. Decide what comes next. Each gate is a chance to confirm or kill.
It commits to incremental gains. Systemic transformation usually fails. Compounded incremental improvement usually succeeds. The discipline of asking what the next measurable gain looks like protects against the sprawl of trying to AI-enable everything at once.
It assigns operational ownership. Every funded initiative needs a person whose career is tied to delivering the return. Without ownership, every project is everyone’s, which is to say no one’s.
It includes the courage to stop. The hardest discipline in any capital cycle is killing a project that has support, momentum, and political weight. Companies that cannot stop weak initiatives cannot allocate capital well, regardless of how many they start.
The questions a good governance process answers are practical. What problem are we solving? What is the expected return? What is the cost of being wrong? What has to be true for this to work? What are we not funding because we are funding this? When do we stop?
That last question is the one most organizations skip. In a hype cycle, companies are rewarded for starting things. The best operators are also willing to stop them. That is capital discipline.
Where the Discipline Lives for Investors
The investor’s posture follows from the same logic. The answer is not to avoid AI. That would be as undisciplined as buying everything with AI exposure. The better posture is selectivity, anchored to capital governance rather than narrative.
There is opportunity in bottleneck assets. Power, grid equipment, transformers, switchgear, cooling, specialized real estate, transmission, water. These are the constraints behind the spend, and the constraints often get priced last.
There is opportunity in disciplined hyperscalers that can show real return on capital, customer demand, utilization visibility, and the ability to slow or redirect spend if the cycle turns.
There is opportunity in workflow owners that already control the enterprise system of record and can embed AI into how work actually gets done.
There is opportunity in proprietary data owners. Models can commoditize. Unique, permissioned, high-quality data is harder to replicate.
And there is real opportunity in real-economy adopters that use AI to improve an already strong business. Energy. Industrials. Logistics. Manufacturing. Insurance. Healthcare. Distribution. The best AI investment may not be an AI company at all. It may be an industrial operator using AI to improve asset utilization, maintenance, forecasting, pricing, safety, or working capital.
That is where AI becomes operating leverage. And operating leverage paired with discipline is how durable value gets built.
A Responsible Capitalist View
Capitalism works when capital moves toward productive opportunity, when innovation is rewarded, when risk-taking is encouraged, and when weak projects are allowed to fail.
But capitalism also requires discipline. It requires distinguishing value creation from valuation expansion. It requires accepting that real technology can produce poor investment returns in parts of the first wave. It requires the humility to recognize that being right about a future does not guarantee being paid for it.
The responsible capitalist welcomes AI. The responsible capitalist also demands evidence.
Real returns. Real productivity. Real customer value. Real cash conversion. Real resilience. Real strategic clarity.
Not exposure. Not announcement. Not optionality. Not narrative.
There will always be money in the high-risk phase. Some operators are very good at trading volatility, ambiguity, and momentum. Some will do well in this cycle. But the broader system needs more than that. It needs durable companies, productive investment, resilient infrastructure, human capability, and disciplined capital allocation. It needs innovation without hollowing out the enterprise.
That is not anti-growth; it is grounded capitalism.
The Point Taken
The lesson from this earnings cycle is not that AI should be dismissed. It is that AI has moved from a technology narrative into a capital-allocation cycle. That changes the standard of proof.
Markets no longer need only to know who is exposed to AI. They need to know who can govern the investment. Who can prioritize. Who can sequence. Who can measure return. Who can stop weak projects. Who can prevent sprawl. Who can turn capability into operating leverage. Who can avoid mistaking activity for strategy.
That discipline is not new, and it is not unique to AI. Every capital cycle in history has rewarded the operators who could distinguish exposure from return, activity from advantage, and announcing the future from earning it. AI is the current test of that distinction. It will not be the last.
The early winners in any cycle are often those with exposure.
The durable winners are those with discipline.
The right posture is not rejection. It is disciplined participation.
Fund the future.
Govern it like capital.
Manifest the return.
Do not book it before it is earned.