
KEY POINTS
- Microsoft, Alphabet, Meta, and Amazon collectively lifted 2026 capital expenditure guidance to a combined $650 billion to $700 billion, the largest single-year build cycle in tech history.
- Google Cloud revenue grew 63% to roughly $20 billion and Microsoft's AI business now runs at a $37 billion annualized rate, finally giving the spend a credible revenue offset.
- Watch Nvidia's May 20 print and the second-tier networking and power names — Astera Labs, Vertiv, Eaton — for the next leg of the AI infrastructure trade.
The four largest US hyperscalers told investors on Wednesday night they will spend a combined $650 billion to $700 billion on artificial intelligence infrastructure in 2026, the most extreme single-year build cycle the technology industry has ever attempted. Alphabet raised its capex guide to $180 billion to $190 billion from a prior $175 billion to $185 billion. Meta lifted its 2026 range to $125 billion to $145 billion from $115 billion to $135 billion. Microsoft committed to roughly $190 billion for the year, and Amazon's full-year capex is now running near $200 billion, according to CNBC's earnings recap. The S&P 500 closed Thursday at a record 7,209.01, capping its best month since November 2020, with the Nasdaq up 15.3% in April alone.
The market's reaction was uneven, and that dispersion is the most important signal in the print. Alphabet shares jumped on the report. Microsoft fell roughly 4% after hours and stayed soft into Thursday's session. Meta and Amazon traded in line. The split tells you exactly how investors are scoring the spending: the names showing AI revenue keeping pace with AI capex are getting paid, and the names that look like they are still building ahead of demand are being put in the penalty box.
The Capex-to-Revenue Math Has Shifted
For most of 2024 and 2025, the bear case on hyperscaler AI was that the capex line was racing ahead of revenue. That gap is now closing in real numbers. Google Cloud revenue grew 63% year over year to roughly $20 billion in the quarter, with operating margin expansion of more than 800 basis points. Sundar Pichai told analysts the unit is now consistently capacity-constrained on TPU v6 inventory and that the new Axion CPU plus Ironwood TPU configuration is shipping to customers like Anthropic, Salesforce, and a "top-three" Wall Street bank. Microsoft's AI business — defined as Azure AI services plus Copilot revenue across Office, GitHub, Dynamics, and Security — now runs at a $37 billion annualized rate, up 123% year over year. CFO Amy Hood said the unit moved from a roughly $13 billion run rate to $37 billion in twelve months, as recapped by Fortune.
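For readers less familiar with the run-rate convention used in these disclosures, an "annualized rate" is simply the latest quarter's revenue multiplied by four. A minimal sketch of that arithmetic (the quarterly figure below is an implied illustration, not a disclosed number):

```python
def annualized_run_rate(quarterly_revenue_bn: float) -> float:
    """Annualize one quarter's revenue: the standard 'run rate' convention."""
    return quarterly_revenue_bn * 4

# A $37B annualized rate implies roughly $9.25B of revenue in the quarter.
implied_quarterly = 37 / 4
print(implied_quarterly)                       # 9.25
print(annualized_run_rate(implied_quarterly))  # 37.0
```

The convention matters when comparing the four names: a run rate extrapolates a single quarter forward, so it moves faster in both directions than trailing-twelve-month revenue.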
Meta is the harder read. Susan Li reiterated that the capex bump is funding Llama 5 training infrastructure, the new Reality Labs Orion datacenter campus in Texas, and inference capacity for the Meta AI assistant, which crossed 1 billion monthly active users in March. The company did not give a hard number on AI revenue, only that ad targeting improvements driven by the Andromeda model were responsible for "a meaningful share" of the 21% ad revenue growth in the quarter. That ambiguity is what cost the stock its overnight pop.
Amazon told a more measured story. Andy Jassy said AWS revenue grew 22%, with AI services growing "well over 100%," and committed to roughly $200 billion in 2026 capex with the explicit caveat that the spend is "demand-pulled, not faith-based." That framing — only build what you can fill — is the one analysts are now using to triage the four names.
The Chipmaker Read-Through
For the semiconductor complex, the capex update is straightforwardly bullish in dollar terms but more nuanced in mix. The combined $650 billion to $700 billion print includes networking, real estate, power, cooling, and labor — not just silicon. Industry estimates from JPMorgan and Morgan Stanley put the AI silicon component at roughly 35% to 40% of total hyperscaler capex, which implies $230 billion to $280 billion of chip-related spend. Nvidia, which reports earnings on May 20, will capture the majority. But the more interesting trade is downstream. Astera Labs, the connectivity-fabric play, is up 41% year-to-date. Vertiv and Eaton, the power and cooling names, are both up more than 25% in April. The VanEck Semiconductor ETF (SMH) returned 21.91% in April, its largest monthly move since November 2003, per Benzinga's flow data.
The custom-silicon trade is also alive again. Alphabet's TPU v6 production ramp, Microsoft's Maia 200, and Amazon's Trainium 3 all moved from disclosed-existence to disclosed-revenue in this print cycle. Marvell, which co-designs custom ASICs for Microsoft and Amazon, traded up 6% on Thursday. Broadcom, which co-designs Google's TPU and a new program with Meta, was up 4%.
The Risk That Did Not Show Up
The one risk the bears flagged going in — that OpenAI's reported revenue shortfall would force a hyperscaler to revise capex lower — did not materialize. Microsoft, OpenAI's largest backer, reaffirmed the spend. That is partly because the marginal customer for hyperscaler AI capacity is no longer just OpenAI; it is now Anthropic, xAI, the major banks, the federal government, and a long tail of enterprise GenAI deployments. The concentration risk has been diluted, which is what makes the spend defensible even if any single frontier-model lab hits an air pocket.
What to Watch Next
May 20 is the date that matters. Nvidia's Q1 print will be the first hard look at how the $650 billion to $700 billion of hyperscaler commitments translates into revenue for the silicon supplier holding 80% market share. Consensus is calling for $52 billion in revenue and $0.92 in EPS. Anything that confirms data center growth above 70% year over year keeps the trade intact; anything that shows a slowdown, particularly in the China outlook or the Blackwell Ultra ramp, resets the entire second-tier infrastructure complex, from SMH to Vertiv to Astera, lower. Until then, the capex tape is the trade.

