The four largest technology companies in the world (Microsoft, Meta, Alphabet, and Amazon) are collectively expected to spend more than $470 billion on capital expenditures in 2026, up from roughly $350 billion the year before. That $120 billion year-over-year increase, reported by CNBC in late January, is not a rounding error. It is a coordinated, industry-wide bet that the companies pouring the most concrete and copper into artificial intelligence infrastructure today will be the ones capturing the most economic value a decade from now. The question Wall Street is starting to ask, with increasing volume, is whether that bet is sound, or whether the technology sector is laying the foundations for a very expensive miscalculation.
To put the scale of that number in terms that don't require a finance degree: $470 billion is roughly the annual economic output of Norway. It is roughly six times what the United States spends on its entire federal education budget in a given year. Spread across a calendar year, it amounts to more than $1.2 billion per day, every day, in data centers, networking equipment, AI chips, and the energy infrastructure to power them. The four companies are effectively building a second internet, not to connect people to information, but to train machines to generate it.
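For readers who want to sanity-check those comparisons, here is a minimal back-of-envelope sketch. The dollar figures are the article's round numbers, not exact filings:

```python
# Back-of-envelope check of the capex figures cited above.
# All amounts are the article's rounded estimates, in US dollars.

capex_2026 = 470e9   # combined 2026 capex estimate for the four companies
capex_2025 = 350e9   # rough prior-year combined figure

# Year-over-year increase cited as roughly $120 billion
yoy_increase = capex_2026 - capex_2025

# Average daily spend across a 365-day calendar year
per_day = capex_2026 / 365

print(f"Year-over-year increase: ${yoy_increase / 1e9:.0f} billion")
print(f"Average daily spend: ${per_day / 1e9:.2f} billion per day")
```

Running the arithmetic confirms the framing in the text: an increase of about $120 billion year over year, and an average outlay of just under $1.3 billion per day.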
Microsoft: Azure Growth and the Copilot Adoption Gap
Microsoft enters 2026 carrying the most visible AI story in the technology industry. Its multi-billion-dollar partnership with OpenAI positioned it as the company that first brought generative AI (artificial intelligence that produces text, images, code, and other content rather than simply classifying or retrieving it) into mainstream enterprise software. But the financial data coming into earnings season told a more complicated story.
Analysts expected Microsoft's capital expenditures for fiscal year 2026, which ends in June, to reach approximately $99 billion. That figure alone would place the company among the largest infrastructure investors in human history. In the second fiscal quarter, Microsoft reported $36.25 billion in capex, a 60 percent increase year over year. The company's Azure cloud division, which competes with Amazon Web Services and Google Cloud, was expected to post revenue growth of 37 percent at constant currency, a number that, in isolation, sounds strong. In context, it reflects a business growing fast enough to justify the spending, but not so fast as to silence skeptics entirely.
The more pointed concern emerged from enterprise adoption data. According to analyst surveys, more than half of organizations that have purchased Microsoft 365 Copilot (the AI assistant embedded in Word, Excel, Teams, and Outlook) are licensing it for 10 percent or fewer of their Microsoft 365 users. Think of that as a company buying a gym membership for 1,000 employees but only 90 of them ever walking through the door. The technology is present. The utilization is not.
This matters because Microsoft's AI revenue thesis depends not just on selling Copilot licenses but on those licenses expanding across the user base as workers discover genuine productivity gains. If the product lands in the hands of a small early-adopter cohort and fails to spread, the high per-seat pricing becomes difficult to sustain against competing products that may offer comparable functionality at lower cost. Microsoft has not publicly disputed these adoption figures, but CEO Satya Nadella has pointed to expanding developer tooling, Azure AI services, and GitHub Copilot as evidence that the enterprise AI opportunity extends well beyond the M365 suite.
Meta: The $110 Billion Infrastructure Vision
Mark Zuckerberg has made no effort to hide his ambitions. Meta lifted its 2025 capital expenditure guidance to a range of $70 to $72 billion, a figure that, by itself, would rank as one of the largest single-company infrastructure investments in any given year. For 2026, FactSet consensus estimates put Meta's capex at approximately $110 billion, while Goldman Sachs has penciled in $125 billion. The wide range reflects genuine uncertainty about how aggressively Zuckerberg will push forward on his stated goal of building artificial general intelligence (systems that can perform any cognitive task a human can perform) before the end of the decade.
"We're seeing the returns in the core business," said Mark Zuckerberg, CEO of Meta, in late January, pointing to advertising revenue growth that has consistently exceeded analyst expectations over the past two years.
That framing is important: Meta is not funding its AI ambitions through debt or dilution. It is funding them through an advertising business that generated more than $160 billion in revenue in 2025, powered in part by AI recommendation systems that have made Facebook and Instagram feeds more engaging (and therefore more valuable to advertisers) than they were four years ago.
On the model development side, Meta completed the acquisition of Scale AI (the data-labeling and AI evaluation company) for $14.3 billion, adding significant training infrastructure and human evaluation capacity to its AI development pipeline. The company is also reportedly working on a new large language model internally codenamed "Avocado," with capabilities intended to compete directly with OpenAI's frontier models. Scale AI's data operations give Meta a meaningful advantage in generating the high-quality training datasets that increasingly differentiate leading models from the rest of the field.
What distinguishes Meta's position from its peers is that its AI investment has two distinct payoff mechanisms: short-term returns through improved advertising algorithms, and long-term returns through whatever emerges from its push toward artificial general intelligence, whether that is consumer products, enterprise licensing, or something not yet clearly defined.
Alphabet: The Google AI Comeback
For much of 2023 and early 2024, Alphabet found itself in the uncomfortable position of being the company that invented much of modern AI but appeared to be losing the public narrative to OpenAI. That story has shifted considerably. Alphabet lifted its 2025 capital expenditure guidance to a range of $91 to $93 billion and guided investors toward a "significant increase" in 2026, with analyst consensus placing that figure above $115 billion.
The strategic picture has also clarified in ways that make Alphabet's position appear stronger than its early stumbles suggested. The company has active commercial relationships with both OpenAI and Anthropic (the two leading independent AI laboratories), giving it distribution and revenue exposure to AI products it did not build itself, while simultaneously developing its own Gemini model family. The deal with Apple, through which Google's Gemini technology powers features inside Apple's Siri, is particularly significant: it puts Gemini in the hands of more than one billion active iPhone users, providing both revenue and the kind of real-world usage data that helps refine large language models faster than synthetic benchmarks can.
Last year, Alphabet's stock posted its best annual performance since 2009, a signal that investors have largely accepted the company's repositioning from a search-and-advertising incumbent to an AI-infrastructure and cloud business with advertising still generating substantial cash. Google Cloud, which competes with Azure and AWS, has been the fastest-growing segment of the business for several consecutive quarters. The challenge for Alphabet is sustaining that growth while also protecting its core search advertising revenue from AI-powered search alternatives, including, somewhat paradoxically, products built on top of its own Gemini API.
Amazon: The $125 Billion Infrastructure Commitment
Amazon's capital expenditure commitment for 2026 stands at $125 billion, the highest absolute number among the four hyperscalers. Amazon Web Services, which provides cloud computing infrastructure to a significant portion of the global internet, has become the platform of choice for organizations building AI applications, a position that generates both revenue and a flywheel effect, since every AI workload run on AWS produces data that helps Amazon understand where to build more capacity.
Amazon's relationship with OpenAI took a notable turn in January. AWS signed a $38 billion deal with OpenAI (the company Microsoft is most closely associated with) to provide cloud computing services. The deal does not end OpenAI's relationship with Microsoft Azure but signals that OpenAI, like most large enterprises, intends to run workloads across multiple cloud providers rather than committing exclusively to one. Amazon is also reportedly considering a direct investment in OpenAI of up to $10 billion, which would add to its existing, long-standing backing of Anthropic, the AI safety company that most recently raised at a valuation of approximately $350 billion.
The Anthropic relationship is worth examining closely. Amazon has invested heavily in the company founded by former OpenAI researchers Dario Amodei and Daniela Amodei, with Anthropic's Claude models available natively through AWS's Bedrock platform. That investment was made in a political climate that has since shifted considerably: several of Amazon's most senior executives were visible supporters of the Trump administration's return to power in 2024, attending the inauguration and making substantial political donations. The Trump administration has generally positioned itself as skeptical of AI safety regulation. Backing Anthropic (a company whose founding mission is explicitly about reducing the risks of powerful AI) sits in tension with that political alignment, a contradiction that has not gone unnoticed in Washington or in the AI research community. For more on how big tech has aligned with the administration's AI agenda, see Big Tech backs Anthropic and Trump AI policy.
The AI Bubble Question Nobody Wants to Answer
The word "bubble" has an uncomfortable history in technology coverage. It was overused in 2021 and 2022, applied to everything from NFTs to social media startups in ways that diluted its analytical usefulness. But the combination of numbers now visible in AI infrastructure spending makes the question impossible to avoid entirely.
OpenAI alone has accumulated financial commitments (including the $500 billion Stargate project announced by President Trump in January) that sum to approximately $1.4 trillion. That number is roughly comparable to the annual gross domestic product of Australia. The company that produced ChatGPT, which has been commercially available for approximately three years, now carries financial obligations larger than the market capitalization of all but a handful of companies in the world. Whether the revenue projections that justify those commitments will materialize, and on what timeline, remains genuinely unknown.
The structural concern is straightforward. Think of the AI infrastructure buildout as analogous to the fiber-optic cable boom of the late 1990s. Telecommunications companies spent hundreds of billions laying cable across the ocean floors and through cities worldwide, correctly anticipating that internet traffic would grow enormously. What they miscalculated was the pace. Traffic did grow enormously, but it took fifteen years, not three. The companies that borrowed to build at the peak of the excitement did not survive long enough to see their infrastructure fully utilized. The ones that survived were those with the balance sheet strength to weather the gap between investment and return.
Microsoft, Meta, Alphabet, and Amazon all have that balance sheet strength. None of them borrowed to fund their AI capex; all four are financing their buildouts from operating cash flow. That is a material difference from the leveraged infrastructure bets of the dot-com era. But it does not resolve the fundamental question of whether demand for AI compute will grow fast enough to justify $470 billion in annual infrastructure spending across just four companies, let alone the additional billions being committed by Oracle, CoreWeave, and the hyperscalers of other regions. Defense-sector AI spending also continues to surge: see Shield AI raises $2B as defense tech valuations climb.
What the Skeptics Are Watching
"The buildout phase always looks excessive at the beginning. The question is whether the use cases are real. And increasingly, the evidence says they are," said Dan Ives, Managing Director at Wedbush Securities, in January.
Dan Ives has been among the most consistent bulls on AI infrastructure spending, arguing that enterprise AI adoption is still in its earliest innings and that the companies building the picks-and-shovels of the AI era will see demand justify their capital commitments within three to five years.
The skeptic camp, represented by analysts including Bernstein's Mark Moerdler, points to the Copilot adoption data as a leading indicator of concern. If enterprises are licensing AI tools but not deploying them broadly, it suggests that the productivity gains from generative AI are harder to realize in practice than in demonstration. Moerdler has noted that the gap between AI enthusiasm at the C-suite level and actual workflow integration at the user level remains large across most industries, and that closing that gap requires not just better software but significant change management investment that most organizations have not yet made.
The Apple situation adds another layer of complexity to the picture. Apple (which released its Apple Intelligence suite in late 2024 to a reception that was widely described as underwhelming) pushed back a significant Siri AI revamp that had been anticipated for early 2026. The iPhone 17 launch carried positive momentum, but the AI software that was supposed to differentiate it remains behind schedule. Apple's willingness to integrate Google's Gemini into Siri, rather than relying solely on its own models, is either a pragmatic acknowledgment of where Google's capabilities stand or a sign that Apple's internal AI development is moving slower than its hardware roadmap requires, or both.
Tesla presents a different kind of AI story entirely. The company's vehicle deliveries fell 8.6 percent in 2025, a decline CEO Elon Musk attributed in part to a gap between the current product lineup and the autonomous vehicles he has long promised. Musk's focus has shifted visibly toward robotaxis and Optimus humanoid robots, both of which depend on AI systems that Tesla is developing internally on new chips being manufactured in partnership with Samsung and TSMC. Tesla's expected capex of $11 billion in 2026 is a fraction of its hyperscaler peers, but its AI ambitions are, if anything, more sweeping in scope, touching physical robotics in a way that cloud software does not.
Can $470 Billion Pay Off?
The honest answer is that the return timeline is genuinely uncertain, and the hyperscalers themselves are not claiming otherwise. What they are claiming (and what the financial data supports, at least partially) is that AI infrastructure is already generating revenue, not just promise. Azure's 37 percent growth, AWS's sustained cloud expansion, Google Cloud's accelerating enterprise contracts, and Meta's advertising revenue improvement are all, to varying degrees, attributable to AI investments made in previous years.
The short version: the first wave of AI investment is producing measurable returns. The interesting part is whether the second wave, which is five times larger, can do the same.
What the next eighteen months will reveal is whether enterprise AI adoption moves from early adopters to broad organizational deployment, the critical transition that would validate the demand side of the $470 billion equation. The Copilot adoption data suggests that transition is not happening automatically. The AWS and Azure revenue data suggests it is not failing entirely. Somewhere between those two data points lies the actual trajectory of the AI economy, and the four largest technology companies in the world have staked an amount of money equivalent to Norway's annual output on getting that trajectory right.
The fiber-optic analogy is instructive but not determinative. The companies burying cable in the 1990s were right about where the world was going. The ones that survived long enough to see it were the ones that did not overextend. Whether the hyperscalers of 2026, funded by their own cash flows and with hundreds of millions of paying customers already using AI products, are in a position analogous to the survivors or the casualties of that earlier era is the question that earnings season will begin, but almost certainly not finish, answering.