Broadcom disclosed Monday that it has signed a multi-year agreement with Google to design and supply future generations of the search giant's custom artificial intelligence processors, while simultaneously expanding its infrastructure partnership with Anthropic. The deals, detailed in a securities filing with the SEC, sent Broadcom shares up 3% in extended trading and underscore the accelerating race among hyperscalers and AI labs to lock down custom silicon capacity for the next half-decade.
What the Broadcom-Google-Anthropic Deal Actually Covers
The core of the announcement is straightforward but substantial. Broadcom will continue designing and manufacturing future versions of Google's Tensor Processing Unit (TPU) chips, the custom AI processors that power everything from Google Search ranking to Gemini model training. The agreement extends through 2031, giving both companies a stable planning horizon for what has become the most capital-intensive segment of the semiconductor industry.
On the Anthropic side, the deal gives the Claude developer access to approximately 3.5 gigawatts of computing capacity drawn from Google's TPU infrastructure. To put that in perspective, 3.5 gigawatts is roughly equivalent to the electricity output of three large nuclear power plants, dedicated entirely to running AI workloads. That capacity will be housed primarily in U.S.-based data centers, according to Anthropic's announcement.
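The nuclear-plant comparison holds up to a back-of-the-envelope check. As a rough sketch (the assumed per-reactor output of about 1.15 gigawatts is an illustrative figure, not from the filing):

```python
# Sanity-check the "three large nuclear plants" comparison.
# Assumption (not from the article): a large nuclear reactor
# delivers roughly 1.15 GW of electrical output.
deal_capacity_gw = 3.5
reactor_output_gw = 1.15  # assumed typical large reactor

reactors_needed = deal_capacity_gw / reactor_output_gw
print(f"{reactors_needed:.1f} large reactors")  # ~3.0
```

At that assumed output, 3.5 gigawatts works out to almost exactly three large reactors' worth of dedicated compute power.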
Krishna Rao, Anthropic's chief financial officer, framed the agreement as necessary for meeting explosive demand. "We are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development," Rao stated in a company blog post.
The filing itself did not include a specific dollar figure for either deal. However, Mizuho analysts led by Vijay Rakesh have estimated that Broadcom could generate $21 billion in AI revenue from the Anthropic relationship alone in 2026, rising to $42 billion in 2027. Those projections, published after Broadcom's most recent earnings call, help explain why investors reacted positively despite the absence of a headline price tag.
Anthropic's Growth Numbers Tell the Bigger Story
The infrastructure deal only makes sense in the context of how quickly Anthropic has scaled. The company's annualized revenue now exceeds $30 billion, up from approximately $9 billion at the end of last year. That is a more-than-threefold increase in roughly four months, a growth rate that would be difficult to believe if Anthropic had not also disclosed supporting metrics: the company now counts over 1,000 business clients spending more than $1 million annually, a figure that has doubled in just two months.
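To see how aggressive that trajectory is, the figures in the article imply a compound monthly growth rate. A minimal sketch, assuming the roughly four-month window between the two revenue readings:

```python
# Implied compound monthly growth from the article's figures:
# ~$9B annualized revenue at year-end to ~$30B about four months later.
start_b, end_b, months = 9.0, 30.0, 4

monthly_growth = (end_b / start_b) ** (1 / months) - 1
print(f"{monthly_growth:.0%} per month")  # roughly 35%
```

A sustained growth rate of roughly 35% month over month is the kind of curve that makes a 3.5-gigawatt capacity commitment look less speculative.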
> "For Anthropic, we are off to a very good start in 2026." — Hock Tan, CEO of Broadcom, on the company's Q1 2026 earnings call
Part of that acceleration traces back to a public dispute between Anthropic and the Pentagon, which paradoxically boosted the company's consumer profile. Claude, Anthropic's AI assistant, became the top free app in the U.S. Apple App Store during that period. Enterprise demand has followed consumer awareness, and the infrastructure to serve that demand is exactly what this deal is designed to provide.
On Broadcom's most recent earnings call, CEO Hock Tan confirmed that the company was already providing 1 gigawatt of compute from Google's TPUs for Anthropic's workloads. "For 2027, this demand is expected to surge in excess of 3 gigawatts of compute," Tan said, previewing the expansion that the Monday filing now formalizes. The jump from 1 gigawatt to 3.5 gigawatts represents the kind of infrastructure ramp that typically takes years to plan and execute, compressed into a timeline that reflects how aggressively AI labs are competing for capacity.
Why Custom Chips Matter More Than Raw GPU Count
The AI industry runs primarily on GPUs from Nvidia, and that will not change overnight. But the Broadcom deal highlights a parallel trend that is gaining momentum: the move toward custom silicon tailored to specific AI workloads. Google's TPUs are not general-purpose chips. They are designed from the ground up for the matrix math operations that dominate machine learning training and inference. Broadcom's role is to translate Google's chip designs into physical hardware that can be manufactured at scale.
Think of the distinction this way. An Nvidia GPU is like a high-performance sports car that handles any road well. A custom TPU is more like a Formula 1 car built for one specific track. It cannot do everything, but on its intended workload, it can be meaningfully faster and more power-efficient. For companies processing trillions of tokens daily across millions of users, that efficiency translates directly into lower operating costs and faster model training.
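The "matrix math" that dominates these workloads is mostly large dense multiplications, and the arithmetic volume adds up quickly. As an illustrative sketch (the batch and layer dimensions below are hypothetical, chosen only to show the scale):

```python
# Illustrative only: why matrix multiplication dominates AI workloads.
# FLOPs for one dense layer's forward pass: 2 * batch * d_in * d_out
# (one multiply and one add per weight, per input row).
batch, d_in, d_out = 1024, 4096, 4096  # assumed example dimensions

flops = 2 * batch * d_in * d_out
print(f"{flops / 1e9:.0f} GFLOPs for a single layer")  # ~34 GFLOPs
```

Multiply that by hundreds of layers and trillions of tokens, and a chip that executes exactly this operation pattern more efficiently than a general-purpose GPU pays for its inflexibility many times over.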
Google is not the only hyperscaler pursuing this approach. Amazon has its Trainium and Inferentia chips. Microsoft has its Maia accelerator. But the Broadcom partnership gives Google's custom silicon effort something the others lack: a proven, multi-generational manufacturing relationship with one of the most capable chip design firms in the world. The 2031 timeline guarantees continuity across what will likely be three or four generations of TPU hardware.
| Company | Custom AI Chip | Manufacturing Partner | Status |
|---|---|---|---|
| Google | TPU (multi-gen) | Broadcom (through 2031) | Production, scaling |
| Amazon | Trainium / Inferentia | Annapurna Labs (in-house) | Production |
| Microsoft | Maia | Undisclosed | Early deployment |
| OpenAI | Custom (with Broadcom) | Broadcom | In development |
Broadcom's Positioning: More Than One Customer
Broadcom is not betting exclusively on Google. The company is simultaneously collaborating with OpenAI on custom silicon for AI workloads, a separate engagement that positions Broadcom as a neutral design partner for multiple competing AI labs. This dual-client approach is strategically valuable because it insulates Broadcom from the risk of any single customer relationship cooling, while also giving it insight into the chip requirements of different AI architectures.
OpenAI, for its part, has diversified its own hardware strategy. The company committed to drawing on six gigawatts of AMD's GPUs, with the first gigawatt expected to arrive in the second half of 2026. That makes OpenAI simultaneously a customer of AMD for GPUs, Broadcom for custom chips, and Nvidia through its cloud provider relationships with Microsoft Azure. The era of AI labs relying on a single chip supplier is clearly ending.
For Broadcom, the financial implications are significant. The company's AI-related revenue has been growing faster than any other segment, and the Anthropic-Google deal locks in a multi-year revenue stream that analysts at Mizuho project could reach $42 billion annually by 2027. Broadcom's stock has already reflected some of this optimism, but the 2031 timeline provides the kind of long-term visibility that institutional investors tend to reward.
What This Means for the AI Infrastructure Buildout
The deal arrives at a moment when questions about the sustainability of AI infrastructure spending have moved from analyst notes to front-page stories. Nvidia's massive order backlog, the surge in data center construction, and the strain on electrical grid capacity have all raised concerns about whether the industry is building too much, too fast.
The Broadcom-Google-Anthropic agreement suggests that at least some of the largest players believe demand will justify the investment. Anthropic's revenue trajectory (from $9 billion to $30 billion in four months) provides concrete evidence that enterprise AI adoption is not slowing down. If anything, the constraint is on the supply side: there are not enough chips, not enough data centers, and not enough electrical capacity to meet current demand, let alone projected demand for 2027 and beyond.
The geographic dimension is also worth noting. Anthropic emphasized that most of the new infrastructure will be located in the United States. That aligns with broader policy trends around AI sovereignty and data localization, and it positions the partnership favorably in a regulatory environment where scrutiny of chip supply chains is intensifying on both sides of the Atlantic.
The AI chip supply chain is becoming more distributed, with custom silicon from Broadcom, general-purpose GPUs from Nvidia and AMD, and in-house designs from Amazon and Microsoft all competing for data center floor space. The companies that secure reliable, long-term supply agreements now will have a structural advantage when the next generation of AI models requires even more compute than the current generation.
What Comes Next for Broadcom, Google, and Anthropic
The immediate next milestone is execution. Broadcom needs to deliver the next generation of TPU hardware on schedule, Google needs to build out the data center capacity to house it, and Anthropic needs to continue converting its explosive user growth into sustainable enterprise revenue. The 2031 timeline gives all three parties room to iterate, but it also creates accountability: if demand projections prove overly optimistic, the commitments become liabilities rather than assets.
For the broader industry, the deal signals that the custom silicon trend is accelerating, not receding. Nvidia remains the dominant force in AI hardware by a wide margin, but the growing investment in alternatives suggests that the largest AI companies are actively working to reduce their dependence on any single supplier. Whether that diversification ultimately proves to be a hedge against supply risk or a competitive differentiator will depend on how effectively custom chips like Google's TPUs can close the performance gap with Nvidia's latest GPUs.
The question worth watching is not whether Broadcom can build the chips. It clearly can, and has been doing so for Google for years. The question is whether the demand that justifies 3.5 gigawatts of dedicated AI compute (underpinned by roughly $30 billion in annualized revenue at Anthropic alone) continues to compound at its current rate. If it does, this deal will look like a bargain. If it does not, it will be one of the most expensive infrastructure bets in technology history.