
Google Is Quietly Building a Third Chip Empire and Marvell Just Got Drafted Into It

Google is in talks with Marvell Technology to develop two new chips aimed at running AI models more efficiently, according to two people with knowledge of the discussions. One is a memory processing unit designed to work alongside Google's Tensor Processing Unit; the other is a new TPU built specifically for running AI models. The companies aim to finalize the design of the memory processing unit as soon as next year before handing it off for test production. Neither Google nor Marvell responded to requests for comment, and the talks have not yet produced a signed contract.

The timing is notable. The talks came just days after Broadcom, Google's primary custom chip partner, announced a long-term agreement to design and supply TPUs and networking components through 2031. That might sound like a contradiction. It isn't. Google is not replacing Broadcom but adding a third design partner to a supply chain that already includes Broadcom for high-performance chip variants, MediaTek for cost-optimized variants at 20 to 30% lower cost, and TSMC for fabrication. The strategy is diversification, not substitution.

What Google is building, in other words, is a custom silicon ecosystem with multiple specialist contributors rather than a single dominant supplier. Marvell, if the deal closes, would take on a design-services role similar to MediaTek's involvement on Google's latest Ironwood TPU. That is a meaningful distinction: it positions Marvell as a design collaborator rather than a commodity manufacturer, which is where the high-margin, long-tenure relationships in the chip industry tend to live.

The rationale for the new chips comes down to one word: inference. Google's seventh-generation TPU, Ironwood, debuted this month as what the company calls "the first Google TPU for the age of inference." It delivers ten times the peak performance of the TPU v5p and scales to 9,216 liquid-cooled chips in a superpod spanning roughly 10 megawatts, producing 42.5 FP8 exaflops. Training AI models is still expensive, but inference, the process of actually serving responses to users at scale, has become the dominant and fastest-growing cost center for every major AI platform. Google is engineering its entire chip roadmap around that shift.
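For a sense of scale, the superpod figures above can be turned into rough per-chip numbers. This is a minimal back-of-envelope sketch using only the quantities quoted in this article; the derived per-chip values are approximations computed here, not specifications published by Google.

```python
# Back-of-envelope math on the Ironwood superpod figures quoted above.
# Inputs are the article's numbers; per-chip values are derived estimates.

chips = 9_216            # liquid-cooled TPUs in one superpod
pod_exaflops_fp8 = 42.5  # peak FP8 performance of the full superpod
pod_megawatts = 10.0     # approximate power envelope of the superpod

# Convert exaflops -> petaflops and megawatts -> kilowatts, then divide.
per_chip_pflops = pod_exaflops_fp8 * 1_000 / chips
per_chip_kw = pod_megawatts * 1_000 / chips

print(f"~{per_chip_pflops:.1f} PFLOPS and ~{per_chip_kw:.2f} kW per chip")
```

The division works out to roughly 4.6 FP8 petaflops and just over a kilowatt of power per chip, which illustrates why liquid cooling and power delivery, not just compute, dominate the design of a 10-megawatt pod.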

Marvell is well positioned to help. The company's data center revenue reached a record $6.1 billion in its fiscal year ending February 2026, with total revenue of $8.2 billion, up 42% year over year. It runs a custom silicon business with a $1.5 billion annual run rate across 18 cloud-provider design wins, building chips for Amazon's Trainium processors, Microsoft's Maia AI accelerator, and Meta's new data processing unit, in addition to existing work with Google on the Axion Arm CPU. If the Google talks convert into a signed deal, Marvell will be embedded in the silicon supply chains of all four of the world's largest cloud companies simultaneously, a position with enormous strategic leverage.

Marvell's recent moves suggest it has been preparing for exactly this moment. In December 2025, Marvell acquired Celestial AI for up to $5.5 billion, gaining photonic interconnect technology that CEO Matt Murphy said would deliver "the industry's most complete connectivity platform for AI and cloud customers." And at the end of March, Nvidia invested $2 billion in Marvell, partnering through NVLink Fusion to integrate Marvell's custom chips and networking with Nvidia's interconnect fabric and placing the company at the intersection of the GPU and ASIC ecosystems. That is a remarkable position to occupy: Marvell is now a strategic partner to Nvidia while also helping Nvidia's biggest customers build the custom chips designed to reduce their dependence on Nvidia GPUs. The chip industry runs on that kind of elegant tension.

Alphabet projects capital spending could climb as high as $185 billion in 2026 as it ramps up investment in servers, data centers, and networking to fuel its AI efforts. Google Cloud's revenue in the December quarter jumped 48% to $17.7 billion. Custom silicon is the lever that lets Google extract more performance per dollar from that massive infrastructure spend, and TPU sales have become a visible proof point for investors trying to assess whether Google's AI investment is actually producing returns. Marvell's CEO Matt Murphy pointed to bookings "continuing to grow at a record pace" following a string of design wins that reached a new peak in fiscal 2026, and in March the company guided toward almost $15 billion in fiscal 2028 revenue.

The broader market context matters here. The custom ASIC market is projected to grow 45% in 2026, compared with just 16% growth in GPU shipments, and is on track to reach $118 billion by 2033. That gap in growth rates is the central pressure point driving every hyperscaler's chip strategy right now. GPU clusters are flexible and available; custom ASICs are purpose-built and efficient. At the scale at which Google, Amazon, Meta, and Microsoft operate, the efficiency gap translates directly into billions of dollars in infrastructure cost. Broadcom, which commands more than 70% market share in custom AI accelerators, saw AI revenue hit $8.4 billion in its most recent quarter, up 106% year over year, with guidance of $10.7 billion for the following quarter and a stated target of $100 billion in AI chip revenue by 2027.

Marvell's share price jumped nearly 5% on the news, a signal of how the market values the Google relationship if it closes. Whether the two companies get to a signed contract is still an open question. What isn't open to question is the direction of travel: the custom chip arms race is accelerating, the inference inflection is here, and every major hyperscaler is now in the business of designing its own silicon alongside, and increasingly in competition with, the GPU suppliers that got them this far.

Google's message to the market is increasingly clear: it doesn't want to depend on any single supplier for the hardware that runs its AI future. Marvell, if it can close this deal, becomes one of the most important companies in that strategy, and the chip industry's quiet kingmaker just got a lot more attention.
