OpenAI is officially going in-house.
After striking multi-billion-dollar supply deals with Nvidia and AMD earlier this year, the company announced a partnership with Broadcom to co-design and deploy custom AI accelerators — chips engineered specifically for OpenAI’s next wave of frontier models.
The rollout begins in the second half of 2026, with deployments expected to consume 10 gigawatts of electricity, enough to power millions of homes.
The move marks a turning point in OpenAI’s hardware strategy, signaling its intent to own more of its compute stack rather than depend exclusively on external GPU vendors.
“Developing our own accelerators adds to the broader ecosystem of partners building the capacity required to push the frontier of AI,”
said OpenAI CEO Sam Altman in a statement.
Why This Matters
By building custom silicon, OpenAI gains tighter control over performance, cost, and supply — all critical as it races to scale infrastructure for GPT-class models.
The company has already said it will deploy enough Nvidia and AMD chips to draw 16 GW of power. Broadcom’s chips add another 10 GW, bringing OpenAI’s planned compute footprint to roughly 26 GW — a scale no single company has attempted before.
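The capacity figures above lend themselves to a quick back-of-envelope check. The sketch below verifies the combined total and the “millions of homes” equivalence; the 1.2 kW average continuous household draw is an assumed figure for illustration, not one from the announcement.

```python
# Back-of-envelope check of the capacity figures cited in this article.
nvidia_amd_gw = 16    # previously announced Nvidia + AMD deployments
broadcom_gw = 10      # new Broadcom-designed accelerators

total_gw = nvidia_amd_gw + broadcom_gw
print(f"Total planned capacity: {total_gw} GW")

# Rough "millions of homes" equivalence. The 1.2 kW average continuous
# household draw is an assumption, not a figure from the announcement.
avg_home_kw = 1.2
homes = broadcom_gw * 1e6 / avg_home_kw   # 1 GW = 1,000,000 kW
print(f"Broadcom tranche alone: ~{homes / 1e6:.1f} million homes")
```

At that assumed draw, the 10 GW Broadcom tranche alone works out to roughly eight million homes, consistent with the article’s “millions of homes” framing.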
According to people familiar with the project, the goal is not to sell chips commercially, but to embed OpenAI’s model-specific optimizations directly into hardware, accelerating training efficiency and inference speed.
Inside OpenAI’s Bold Hardware Gamble
The partnership underscores a trend sweeping Silicon Valley: the vertical integration of AI compute.
OpenAI joins Google, Amazon, and Meta in pursuing proprietary hardware after years of depending on Nvidia’s H100 and B200 GPUs. Nvidia still dominates the market — controlling over 80% of AI-training hardware — but even its largest customers are now seeking independence.
Industry sources say OpenAI quietly began recruiting chip engineers from Apple, AMD, and Graphcore nearly two years ago to prototype its internal accelerators. Broadcom, known for its design work on Google’s Tensor Processing Units, became the natural partner.
“This isn’t just about performance — it’s about control,”
said Patrick Moorhead, analyst at Moor Insights & Strategy.
“If OpenAI owns its hardware stack, it dictates its destiny, not Nvidia’s supply chain.”
The $325 Billion AI Infrastructure Arms Race
OpenAI’s move comes amid an unprecedented spending surge. Together, OpenAI, Microsoft, Google, Amazon, and Meta are projected to pour more than $325 billion into AI-focused data centers by the end of 2025, according to Bernstein Research.
OpenAI alone is building massive facilities in Abilene, Texas, and has acquired land in New Mexico, Ohio, and elsewhere in the Midwest for additional campuses. Each will house racks of the new Broadcom-engineered accelerators once deployment begins in 2026.
What Makes the Broadcom Deal Different
Unlike OpenAI’s arrangements with Nvidia and AMD — which involved stock or investment components — the Broadcom partnership is strictly a co-design and manufacturing alliance.
- Nvidia committed up to $100 billion in infrastructure investments tied to OpenAI’s projects.
- AMD offered OpenAI a warrant to purchase 160 million shares, roughly 10% of its equity.
- Broadcom, however, takes no equity stake; it co-develops the chips and shepherds them through production, leaving OpenAI in full control of the intellectual property.
This gives OpenAI long-term leverage: the ability to negotiate supply on its own terms while avoiding dependency on a single vendor.
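The warrant terms in the list above can also be sanity-checked with simple arithmetic. The share count and stake percentage come from the article; the implied total is derived, and happens to line up with AMD’s publicly reported shares outstanding.

```python
# Sanity check on the AMD warrant terms listed above: a warrant for
# 160 million shares described as roughly 10% of AMD's equity implies
# about 1.6 billion shares outstanding, in line with AMD's filings.
warrant_shares = 160_000_000
stake_fraction = 0.10

implied_outstanding = warrant_shares / stake_fraction
print(f"Implied AMD shares outstanding: {implied_outstanding / 1e9:.1f} billion")
```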
Industry Response
The reaction was swift. Broadcom’s shares climbed nearly 8% after the announcement, while Nvidia’s briefly dipped before stabilizing. Analysts called the deal a “strategic hedge” — not a total break from Nvidia, but a signal that customers with massive AI workloads want optionality.
AMD, in a statement to Reuters, said it continues to supply OpenAI with advanced GPUs and is “collaborating closely on next-generation AI infrastructure.”
Meanwhile, Google, which already co-designs its TPUs with Broadcom, declined to comment, though insiders note the partnership mirrors its own approach: internal designs, external manufacturing.
The $10 Billion Question
The announcement also reignited speculation around Broadcom’s mysterious $10 billion customer, mentioned during its Q3 earnings call.
For weeks, analysts assumed that unnamed buyer was OpenAI. But Charlie Kawwas, Broadcom’s semiconductor group president, denied the link during a joint CNBC appearance with OpenAI’s Greg Brockman.
“I’d love to take a $10 billion purchase order from Greg,” Kawwas joked. “He hasn’t given me that PO yet.”
Still, analysts suggest OpenAI’s long-term compute demand could easily exceed that number once the Broadcom rollout matures through 2029.
Technical Hurdles and Risks Ahead
Even with Broadcom’s expertise, deploying 10 GW of cutting-edge accelerators is a monumental engineering challenge.
OpenAI will still rely on TSMC and other foundries for fabrication — a supply chain already stretched by global demand for AI semiconductors.
There’s also integration risk: ensuring the new chips interface seamlessly with existing Nvidia-based infrastructure, memory systems, and networking hardware.
A misstep could slow model training or inflate costs — precisely what OpenAI is trying to avoid.
What Comes Next
The companies plan to begin pilot production in late 2026, scaling deployments through 2029.
Broadcom engineers will work on-site with OpenAI’s infrastructure teams to iterate on each chip generation. The design will likely evolve in tandem with OpenAI’s forthcoming GPT-6 and GPT-7 models, embedding model-level learnings directly into silicon.
“By building our own chip, we can embed what we’ve learned from creating frontier models directly into the hardware,”
said Greg Brockman, OpenAI’s president,
“unlocking new levels of capability and intelligence.”
The Bigger Picture
OpenAI’s custom chip push is part of a wider geopolitical and economic battle for AI compute — one that links Washington’s semiconductor strategy to the future of global innovation.
The U.S. government has encouraged domestic chip partnerships to secure supply chains against overseas disruptions. Analysts see OpenAI’s Broadcom tie-up as aligned with that push: keeping chip IP and assembly largely within North American control.
If successful, OpenAI could emerge not only as a software powerhouse but also as a foundational hardware player, reshaping how AI infrastructure is built and financed.
Conclusion
OpenAI’s alliance with Broadcom represents a pivotal shift from software-first to full-stack ownership.
If the 10 GW rollout succeeds, it could permanently loosen Nvidia’s grip on the AI hardware market — and mark the beginning of an era where AI companies don’t just train models; they manufacture their own intelligence.