
Two months after Nvidia and OpenAI unveiled their eye-popping plan to deploy at least 10 gigawatts of Nvidia systems—and up to $100 billion in investments—the chipmaker now admits the deal isn’t actually final.
Speaking Tuesday at the UBS Global Technology and AI Conference in Scottsdale, Nvidia EVP and CFO Colette Kress told investors that the much-hyped OpenAI partnership is still at the letter-of-intent stage.
“We still haven’t completed a definitive agreement,” Kress said when asked how much of the 10-gigawatt commitment is actually locked in.
That’s a striking clarification for a deal that Nvidia CEO Jensen Huang once called “the biggest AI infrastructure project in history.” Analysts had estimated that the deal could generate as much as $500 billion in revenue for the AI chipmaker.
When the companies announced the partnership in September, they outlined a plan to deploy millions of Nvidia GPUs over several years, backed by up to 10 gigawatts of data center capacity. Nvidia pledged to invest up to $100 billion in OpenAI as each tranche comes online. The news helped fuel an AI-infrastructure rally, sending Nvidia shares up 4% and reinforcing the narrative that the two companies are joined at the hip.
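The announcement's two headline figures — "millions of GPUs" and "10 gigawatts of data center capacity" — are consistent with each other under rough assumptions. A minimal sanity-check sketch, where the per-GPU power draw (spanning the accelerator plus its share of host CPUs, networking, and cooling) is an illustrative assumption, not a figure from the article or from Nvidia:

```python
# Back-of-envelope check: how 10 GW of capacity maps to "millions of GPUs".
SITE_POWER_W = 10e9  # 10 gigawatts, as stated in the September announcement

# Assumed all-in power per deployed GPU (hypothetical range, ~1.5-2.5 kW
# each, including host, networking, and cooling overhead).
for per_gpu_w in (1500, 2000, 2500):
    gpus_millions = SITE_POWER_W / per_gpu_w / 1e6
    print(f"at {per_gpu_w} W per GPU: ~{gpus_millions:.1f} million GPUs")
```

Even at the high end of the assumed power range, 10 GW implies several million accelerators, matching the scale the companies described.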
Kress’s comments suggest something more tentative, even months after the framework was released.
A megadeal that isn’t in the numbers—yet
It’s unclear why the deal hasn’t been executed, but Nvidia’s latest 10-Q offers clues. The filing states plainly that “there is no assurance that any investment will be completed on expected terms, if at all,” referring not only to the OpenAI arrangement but also to Nvidia’s planned $10 billion investment in Anthropic and its $5 billion commitment to Intel.
In a lengthy “Risk Factors” section, Nvidia spells out the fragile architecture underpinning megadeals like this one. The company stresses that the story is only as real as the world’s ability to build and power the data centers required to run its systems. Nvidia must order GPUs, HBM memory, networking gear, and other components more than a year in advance, often via non-cancelable, prepaid contracts. If customers scale back, delay financing, or change direction, Nvidia warns it may end up with “excess inventory,” “cancellation penalties,” or “inventory provisions or impairments.” Past mismatches between supply and demand have “significantly harmed our financial results,” the filing notes.
The biggest swing factor seems to be the physical world: Nvidia says the availability of “data center capacity, energy, and capital” is critical for customers to deploy the AI systems they’ve verbally committed to. Power build-out is described as a “multiyear process” that faces “regulatory, technical, and construction challenges.” If customers can’t secure enough electricity or financing, Nvidia warns, it could “delay customer deployments or reduce the scale” of AI adoption.
Nvidia also admits that its own pace of innovation makes planning harder. It has moved to an annual cadence of new architectures—Hopper, Blackwell, Vera Rubin—while still supporting prior generations. It notes that a faster architecture pace “may magnify the challenges” of predicting demand and can lead to “reduced demand for current generation” products.
These admissions nod to the warnings of AI bears like Michael Burry, the investor of "The Big Short" fame, who has alleged that Nvidia and other chipmakers are overstating the useful lives of their chips and that the chips' eventual depreciation will cause breakdowns in the investment cycle. However, Huang has said that chips Nvidia sold six years ago are still running at full pace.
The company also nodded explicitly to past boom-bust cycles tied to “trendy” use cases like crypto mining, warning that new AI workloads could create similar spikes and crashes that are hard to forecast and can flood the gray market with secondhand GPUs.
Despite the lack of a deal, Kress stressed that Nvidia’s relationship with OpenAI remains “a very strong partnership,” more than a decade old. OpenAI, she said, considers Nvidia its “preferred partner” for compute. But she added that Nvidia’s current sales outlook does not rely on the new megadeal.
The roughly $500 billion of Blackwell and Vera Rubin system demand Nvidia has guided for 2025–26 “doesn’t include any of the work we’re doing right now on the next part of the agreement with OpenAI,” she said. For now, OpenAI’s purchases flow indirectly through cloud partners like Microsoft and Oracle rather than through the new direct arrangement laid out in the letter of intent.
OpenAI “does want to go direct,” Kress said. “But again, we’re still working on a definitive agreement.”
Nvidia insists the moat is intact
On competitive dynamics, Kress was unequivocal. Markets lately have been cheering Google's TPU—which addresses a narrower range of use cases than the GPU but requires less power—as a potential competitor to Nvidia's GPU. Asked whether those types of chips, called ASICs, are narrowing Nvidia's lead, she responded: "Absolutely not."
“Our focus right now is helping all different model builders, but also helping so many enterprises with a full stack,” she said. Nvidia’s defensive moat, she argued, isn’t any individual chip but the entire platform: hardware, CUDA, and a constantly expanding library of industry-specific software. That stack, she said, is why older architectures remain heavily used even as Blackwell becomes the new standard.
“Everybody is on our platform,” Kress said. “All models are on our platform, both in the cloud as well as on-prem.”