
(Fortune China)
Translated by: Liu Jinlong (劉進龍)
Reviewed by: Wang Hao (汪皓)
On Dec. 9, 2025, U.S. President Donald Trump announced that the U.S. would allow Nvidia’s H200 processors to be exported to China, subject to a 25% fee on all sales. The move has sent ripples through the American political establishment, with many (including Senator Elizabeth Warren) charging that Trump is “selling out” national security.
There is no shortage of such zero-sum, competitive framing in the global AI space. While Anthropic has emphasized AI safety at home, the company’s co-founder and CEO, Dario Amodei, has stoked an arms-race narrative abroad, arguing that export controls are essential to slow China’s development and ensure that the U.S. wins the AI race. Similarly, Chip War author Chris Miller argues that U.S. chip export controls, such as the prohibition on selling China the most advanced GPUs like Nvidia’s H100, have “succeeded … [by] significantly slow[ing] the growth of China’s chipmaking capability.” Trump himself declared in July that America started the AI race and will win it.
Such arguments suggest that the two great powers are locked in a two-player race in which one will win and the other will lose, with the winner reaping significant benefits at the loser’s expense. Yet from a rational-choice perspective, the “AI race” is a misnomer. A two-party race typically involves a rivalrous resource (one that cannot be enjoyed by both parties) that is also non-excludable (neither player can easily prevent the other from using it), and the players compete over who reaches that resource first.
In the 1955 film Rebel Without a Cause, Jim Stark (James Dean) races toward a cliff against his nemesis Buzz (Corey Allen). If both teenagers drive straight, they both die; the one who swerves first loses. If one driver swerves and the other keeps racing toward the cliff’s edge, neither can improve his position by unilaterally changing strategy: game theorists call such a profile a Nash equilibrium, and this contest of nerves is the classic game of Chicken. The outcome is non-cooperative: if one swerves, the other should race; and if one switches to racing, the other should swerve.
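This anti-coordination structure can be checked mechanically. Below is a minimal sketch, with illustrative payoff numbers of our own choosing (not from the film or the essay: crashing is worst, winning the nerve contest is best), that enumerates the pure-strategy Nash equilibria of the chickie-run game:

```python
# Pure-strategy Nash equilibria of a 2x2 game of Chicken.
from itertools import product

SWERVE, STRAIGHT = "swerve", "straight"

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    (SWERVE, SWERVE):     (0, 0),      # both back down: a draw
    (SWERVE, STRAIGHT):   (-1, 1),     # the driver who swerves first loses
    (STRAIGHT, SWERVE):   (1, -1),
    (STRAIGHT, STRAIGHT): (-10, -10),  # both go over the cliff
}

def pure_nash_equilibria(payoffs, actions):
    """Profiles where neither player gains by deviating unilaterally."""
    equilibria = []
    for row, col in product(actions, repeat=2):
        row_best = all(payoffs[(row, col)][0] >= payoffs[(alt, col)][0]
                       for alt in actions)
        col_best = all(payoffs[(row, col)][1] >= payoffs[(row, alt)][1]
                       for alt in actions)
        if row_best and col_best:
            equilibria.append((row, col))
    return equilibria

print(pure_nash_equilibria(payoffs, (SWERVE, STRAIGHT)))
# The two pure equilibria are the mismatched profiles:
# one driver swerves while the other races on.
```

The stable outcomes are exactly the mismatched ones, which is why a race of this kind rewards brinkmanship rather than cooperation.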
The geopolitical AI ecosystem is not like this. The use of AI models is excludable (last year, Sam Altman decided to cut off Chinese users’ access to OpenAI’s GPT models), but such use is not strictly rivalrous: DeepSeek’s models, for example, are released under open-source licenses and can be run locally by anyone. A model’s deployments are arguably rivalrous, in that each marginal user imposes an energy and data cost, but that was not what motivated Altman’s decision: he excluded Chinese users because he believed the U.S. should not cooperate with China.
So perhaps the argument is that selling chips to China would strengthen Beijing and leave the U.S. worse off. Yet this ignores the benefits accruing to ordinary U.S. middle-class households through greater access to leading electronics at lower prices, as well as the leverage afforded by global dependence on the American tech ecosystem.
Some economists call a situation characterized by non-rivalrous but excludable resources, rather than rivalrous but non-excludable ones, a “stag hunt,” drawing on a parable in philosopher Jean-Jacques Rousseau’s A Discourse on Inequality. Consider a group of hunters who can choose to hunt large prey together (the stag) or small prey alone (the rabbit). The trick is that they can catch the stag only if they cooperate, while anyone can catch a rabbit alone. This game has two Nash equilibria: either we work together to hunt the stag, or we each work alone to catch a single rabbit. Yet one equilibrium is better for everyone than the other: we should work together to hunt the stag.
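The same equilibrium check, applied to a stag-hunt matrix (again with illustrative payoff numbers of our own choosing), finds two stable outcomes and shows that one payoff-dominates the other:

```python
# Rousseau's stag hunt as a 2x2 payoff matrix. The numbers are
# illustrative assumptions: a shared stag beats a solo rabbit, and a
# lone stag hunter goes home empty-handed.
from itertools import product

STAG, RABBIT = "stag", "rabbit"

payoffs = {
    (STAG, STAG):     (4, 4),  # cooperation brings down the stag
    (STAG, RABBIT):   (0, 2),  # hunting the stag alone yields nothing
    (RABBIT, STAG):   (2, 0),
    (RABBIT, RABBIT): (2, 2),  # anyone can catch a rabbit alone
}

def pure_nash_equilibria(payoffs, actions):
    """Profiles where neither hunter gains by deviating unilaterally."""
    return [
        (row, col)
        for row, col in product(actions, repeat=2)
        if all(payoffs[(row, col)][0] >= payoffs[(alt, col)][0] for alt in actions)
        and all(payoffs[(row, col)][1] >= payoffs[(row, alt)][1] for alt in actions)
    ]

equilibria = pure_nash_equilibria(payoffs, (STAG, RABBIT))
print(equilibria)  # both (stag, stag) and (rabbit, rabbit) are stable
best = max(equilibria, key=lambda profile: payoffs[profile])
print(best)        # (stag, stag): the payoff-dominant equilibrium
```

Unlike Chicken, the stable outcomes here are the matched ones, and the cooperative equilibrium is strictly better for both players.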
Global AI competition looks more like a stag hunt than a race. Whether in policy, governance, or trade, cooperation between countries can yield greater benefits than going it alone. A breakdown in communication, by contrast, breeds mistrust, which can give rise to harmful mistakes: an escalatory spiral driven by overestimating the threat posed by the other side, say, or a reckless deployment of AI in conflict. The “stag” in the U.S.-China AI game therefore lies partly in the mutual prevention of such mistakes, and partly in the gains from mutually advantageous commercial development of AI for the benefit of the wider public.
China, the U.S., and the world face plenty of common challenges, from AI-enabled manipulation, deception, and coercion to the displacement of labor brought about by AI’s adoption in the workforce. Mutually beneficial cooperation requires trust, transparency, and coordination, as opposed to erratic politicization. That is how we move from hunting the rabbit to hunting the stag.
To get there, policymakers must cultivate effective multilateral AI governance institutions, including by establishing and monitoring dispute-resolution mechanisms. Bargaining capital can also arise through unconventional alignments of medium-sized powers, each with its own distinctive niche.
For instance, energy-rich Saudi Arabia is striving to become the world’s third-largest AI market, while leading players in France and Israel are pledging to dominate specialized AI applications. With its immense population and growing emphasis on education, India is shaping up to be among the primary suppliers of engineering and computer-science talent.
The international order is becoming more multipolar, and the AI world is no exception. Instead of trying to “win the AI race” against its rival at any cost, the U.S. and China should each build bridges and seek common ground with friends and rivals alike.
Boris Babic is an associate professor of data science, philosophy, and law at the University of Hong Kong. Brian Wong is an assistant professor of philosophy and a fellow at the Centre on Contemporary China and the World at the University of Hong Kong.
This essay is adapted from the authors’ forthcoming book, Geopolitics of Artificial Intelligence, to be published in 2026 by Cambridge University Press as part of its Elements series.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.