
在當(dāng)下這場席卷全球的AI熱潮中,鮮有故事比利奧波德·阿申布倫納的經(jīng)歷更引人注目。
這位23歲年輕人的職業(yè)生涯開局并不順利:他曾在薩姆·班克曼-弗里德現(xiàn)已破產(chǎn)的FTX加密貨幣交易所的慈善部門工作,之后在AI領(lǐng)域最具影響力的公司之一OpenAI度過了頗具爭議的一年,并最終被解雇。然而,在被該公司辭退僅兩個(gè)月后,阿申布倫納撰寫了一份AI宣言,引發(fā)網(wǎng)絡(luò)熱議,美國總統(tǒng)唐納德·特朗普的女兒伊萬卡甚至都在社交媒體上為其點(diǎn)贊。隨后,他以這份宣言作為跳板,創(chuàng)立了一家對沖基金,如今該基金的資產(chǎn)管理規(guī)模超過15億美元。按照對沖基金的標(biāo)準(zhǔn),這家基金規(guī)模一般,但對于一個(gè)剛剛大學(xué)畢業(yè)的年輕人而言卻堪稱傳奇。從哥倫比亞大學(xué)(Columbia)畢業(yè)僅四年,阿申布倫納就已經(jīng)能與科技公司CEO、投資者和政策制定者私下里侃侃而談,被他們視為AI時(shí)代的“先知”。
阿申布倫納的崛起令人瞠目,這也讓許多人不僅好奇這位出生于德國的年輕AI研究員的成功秘訣,更質(zhì)疑圍繞他的熱度是否名副其實(shí)。在一些人眼中,阿申布倫納堪稱罕見的天才。他比任何人都更清晰地洞察到時(shí)代的關(guān)鍵節(jié)點(diǎn):類人通用人工智能(AGI)的到來、中國在AI競賽中的加速崛起,以及先行者將收獲巨額財(cái)富等。但在另一些人看來,包括幾位OpenAI前同事,他不過是個(gè)幸運(yùn)的新手,沒有任何金融從業(yè)記錄,只是將一場AI的狂潮重新包裝成一份對沖基金的募資說辭。
阿申布倫納的迅速崛起,展現(xiàn)了硅谷如何將時(shí)代精神轉(zhuǎn)化為資本,以及資本又如何進(jìn)一步轉(zhuǎn)化為影響力。批評者質(zhì)疑,他創(chuàng)立對沖基金只是為了將那些真?zhèn)坞y辨的科技預(yù)言變現(xiàn)獲利;但他的朋友們則有著不同的解讀。如Anthropic研究員肖爾托·道格拉斯將此視為一種“變革理論”。道格拉斯解釋說,阿申布倫納正在利用這家對沖基金在金融體系中樹立起可信的發(fā)言地位:“他的意思是——‘我對世界未來的發(fā)展方向有著極強(qiáng)的信念,而我正在以真金白銀來驗(yàn)證這一信念。’”
但這也引出了一個(gè)耐人尋味的問題:為何如此多人愿意信任這位初出茅廬的新秀?
答案并不簡單。根據(jù)《財(cái)富》記者對十余位阿申布倫納的朋友、前同事、熟人,以及多位投資人和硅谷業(yè)內(nèi)人士的采訪,一個(gè)主題反復(fù)出現(xiàn):阿申布倫納善于捕捉那些在硅谷實(shí)驗(yàn)室中逐漸積聚勢能的理念,并將它們整合成一套連貫而令人信服的敘事體系。對那些風(fēng)險(xiǎn)偏好旺盛的投資者而言,這套敘事就像一份精心搭配的“招牌特餐”,讓人難以拒絕。
阿申布倫納拒絕就本文置評。多位消息人士因擔(dān)憂談?wù)撛贏I圈內(nèi)具有巨大權(quán)力與影響力的人物可能帶來風(fēng)險(xiǎn),要求匿名接受采訪。
許多人在談及阿申布倫納時(shí),語氣中既帶著欽佩,也流露出謹(jǐn)慎。“極度專注”、“極其聰明”、“莽撞”、“自信”是他們常用的形容詞。不止一人形容他身上帶有“神童氣質(zhì)”,正是硅谷歷來樂于加冕的那類人物。也有人指出,他的思想并非特別新穎,只是包裝巧妙、時(shí)機(jī)得當(dāng)。而在批評者看來,他更像是一個(gè)制造熱度的高手,而非洞察未來的思想家。但《財(cái)富》雜志采訪的多位投資人卻持不同意見,他們認(rèn)為,阿申布倫納的文章與早期投資布局展現(xiàn)出非凡的遠(yuǎn)見卓識。
然而,可以肯定的是,阿申布倫納的崛起并非偶然,而是多重力量交匯的產(chǎn)物:全球資本正競相涌入AI賽道;硅谷則癡迷于實(shí)現(xiàn)“通用人工智能”,即能與人類智慧相媲美、甚至超越人類的AI;與此同時(shí),地緣政治格局也將AI的發(fā)展描繪成一場與中國之間的科技軍備競賽。

勾勒未來
在AI領(lǐng)域的某些小圈子里,利奧波德·阿申布倫納早已小有名氣。早在加入OpenAI之前,他就撰寫過多篇在AI安全領(lǐng)域內(nèi)部流傳的博客、論文和研究文章。但對大多數(shù)人而言,他幾乎是在一夜之間嶄露頭角。那是在2024年6月,他在網(wǎng)上自行發(fā)布了一篇長達(dá)165頁的專著——《態(tài)勢感知:未來十年》(Situational Awareness: The Decade Ahead)。這篇長文的標(biāo)題借用了AI圈內(nèi)早已熟知的一個(gè)術(shù)語——“態(tài)勢感知”,它通常指AI模型開始意識到自身處境,這被視為一種安全風(fēng)險(xiǎn)。而阿申布倫納卻賦予了這一概念全新的含義:他主張政府與投資者必須正視AGI即將到來的速度,以及一旦美國落后所面臨的巨大風(fēng)險(xiǎn)。
從某種意義上說,阿申布倫納希望這份宣言能成為AI時(shí)代的“長電報(bào)”,就像當(dāng)年美國外交官、蘇聯(lián)問題專家喬治·凱南在那封著名電報(bào)中所做的那樣,喚醒美國精英階層對歐洲面臨蘇聯(lián)威脅的警覺。在這篇宣言的引言中,阿申布倫納描繪了一個(gè)他聲稱只有少數(shù)幾百名“有先見之明”的人才能看見的未來,“其中大多數(shù)人在舊金山及各大AI實(shí)驗(yàn)室”。毫不意外,他也將自己歸入擁有“態(tài)勢感知”的那類人;而世界其他地方的人們“對即將到來的沖擊毫無察覺”。在大多數(shù)人眼中,AI要么只是炒作,要么充其量是又一次互聯(lián)網(wǎng)級別的技術(shù)變革。而阿申布倫納堅(jiān)稱自己看得更為清晰:大語言模型正以指數(shù)級速度進(jìn)化,快速邁向AGI,并最終超越人類智慧,進(jìn)入“超級智能”時(shí)代——這一進(jìn)程將帶來深遠(yuǎn)的地緣政治影響,同時(shí)也為那些先行者帶來本世紀(jì)最大規(guī)模的經(jīng)濟(jì)紅利。
為了強(qiáng)調(diào)這一點(diǎn),阿申布倫納援引了2020年初新冠疫情的例子——他認(rèn)為,當(dāng)時(shí)只有極少數(shù)人真正理解到疫情指數(shù)級傳播的含義,意識到隨之而來的經(jīng)濟(jì)沖擊的規(guī)模,并在市場崩盤前通過做空獲利。他寫道:“我所能做的只是買口罩并做空市場。”同樣地,他強(qiáng)調(diào)如今也只有極少數(shù)人真正明白AGI到來的速度,而那些先行者將有機(jī)會獲得歷史性收益。而他再次將自己列入有先見之明的少數(shù)人之一。
不過,《態(tài)勢感知》的核心論點(diǎn)并非與新冠疫情的類比,而是數(shù)學(xué)本身揭示了未來的發(fā)展方向:擴(kuò)展曲線顯示,只要在相同的基礎(chǔ)算法上不斷增加數(shù)據(jù)量與算力,AI的能力就會呈指數(shù)級提升。
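為便于理解這里所說的“擴(kuò)展曲線”,下面給出一個(gè)極簡的示意。公式形式取自公開的擴(kuò)展定律研究(如Chinchilla一類的經(jīng)驗(yàn)擬合),其中的指數(shù)與常數(shù)僅為示意性占位符,并非出自阿申布倫納的原文:

\[
L(N, D) \;=\; E \;+\; \frac{A}{N^{\alpha}} \;+\; \frac{B}{D^{\beta}}, \qquad \alpha,\ \beta > 0
\]

其中\(zhòng)(N\)為模型參數(shù)量,\(D\)為訓(xùn)練數(shù)據(jù)量。若按同一配方同步擴(kuò)大\(N\)與\(D\)(即不斷追加總算力\(C\)),可約化的那部分損失大致按冪律下降,\(L(C) - E \propto C^{-\gamma}\),在對數(shù)坐標(biāo)上近似一條直線;《態(tài)勢感知》所做的,正是把這條直線向前外推。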
道格拉斯現(xiàn)任Anthropic強(qiáng)化學(xué)習(xí)擴(kuò)展項(xiàng)目的技術(shù)負(fù)責(zé)人。他是阿申布倫納的朋友兼前室友,曾多次與他討論這篇專著的內(nèi)容。他在接受《財(cái)富》雜志采訪時(shí)表示,這篇文章將許多AI研究人員長期以來的感受凝聚為清晰的論述。道格拉斯表示:“如果我們相信這條趨勢線會持續(xù)下去,那結(jié)局可能會相當(dāng)瘋狂。”與許多只關(guān)注每一代模型發(fā)布所帶來的漸進(jìn)式進(jìn)步的人不同,阿申布倫納愿意“真正押注在指數(shù)級增長上”。
一篇走紅的文章
關(guān)于AI風(fēng)險(xiǎn)與戰(zhàn)略,每年都有大量篇幅冗長、內(nèi)容晦澀的論文在圈子中流傳,但大多數(shù)論文只是在少數(shù)小眾論壇上引發(fā)短暫爭論后便銷聲匿跡,比如由AI理論家、知名“末日論者”埃利澤·尤德科夫斯基創(chuàng)辦的網(wǎng)站LessWrong,這里已成為理性主義與AI安全思想的聚集地。
但《態(tài)勢感知》帶來的沖擊卻截然不同。曾在OpenAI與阿申布倫納共事兩年的德克薩斯大學(xué)奧斯汀分校(UT Austin)計(jì)算機(jī)科學(xué)教授斯科特·亞倫森回憶起自己最初的反應(yīng)是:“天哪,又來一篇這種文章。”但讀完后,他對《財(cái)富》雜志表示:“我感覺這篇文章會讓某位將軍或國家安全官員讀到后下令:‘這件事必須立刻采取行動。’”他隨后在博客中寫道,這篇文章是“我讀過的最非凡的作品之一”。亞倫森指出,阿申布倫納提出的論點(diǎn)是:“即便經(jīng)歷了ChatGPT及其之后的一切,世界仍遠(yuǎn)未真正意識到即將到來的沖擊。”
一位長期從事AI治理研究的專家稱這篇文章是“一項(xiàng)了不起的成果”,但同時(shí)強(qiáng)調(diào)其中的觀點(diǎn)并不新鮮:“他基本上是把前沿AI實(shí)驗(yàn)室內(nèi)部早已形成的共識,重新整理成一份包裝精良、極具說服力且易于理解的作品。”其結(jié)果是,在全球AI討論達(dá)到白熱化的時(shí)刻,將這種原本只在業(yè)內(nèi)流通的思維,推向更廣泛的公眾視野。
在AI安全研究員群體中,這篇文章引發(fā)了更為激烈的分歧。對這些主要關(guān)注AI可能對人類構(gòu)成生存威脅的研究員而言,阿申布倫納的作品更像是一種“背叛”,尤其是因?yàn)樗救苏浅錾碛谶@一圈子。許多人認(rèn)為,他們原本呼吁審慎與監(jiān)管的論點(diǎn),被他改造成了一份面向投資者的推銷文案。一位前OpenAI治理研究員說道:“一些極度擔(dān)憂[生存風(fēng)險(xiǎn)]的人現(xiàn)在非常反感利奧波德,因?yàn)樗麄冇X得他出賣了理想。”也有人認(rèn)同他的大部分預(yù)測,并認(rèn)為他將這些預(yù)測傳播給更廣泛受眾是有價(jià)值的。
但即便是批評者也不得不承認(rèn),阿申布倫納在敘事包裝與傳播方面有非凡的天賦。另一位前OpenAI研究員表示:“他非常擅長把握時(shí)代脈搏,知道人們關(guān)心什么、什么內(nèi)容容易引發(fā)熱議。這就是他的超能力。他懂得如何通過塑造一種迎合當(dāng)下情緒的敘事,來吸引有權(quán)勢者的注意,比如美國必須更加重視AI安全等。即便細(xì)節(jié)未必準(zhǔn)確,時(shí)機(jī)卻拿捏得恰到好處。”
正因把握住了時(shí)機(jī),這篇文章幾乎令人無法忽視。科技創(chuàng)始人和投資者們以處理熱門投資意向書的緊迫感,爭相轉(zhuǎn)發(fā)《態(tài)勢感知》;而政策制定者與國家安全官員則像傳閱一份最刺激的NSA機(jī)密情報(bào)評估那樣,將其在內(nèi)部廣泛傳閱。
正如一位現(xiàn)任OpenAI員工所言,阿申布倫納的真正本領(lǐng)在于“他總能預(yù)判冰球?qū)⒁蚰睦铩薄?
宏大敘事與資本運(yùn)作的結(jié)合
就在文章發(fā)布的同時(shí),阿申布倫納創(chuàng)立了以“態(tài)勢感知”(Situational Awareness LP)為名的對沖基金。這家圍繞AGI主題構(gòu)建的對沖基金,主要投資上市公司而非私營初創(chuàng)企業(yè)。
該基金的初始資金來自多位硅谷重量級人物,包括投資人、現(xiàn)任Meta AI產(chǎn)品負(fù)責(zé)人納特·弗里德曼。據(jù)報(bào)道,2023年弗里德曼讀到他的一篇博客后,兩人由此建立了聯(lián)系。此外還有弗里德曼的投資合伙人丹尼爾·格羅斯以及Stripe聯(lián)合創(chuàng)始人帕特里克·科里森和約翰·科里森。據(jù)稱帕特里克·科里森早在2021年由一位關(guān)系人安排的私人晚宴上與阿申布倫納相識,晚宴的目的是讓他們交流共同感興趣的話題。阿申布倫納還邀請45歲的AI預(yù)測與治理研究員卡爾·舒爾曼擔(dān)任基金研究主管。舒爾曼在AI安全領(lǐng)域人脈深厚,曾在彼得·泰爾旗下的Clarium Capital工作。
在配合基金發(fā)布的一檔時(shí)長四小時(shí)的播客中(對談?wù)邽榈峦呖耸病づ撂貭枺⑸瓴紓惣{強(qiáng)調(diào)了他所預(yù)測的AGI到來后的爆炸式增長。他表示“之后的十年同樣會極其瘋狂”,屆時(shí)“資本將變得至關(guān)重要”。他說,如果運(yùn)作得當(dāng),“將有豐厚的回報(bào)。如果市場從明天起完全消化AGI的預(yù)期,你也許能獲得百倍回報(bào)。”
這份宣言與基金相輔相成:一邊是一部篇幅堪比專著的投資論綱,另一邊則是一位信念堅(jiān)定、愿意以真金白銀下注的預(yù)言家。事實(shí)證明,這樣的組合對某類投資者而言幾乎具有無法抗拒的吸引力。一位前OpenAI研究員指出,弗里德曼素以“把握時(shí)代脈搏”著稱,他善于支持那些能精準(zhǔn)捕捉當(dāng)下氛圍、并將其轉(zhuǎn)化為影響力的人。而支持阿申布倫納,正完美契合了這一投資邏輯。
態(tài)勢感知基金的投資策略相當(dāng)直接:押注那些有可能從AI浪潮中受益的全球股票,涵蓋半導(dǎo)體、基礎(chǔ)設(shè)施與電力公司,同時(shí)做空可能在AI時(shí)代落后的行業(yè)來對沖風(fēng)險(xiǎn)。公開文件揭示了該基金的部分持倉情況:今年6月提交給美國證券交易委員會(SEC)的文件顯示,該基金持有英特爾(Intel)、博通(Broadcom)、Vistra以及前比特幣礦企Core Scientific等美國公司的股份。值得注意的是,CoreWeave已于7月宣布將收購Core Scientific。上述企業(yè)都被視為AI基礎(chǔ)設(shè)施建設(shè)的主要受益方。到目前為止,這一投資布局已帶來回報(bào)。該基金的資產(chǎn)規(guī)模迅速擴(kuò)大至超過15億美元,并在今年上半年實(shí)現(xiàn)了47%的凈收益(扣除費(fèi)用后)。

態(tài)勢感知基金的發(fā)言人表示,投資者來自全球各地,包括美國西海岸的創(chuàng)業(yè)者、家族理財(cái)辦公室、機(jī)構(gòu)投資者及捐贈基金等。發(fā)言人同時(shí)透露,阿申布倫納“幾乎將自己的全部凈資產(chǎn)都投入了這只基金”。
當(dāng)然,任何關(guān)于美國對沖基金持倉情況的公開信息都并不完整。公開披露的13F文件僅涵蓋在美國上市股票中的多頭頭寸,而空頭、衍生品及海外投資等均無需披露,這也給該基金的真實(shí)投資動向增添了一層神秘色彩。盡管如此,一些觀察人士質(zhì)疑,阿申布倫納的早期業(yè)績究竟源于投資能力,還是純屬時(shí)機(jī)巧合。例如,該基金在第一季度的申報(bào)文件中披露持有約4.59億美元的英特爾看漲期權(quán)。今年夏天,英特爾先后獲得聯(lián)邦政府投資以及英偉達(dá)(Nvidia)50億美元入股,股價(jià)大幅上漲,這筆持倉事后看來“頗有先見之明”。
但至少部分經(jīng)驗(yàn)豐富的金融界人士對他的看法已經(jīng)有所不同。資深對沖基金投資者格雷厄姆·鄧肯以個(gè)人身份投資了態(tài)勢感知基金,現(xiàn)任該基金顧問。鄧肯表示,阿申布倫納兼具業(yè)內(nèi)視角與大膽投資策略,這給他留下了深刻印象。他說:“我覺得他的文章非常有啟發(fā)性。”他補(bǔ)充道,阿申布倫納與舒爾曼并非那種在外圍尋找機(jī)會的局外人,而是圍繞自身判斷親手打造投資工具的圈內(nèi)人。這家基金的投資邏輯讓他想起那些在次貸危機(jī)爆發(fā)前就察覺到風(fēng)險(xiǎn)的少數(shù)“逆勢投資者”,如因?yàn)楸贿~克爾·劉易斯寫進(jìn)《大空頭》(The Big Short)而名聲大噪的邁克爾·伯里。鄧肯表示:“如果你想擁有差異性認(rèn)知,保持一點(diǎn)與眾不同的心態(tài)會有所幫助。”
鄧肯舉了一個(gè)例子:今年1月,中國初創(chuàng)公司深度求索(DeepSeek)發(fā)布開源大語言模型R1,盡管中國仍面臨資金限制和出口管制等方面的制約,但這一事件被許多人稱作中國AI崛起的“斯普特尼克時(shí)刻”。鄧肯表示,當(dāng)多數(shù)投資者因這則消息陷入恐慌時(shí),阿申布倫納與舒爾曼早就開始密切關(guān)注這一動態(tài),并認(rèn)為市場拋售是過度反應(yīng)。兩人選擇逆勢買入,而非跟風(fēng)賣出。有報(bào)道稱,當(dāng)時(shí)甚至有一家大型科技基金在分析師一句“利奧波德說沒問題”的建議后,放棄了清倉決定。鄧肯表示,那一刻奠定了阿申布倫納在業(yè)內(nèi)的信譽(yù),盡管他也坦言:“他仍有可能被證明是錯(cuò)誤的。”
態(tài)勢感知基金的另一位投資者本人管理著一家領(lǐng)先的對沖基金。他對《財(cái)富》雜志表示,當(dāng)他問阿申布倫納為何創(chuàng)立專注于AI的對沖基金而非風(fēng)險(xiǎn)投資基金(這似乎是最顯而易見的選擇)時(shí),阿申布倫納的回答讓他印象深刻。
這位投資者表示:“他說,AGI將對全球經(jīng)濟(jì)產(chǎn)生深遠(yuǎn)影響,而充分利用它的唯一方式就是在全球流動性最強(qiáng)的市場中表達(dá)投資觀點(diǎn)。他們學(xué)習(xí)曲線的攀升速度之快讓我感到震驚……在公開市場中,他們對AI投資的理解遠(yuǎn)比我接觸過的任何團(tuán)隊(duì)都要成熟。”
從哥倫比亞大學(xué)“神童”到FTX與OpenAI
阿申布倫納出生于德國,父母均為醫(yī)生。他15歲進(jìn)入哥倫比亞大學(xué)就讀,19歲以最佳畢業(yè)生的身份畢業(yè)。一位長期從事AI治理研究、并自稱與阿申布倫納相識的研究員回憶說,她在阿申布倫納讀本科時(shí)第一次聽說他的名字。
她表示:“我當(dāng)時(shí)聽人提起他說:‘哦,我們聽說過這個(gè)叫利奧波德·阿申布倫納的年輕人,他好像很聰明。’感覺他就是一位神童。”
這種“天才少年”的聲譽(yù)此后愈發(fā)鞏固。17歲時(shí),阿申布倫納獲得經(jīng)濟(jì)學(xué)家泰勒·科文創(chuàng)辦的“新興創(chuàng)投”(Emergent Ventures)項(xiàng)目資助,科文稱他為“經(jīng)濟(jì)學(xué)奇才”。在哥倫比亞大學(xué)就讀期間,他還在全球優(yōu)先研究院(Global Priorities Institute)實(shí)習(xí),與經(jīng)濟(jì)學(xué)家菲利普·特拉梅爾合著論文,并為Stripe資助的出版物《Works in Progress》撰寫文章,這使他在科技與知識界進(jìn)一步建立了立足點(diǎn)。
那時(shí)的阿申布倫納,已深度融入“有效利他主義”(Effective Altruism)社群——這是一個(gè)以哲學(xué)為導(dǎo)向、在AI安全領(lǐng)域頗具影響力但也備受爭議的運(yùn)動。他還共同創(chuàng)立了哥倫比亞大學(xué)的有效利他主義分會。正是這一人脈網(wǎng)絡(luò),最終讓他進(jìn)入了FTX未來基金(FTX Future Fund)。該慈善機(jī)構(gòu)由加密貨幣交易所創(chuàng)始人薩姆·班克曼-弗里德創(chuàng)立。班克曼-弗里德同樣是有效利他主義的擁護(hù)者,他曾向包括AI治理研究在內(nèi)的多個(gè)與有效利他主義慈善目標(biāo)一致的領(lǐng)域捐贈數(shù)億美元。
FTX未來基金最初旨在支持與“有效利他主義”理念一致的慈善項(xiàng)目,但后來該基金被揭露其資金來源于班克曼-弗里德的FTX加密貨幣交易所,這些資金實(shí)質(zhì)上是挪用的用戶賬戶資金。(目前沒有證據(jù)證明在FTX未來基金工作的任何人知曉資金被盜用,或參與任何違法行為。)
在FTX未來基金期間,阿申布倫納曾與一個(gè)小團(tuán)隊(duì)共事,其中包括有效利他主義運(yùn)動的發(fā)起人之一威廉·麥克阿斯基爾以及阿維塔爾·巴爾維特。后者現(xiàn)任Anthropic首席執(zhí)行官達(dá)里奧·阿莫代伊的幕僚長。據(jù)態(tài)勢感知基金的發(fā)言人透露,巴爾維特目前已與阿申布倫納訂婚。巴爾維特在2024年6月發(fā)表的一篇文章中寫道,“未來五年可能是我工作的最后幾年”,因?yàn)锳GI可能“終結(jié)我所理解的就業(yè)形態(tài)”。這一觀點(diǎn)與阿申布倫納的信念恰成鏡像——他堅(jiān)信,同樣的技術(shù)將讓他的投資者獲得巨額財(cái)富。
然而,隨著班克曼-弗里德的FTX帝國于2022年11月轟然崩塌,F(xiàn)TX未來基金的慈善項(xiàng)目也隨之土崩瓦解。阿申布倫納在接受德瓦克什·帕特爾采訪時(shí)表示:“我們是一個(gè)很小的團(tuán)隊(duì),但從某一天起,一切都不復(fù)存在,還被卷入了一場巨大的欺詐案。那段經(jīng)歷極其艱難。”
然而,在FTX倒閉僅僅幾個(gè)月后,阿申布倫納再度回到公眾視野——這一次是在OpenAI。2023年,他加入了該公司新成立的“超級對齊”團(tuán)隊(duì),致力于研究一個(gè)迄今無人真正解決的問題:如何引導(dǎo)并控制未來那些智能水平遠(yuǎn)超人類、甚至可能超過全人類智慧總和的AI系統(tǒng)。現(xiàn)有方法如“基于人類反饋的強(qiáng)化學(xué)習(xí)”(RLHF)在當(dāng)前模型中雖取得一定成效,但其前提是人類能夠理解并評估AI的輸出,而一旦系統(tǒng)的智能超越人類理解范圍,這一前提將不復(fù)存在。
德克薩斯大學(xué)計(jì)算機(jī)科學(xué)教授亞倫森早于阿申布倫納加入OpenAI。他表示,阿申布倫納最令他印象深刻的是那種“立即行動”的天賦。亞倫森當(dāng)時(shí)正致力于為ChatGPT輸出內(nèi)容添加“水印”,以便更容易識別AI生成文本。他表示:“我已經(jīng)提出了一個(gè)方案,但這個(gè)想法一直懸而未決。而利奧波德立刻表示:‘是的,我們必須做這件事,我來負(fù)責(zé)推進(jìn)。’”
不過,也有人對他的印象截然不同,認(rèn)為他在政治敏感度上顯得笨拙,有時(shí)顯得傲慢。一位現(xiàn)任OpenAI研究員回憶道:“他從來不怕在會議上說話尖銳,或者惹惱上級,這令我震驚。” 另一位前OpenAI員工表示,自己第一次注意到阿申布倫納,是在一次公司全員會議上聽他發(fā)表演講。那場演講的主題,后來成為《態(tài)勢感知》中的核心觀點(diǎn)。他形容阿申布倫納“有點(diǎn)難以相處”。多位研究人員還提到一場假日派對。當(dāng)時(shí)在閑聊的過程中,阿申布倫納竟直接向時(shí)任Scale AI首席執(zhí)行官汪滔透露了OpenAI所擁有的GPU數(shù)量。其中一人表示,他“就那樣直接公開說了出來”。兩名消息人士告訴《財(cái)富》,他們親耳聽到了這番話。他們解釋稱,許多人對阿申布倫納如此隨意地談?wù)摌O為敏感的信息感到震驚。但汪滔和阿申布倫納均通過發(fā)言人否認(rèn)曾進(jìn)行過這樣的交流。
阿申布倫納的代表對《財(cái)富》雜志表示:“這一說法完全不屬實(shí)。利奧波德從未與汪滔討論過任何內(nèi)部信息。利奧波德經(jīng)常討論AI的擴(kuò)展趨勢,例如在《態(tài)勢感知》中所闡述的內(nèi)容,這些討論均基于公開資料和行業(yè)趨勢。”
2024年4月,OpenAI解雇了阿申布倫納,給出的官方理由是其“泄露內(nèi)部信息”(這與他被曝向汪滔透露GPU數(shù)量的傳聞無關(guān))。兩個(gè)月后,他在德瓦克什·帕特爾的播客節(jié)目中回應(yīng)稱,所謂的“泄密”其實(shí)是一份頭腦風(fēng)暴文件,內(nèi)容涉及“未來實(shí)現(xiàn)AGI過程中所需的準(zhǔn)備、安全與防護(hù)措施”。他將這份文件分享給了三位外部研究人員征求意見,而這種做法在當(dāng)時(shí)的OpenAI“完全正常”。他還表示,真正導(dǎo)致他被解雇的原因,是他此前撰寫的一份內(nèi)部備忘錄。在那份備忘錄中,他直言O(shè)penAI的安全體系“嚴(yán)重不足,難以防止外國行為者竊取模型權(quán)重或關(guān)鍵算法機(jī)密”。
據(jù)媒體報(bào)道,OpenAI通過發(fā)言人回應(yīng)稱,阿申布倫納在公司內(nèi)部(包括向董事會)提出的安全擔(dān)憂“并非導(dǎo)致其離職的原因”。該發(fā)言人還表示,公司“不認(rèn)同他后來關(guān)于OpenAI安全問題及其離職情況的諸多說法”。
無論如何,在阿申布倫納被解雇之際,OpenAI正經(jīng)歷更廣泛的動蕩:在數(shù)周內(nèi),由OpenAI聯(lián)合創(chuàng)始人兼首席科學(xué)家伊利亞·蘇茨克弗和AI研究員揚(yáng)·萊克領(lǐng)導(dǎo)、阿申布倫納曾任職的“超級對齊”團(tuán)隊(duì),因兩位負(fù)責(zé)人先后離職而宣告解散。
兩個(gè)月后,阿申布倫納發(fā)表了《態(tài)勢感知》,并推出了他的對沖基金。如此高效的行動,讓部分前同事猜測,他可能早在OpenAI任職期間就已開始為此布局。
回報(bào)與言論的較量
即便是持懷疑態(tài)度的人也承認(rèn),阿申布倫納成功抓住了當(dāng)下圍繞AGI的投資熱潮,確實(shí)得到了市場回報(bào),然而質(zhì)疑聲依然存在。一位如今創(chuàng)業(yè)的前OpenAI同事表示:“我實(shí)在想不出,誰會信任一個(gè)毫無基金管理經(jīng)驗(yàn)又如此年輕的人。除非我確信這只基金有非常嚴(yán)格的治理機(jī)制,否則我絕不會成為由一個(gè)年輕人操盤的基金的有限合伙人。”
也有人質(zhì)疑他以“AI恐懼”獲利的倫理問題。一位前OpenAI研究員表示:“許多人雖然認(rèn)同利奧波德的論點(diǎn),但并不贊成他通過渲染中美競爭或借助AGI熱潮來籌資,即使這種熱潮有其合理性。”另一位研究員則直言:“要么他現(xiàn)在已經(jīng)不再認(rèn)為(AI帶來的生存風(fēng)險(xiǎn))是什么大問題,要么他多少有些不夠真誠。”
一位“有效利他主義”社群的前策略師表示,該領(lǐng)域的許多人“對他感到不滿”,尤其反感他宣揚(yáng)存在一場“AGI競賽”,因?yàn)檫@種說法“最終會變成自我實(shí)現(xiàn)的預(yù)言”。盡管通過煽動“軍備競賽”概念獲利可以被合理化,畢竟有效利他主義者將為了未來捐贈而賺錢視為一種美德,但這位前策略師認(rèn)為“以利奧波德基金的體量來看,他已經(jīng)在切實(shí)地提供資本”,而這本身便具有更沉重的道德分量。
亞倫森指出,更深層的擔(dān)憂在于:阿申布倫納所傳遞的信息——即美國必須不惜一切代價(jià)加速發(fā)展AI以贏得科技競賽——恰恰在華盛頓找到了受眾。而此時(shí),像馬克·安德森、大衛(wèi)·薩克斯和邁克爾·克拉齊奧斯等“加速主義”代表人物的聲音正日益高漲。亞倫森表示:“即便利奧波德本人未必這樣認(rèn)為,他的文章也會被那些持此觀點(diǎn)的人所利用。”如果真是如此,那么他留下的最大遺產(chǎn)或許并非一家對沖基金,而是一個(gè)正在助推中美科技冷戰(zhàn)走向固化的更宏大的思想框架。
倘若這一判斷成真,阿申布倫納的真正影響力將不在于投資回報(bào),而在于話語塑造——他的思想如何從硅谷蔓延至華盛頓,進(jìn)而影響政策討論。這也凸顯出他故事的核心悖論:在一些人眼中,他是看清時(shí)代走向的天才;而在另一些人看來,他則是一個(gè)善于操弄敘事、將AI安全焦慮包裝成投資推介的“權(quán)謀人物”。無論哪種說法正確,如今已有數(shù)十億美元取決于他對AGI的賭局能否成功。(財(cái)富中文網(wǎng))
譯者:劉進(jìn)龍
審校:汪皓
Of all the unlikely stories to emerge from the current AI frenzy, few are more striking than that of Leopold Aschenbrenner.
The 23-year-old’s career didn’t exactly start auspiciously: He spent time at the philanthropy arm of Sam Bankman-Fried’s now-bankrupt FTX cryptocurrency exchange before a controversial year at OpenAI, where he was ultimately fired. Then, just two months after being booted out of the most influential company in AI, he penned an AI manifesto that went viral—President Trump’s daughter Ivanka even praised it on social media—and used it as a launching pad for a hedge fund that now manages more than $1.5 billion. That’s modest by hedge-fund standards but remarkable for someone barely out of college. Just four years after graduating from Columbia, Aschenbrenner is holding private discussions with tech CEOs, investors, and policymakers who treat him as a kind of prophet of the AI age.
It’s an astonishing ascent, one that has many asking not just how this German-born early-career AI researcher pulled it off, but whether the hype surrounding him matches the reality. To some, Aschenbrenner is a rare genius who saw the moment—the coming of humanlike artificial general intelligence, China’s accelerating AI race, and the vast fortunes awaiting those who move first—more clearly than anyone else. To others, including several former OpenAI colleagues, he’s a lucky novice with no finance track record, repackaging hype into a hedge fund pitch.
His meteoric rise captures how Silicon Valley converts zeitgeist into capital—and how that, in turn, can be parlayed into influence. While critics question whether launching a hedge fund was simply a way to turn dubious techno-prophecy into profit, friends like Anthropic researcher Sholto Douglas frame it differently—as a “theory of change.” Aschenbrenner is using the hedge fund to garner a credible voice in the financial ecosystem, Douglas explained: “He is saying, ‘I have an extremely high conviction [that this is] how the world is going to evolve, and I am literally putting my money where my mouth is.’”
But that also prompts the question: Why are so many willing to trust this newcomer?
The answer is complicated. In conversations with over a dozen friends, former colleagues, and acquaintances of Aschenbrenner, as well as investors and Silicon Valley insiders, one theme keeps surfacing: that Aschenbrenner has been able to seize ideas that have been gathering momentum across Silicon Valley’s labs and use them as ingredients for a coherent and convincing narrative that is like a blue plate special to investors with a healthy appetite for risk.
Aschenbrenner declined to comment for this story. A number of sources were granted anonymity owing to concerns over the potential consequences of speaking about people who wield considerable power and influence in AI circles.
Many spoke of Aschenbrenner with a mixture of admiration and wariness—“intense,” “scarily smart,” “brash,” “confident.” More than one described him as carrying the aura of a wunderkind, the kind of figure Silicon Valley has long been eager to anoint. Others, however, noted that his thinking wasn’t especially novel, just unusually well-packaged and well-timed. Yet, while critics dismiss him as more hype than insight, investors Fortune spoke with see him differently, crediting his essays and early portfolio bets with unusual foresight.
There is no doubt, however, that Aschenbrenner’s rise reflects a unique convergence: vast pools of global capital eager to ride the AI wave; a Valley enthralled by the prospect of achieving artificial general intelligence (AGI), or AI that matches or surpasses human intelligence; and a geopolitical backdrop that frames AI development as a technological arms race with China.
Sketching the future
Within certain corners of the AI world, Leopold Aschenbrenner was already familiar as someone who had written blog posts, essays, and research papers that circulated among AI safety circles, even before joining OpenAI. But for most people, he appeared seemingly overnight in June 2024. That’s when he self-published online a 165-page monograph called Situational Awareness: The Decade Ahead. The long essay borrowed for its title a phrase already familiar in AI circles, where “situational awareness” usually refers to models becoming aware of their own circumstances—a safety risk. But Aschenbrenner used it to mean something else entirely: the need for governments and investors to recognize how quickly AGI might arrive, and what was at stake if the U.S. fell behind.
In a sense, Aschenbrenner intended his manifesto to be the AI era’s equivalent of George Kennan’s “Long Telegram,” in which the American diplomat and Russia expert sought to awaken elite opinion in the U.S. to what he saw as the looming Soviet threat to Europe. In the introduction, Aschenbrenner sketched a future he claimed was visible only to a few hundred prescient people, “most of them in San Francisco and the AI labs.” Not surprisingly, he included himself among those with “situational awareness,” while the rest of the world had “not the faintest glimmer of what is about to hit them.” To most, AI looked like hype or, at best, another internet-scale shift. What he insisted he could see more clearly was that LLMs were improving at an exponential rate, scaling rapidly toward AGI, and then beyond to “superintelligence”—with geopolitical consequences and, for those who moved early, the chance to capture the biggest economic windfall of the century.
To drive the point home, he invoked the example of COVID in early 2020—arguing that only a few grasped the implications of a pandemic’s exponential spread, understood the scope of the coming economic shock, and profited by shorting before the crash. “All I could do is buy masks and short the market,” he wrote. Similarly, he emphasized that only a small circle today comprehends how quickly AGI is coming, and those who act early stand to capture historic gains. And once again, he cast himself among the prescient few.
But the core of Situational Awareness’s argument wasn’t the COVID parallel. It was the argument that the math itself—the scaling curves that suggested AI capabilities increased exponentially with the amount of data and compute thrown at the same basic algorithms—showed where things were headed.
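For readers unfamiliar with the term, a minimal sketch of what such scaling curves look like may help. The functional form below follows published empirical scaling-law studies (the Chinchilla-style fits); the exponents and constants are illustrative placeholders, not figures taken from Aschenbrenner’s essay:

\[
L(N, D) \;=\; E \;+\; \frac{A}{N^{\alpha}} \;+\; \frac{B}{D^{\beta}}, \qquad \alpha,\ \beta > 0
\]

Here \(N\) is the parameter count and \(D\) the amount of training data. Growing \(N\) and \(D\) together under a rising compute budget \(C\) drives the reducible part of the loss down roughly as a power law, \(L(C) - E \propto C^{-\gamma}\), which appears as a straight line on a log-log plot; Situational Awareness extrapolates that line forward.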
Douglas, now a tech lead on scaling reinforcement learning at Anthropic, is both a friend and former roommate of Aschenbrenner’s who had conversations with him about the monograph. He told Fortune that the essay crystallized what many AI researchers had felt. “If we believe that the trend line will continue, then we end up in some pretty wild places,” Douglas said. Unlike many who focused on the incremental progress of each successive model release, Aschenbrenner was willing to “really bet on the exponential,” he said.
An essay goes viral
Plenty of long, dense essays about AI risk and strategy circulate every year, most vanishing after brief debates in niche forums like LessWrong, a website founded by AI theorist and “doomer” extraordinaire Eliezer Yudkowsky that became a hub for rationalist and AI-safety ideas.
But Situational Awareness hit differently. Scott Aaronson, a computer science professor at UT Austin who spent two years at OpenAI overlapping with Aschenbrenner, remembered his initial reaction: “Oh man, another one.” But after reading, he told Fortune: “I had the sense that this is actually the document some general or national security person is going to read and say: ‘This requires action.’” In a blog post, he called the essay “one of the most extraordinary documents I’ve ever read,” saying Aschenbrenner “makes a case that, even after ChatGPT and all that followed it, the world still hasn’t come close to ‘pricing in’ what’s about to hit it.”
A longtime AI governance expert described the essays as “a big achievement,” but emphasized that the ideas were not new: “He basically took what was already common wisdom inside frontier AI labs and wrote it up in a very nicely packaged, compelling, easy-to-consume way.” The result was to make insider thinking legible to a much broader audience at a fever-pitch moment in the AI conversation.
Among AI safety researchers, who worry primarily about the ways in which AI might pose an existential risk to humanity, the essays were more divisive. For many, Aschenbrenner’s work felt like a betrayal, particularly because he had come out of those very circles. They felt their arguments urging caution and regulation had been repurposed into a sales pitch to investors. “Some people who are very worried about [existential risks] quite dislike Leopold now because of what he’s done—they basically think he sold out,” said one former OpenAI governance researcher. Others agreed with most of his predictions and saw value in amplifying them.
Still, even critics conceded his knack for packaging and marketing. “He’s very good at understanding the zeitgeist—what people are interested in and what could go viral,” said another former OpenAI researcher. “That’s his superpower. He knew how to capture the attention of powerful people by articulating a narrative very favorable to the mood of the moment: that the U.S. needed to beat China, that we needed to take AI security more seriously. Even if the details were wrong, the timing was perfect.”
That timing made the essays unavoidable. Tech founders and investors shared Situational Awareness with the sort of urgency usually reserved for hot term sheets, while policymakers and national security officials circulated it like the juiciest classified NSA assessment.
As one current OpenAI staffer put it, Aschenbrenner’s skill is “knowing where the puck is skating.”
A sweeping narrative paired with an investment vehicle
At the same time as the essays were released, Aschenbrenner launched Situational Awareness LP, a hedge fund built around the theme of AGI, with its bets placed in publicly traded companies rather than private startups.
The fund was seeded by Silicon Valley heavyweights like investor and current Meta AI product lead Nat Friedman—Aschenbrenner reportedly connected with him after Friedman read one of his blog posts in 2023—as well as Friedman’s investing partner Daniel Gross, and Patrick and John Collison, Stripe’s cofounders. Patrick Collison reportedly met Aschenbrenner at a 2021 dinner set up by a connection “to discuss their shared interests.” Aschenbrenner also brought on Carl Shulman—a 45-year-old AI forecaster and governance researcher with deep ties in the AI safety field and a past stint at Peter Thiel’s Clarium Capital—to be the new hedge fund’s director of research.
In a four-hour podcast with Dwarkesh Patel tied to the launch, Aschenbrenner touted the explosive growth he expects once AGI arrives, saying, “The decade after is also going to be wild,” in which “capital will really matter.” If done right, he said, “there’s a lot of money to be made. If AGI were priced in tomorrow, you could maybe make 100x.”
Together, the manifesto and the fund reinforced each other: Here was a book-length investment thesis paired with a prognosticator with so much conviction he was willing to put serious money on the line. It proved an irresistible combination to a certain kind of investor. One former OpenAI researcher said Friedman is known for “zeitgeist hacking”—backing people who could capture the mood of the moment and amplify it into influence. Supporting Aschenbrenner fit that playbook perfectly.
Situational Awareness’s strategy is straightforward: It bets on global stocks likely to benefit from AI—semiconductors, infrastructure, and power companies—offset by shorts on industries that could lag behind. Public filings reveal part of the portfolio: A June SEC filing showed stakes in U.S. companies including Intel, Broadcom, Vistra, and former Bitcoin-miner Core Scientific (which CoreWeave announced it would acquire in July), all seen as beneficiaries of the AI build-out. So far, it has paid off: The fund quickly swelled to over $1.5 billion in assets and delivered 47% gains, after fees, in the first half of this year.
According to a spokesperson, Situational Awareness LP has global investors, including West Coast founders, family offices, institutions, and endowments. In addition, the spokesperson said, Aschenbrenner “has almost all of his net worth invested in the fund.”
To be sure, any picture of a U.S. hedge fund’s holdings is incomplete. The publicly available 13F filings only cover long positions in U.S.-listed stocks—shorts, derivatives, and international investments aren’t disclosed—adding an inevitable layer of mystery around what the fund is really betting on. Still, some observers have questioned whether Aschenbrenner’s early results reflect skill or fortunate timing. For example, his fund disclosed roughly $459 million in Intel call options in its first-quarter filing—positions that later looked prescient when Intel’s shares climbed over the summer following a federal investment and a subsequent $5 billion stake from Nvidia.
But at least some experienced financial industry professionals have come to view him differently. Veteran hedge fund investor Graham Duncan, who invested personally in Situational Awareness LP and now serves as an advisor to the fund, said he was struck by Aschenbrenner’s combination of insider perspective and bold investment strategy. “I found his paper provocative,” Duncan said, adding that Aschenbrenner and Shulman weren’t outsiders scanning opportunities but insiders building an investment vehicle around their view. The fund’s thesis reminded him of the few contrarians who spotted the subprime collapse before it hit—people like Michael Burry, whom Michael Lewis made famous in his book The Big Short. “If you want to have variant perception, it helps to be a little variant.”
He pointed to Situational Awareness’s reaction to Chinese startup DeepSeek’s January release of its R1 open-source LLM, which many dubbed a “Sputnik moment” that showcased China’s rising AI capabilities despite limited funding and export controls. While most investors panicked, he said Aschenbrenner and Shulman had already been tracking it and saw the selloff as an overreaction. They bought instead of sold, and even a major tech fund reportedly held back from dumping shares after an analyst said, “Leopold says it’s fine.” That moment, Duncan said, cemented Aschenbrenner’s credibility—though Duncan acknowledged, “He could yet be proven wrong.”
Another investor in Situational Awareness LP, who manages a leading hedge fund, told Fortune that he was struck by Aschenbrenner’s answer when asked why he was starting a hedge fund focused on AI rather than a VC fund, which seemed like the most obvious choice.
“He said that AGI was going to be so impactful to the global economy that the only way to fully capitalize on it was to express investment ideas in the most liquid markets in the world,” he said. “I am a bit stunned by how fast they have come up the learning curve … They are way more sophisticated on AI investing than anyone else I speak to in the public markets.”
A Columbia ‘whiz kid’ who went on to FTX and OpenAI
Aschenbrenner, born in Germany to two doctors, enrolled at Columbia when he was just 15 and graduated valedictorian at 19. The longtime AI governance researcher, who described herself as an acquaintance of Aschenbrenner’s, recalled that she first heard of him when he was still an undergraduate.
“I heard about him as, ‘Oh, we heard about this Leopold Aschenbrenner kid, he seems like a sharp guy,’” she said. “The vibe was very much a whiz kid sort of thing.”
That wunderkind reputation only deepened. At 17, Aschenbrenner won a grant from economist Tyler Cowen’s Emergent Ventures, and Cowen called him an “economics prodigy.” While still at Columbia, Aschenbrenner also interned at the Global Priorities Institute, coauthoring a paper with economist Philip Trammell, and contributed essays to Works in Progress, a Stripe-funded publication that gave him another foothold in the tech-intellectual world.
He was already embedded in the Effective Altruism community—a controversial philosophy-driven movement influential in AI safety circles—and cofounded Columbia’s EA chapter. That network eventually led him to a job at the FTX Future Fund, a charity founded by cryptocurrency exchange founder Sam Bankman-Fried. Bankman-Fried was another EA adherent who donated hundreds of millions of dollars to causes, including AI governance research, that aligned with EA’s philanthropic priorities.
The FTX Future Fund was designed to support EA-aligned philanthropic priorities, although it was later found to have used money from Bankman-Fried’s FTX cryptocurrency exchange that was essentially looted from account holders. (There is no evidence that anyone who worked at the FTX Future Fund knew the money was stolen or did anything illegal.)
At the FTX Future Fund, Aschenbrenner worked with a small team that included William MacAskill, a cofounder of Effective Altruism, and Avital Balwit—now chief of staff to Anthropic CEO Dario Amodei and, according to a Situational Awareness LP spokesperson, currently engaged to Aschenbrenner. Balwit wrote in a June 2024 essay that “these next five years might be the last few years that I work,” because AGI might “end employment as I know it”—a striking mirror image of Aschenbrenner’s conviction that the same technology will make his investors rich.
But when Bankman-Fried’s FTX empire collapsed in November 2022, the Future Fund philanthropic effort imploded. “We were a tiny team, and then from one day to the next, it was all gone and associated with a giant fraud,” Aschenbrenner told Dwarkesh Patel. “That was incredibly tough.”
Just months after FTX collapsed, however, Aschenbrenner reemerged—at OpenAI. He joined the company’s newly launched “superalignment” team in 2023, created to tackle a problem no one yet knows how to solve: how to steer and control future AI systems that would be far smarter than any human being, and perhaps smarter than all of humanity put together. Existing methods like reinforcement learning from human feedback (RLHF) had proven somewhat effective for today’s models, but they depend on humans being able to evaluate outputs—something which might not be possible if systems surpassed human comprehension.
Aaronson, the UT computer science professor, joined OpenAI before Aschenbrenner and said what impressed him was Aschenbrenner’s instinct to act. Aaronson had been working on watermarking ChatGPT outputs to make AI-generated text easier to identify. “I had a proposal for how to do that, but the idea was just sort of languishing,” he said. “Leopold immediately started saying, ‘Yes, we should be doing this, I’m going to take responsibility for pushing it.’”
Others remembered him differently, as politically clumsy and sometimes arrogant. “He was never afraid to be astringent at meetings or piss off the higher-ups, to a degree I found alarming,” said one current OpenAI researcher. A former OpenAI staffer, who said they first became aware of Aschenbrenner when he gave a talk at a company all-hands meeting that previewed themes he would later publish in Situational Awareness, recalled him as “a bit abrasive.” Multiple researchers also described a holiday party where, in a casual group discussion, Aschenbrenner told then Scale AI CEO Alexandr Wang how many GPUs OpenAI had—“just straight out in the open,” as one put it. Two people told Fortune they had directly overheard the remark. A number of people were taken aback, they explained, at how casually Aschenbrenner shared something so sensitive. Through spokespeople, both Wang and Aschenbrenner denied that the exchange occurred.
“This account is entirely false,” a representative of Aschenbrenner told Fortune. “Leopold never discussed private information with Alex. Leopold often discusses AI scaling trends such as in Situational Awareness, based on public information and industry trends.”
In April 2024, OpenAI fired Aschenbrenner, officially citing the leaking of internal information (the incident was not related to the alleged GPU remarks to Wang). On the Dwarkesh podcast two months later, Aschenbrenner maintained the “l(fā)eak” was “a brainstorming document on preparedness, safety, and security measures needed in the future on the path to AGI” that he shared with three external researchers for feedback—something he said was “totally normal” at OpenAI at the time. He argued that an earlier memo in which he said OpenAI’s security was “egregiously insufficient to protect against the theft of model weights or key algorithmic secrets from foreign actors” was the real reason for his dismissal.
According to news reports, OpenAI did respond, via a spokesperson, that the concerns about security that he raised internally (including to the board) “did not lead to his separation.” The spokesperson also said they “disagree with many of the claims he has since made” about OpenAI’s security and the circumstances of his departure.
Either way, Aschenbrenner’s ouster came amid broader turmoil: Within weeks, OpenAI’s “superalignment” team—led by OpenAI’s cofounder and chief scientist Ilya Sutskever and AI researcher Jan Leike, and where Aschenbrenner had worked—dissolved after both leaders departed from the company.
Two months later, Aschenbrenner published Situational Awareness and unveiled his hedge fund. The speed of the rollout prompted speculation among some former colleagues that he had been laying the groundwork while still at OpenAI.
Returns vs. rhetoric
Even skeptics acknowledge the market has rewarded Aschenbrenner for channeling today’s AGI hype, but still, doubts linger. “I can’t think of anybody that would trust somebody that young with no prior fund management [experience],” said a former OpenAI colleague who is now a founder. “I would not be an LP in a fund drawn by a child unless I felt there was really strong governance in place.”
Others question the ethics of profiting from AI fears. “Many agree with Leopold’s arguments, but disapprove of stoking the U.S.-China race or raising money based off AGI hype, even if the hype is justified,” said one former OpenAI researcher. “Either he no longer thinks that [the existential risk from AI] is a big deal or he is arguably being disingenuous,” said another.
One former strategist within the Effective Altruism community said many in that world “are annoyed with him,” particularly for promoting the narrative that there’s a “race to AGI” that “becomes a self-fulfilling prophecy.” While profiting from stoking the idea of an arms race can be rationalized—since Effective Altruists often view making money for the purpose of then giving it away as virtuous—the former strategist argued that “at the level of Leopold’s fund, you’re meaningfully providing capital,” and that carries more moral weight.
The deeper worry, said Aaronson, is that Aschenbrenner’s message—that the U.S. must accelerate the pace of AI development at all costs in order to beat China—has landed in Washington at a moment when accelerationist voices like Marc Andreessen, David Sacks, and Michael Kratsios are ascendant. “Even if Leopold doesn’t believe that, his essay will be used by people who do,” Aaronson said. If so, his biggest legacy may not be a hedge fund, but a broader intellectual framework that is helping to cement a technological Cold War between the U.S. and China.
If that proves true, Aschenbrenner’s real impact may be less about returns and more about rhetoric—the way his ideas have rippled from Silicon Valley into Washington. It underscores the paradox at the center of his story: To some, he’s a genius who saw the moment more clearly than anyone else. To others, he’s a Machiavellian figure who repackaged insider safety worries into an investor pitch. Either way, billions are now riding on whether his bet on AGI delivers.