
The wave of AI skepticism is vindicating one researcher's years of warnings

          Nick Lichtenberg
          2025-08-29

The disappointment of GPT-5 was a key inflection point, but not the only warning sign.


Gary Marcus. Image credit: Ramsey Cardy/Web Summit via Sportsfile via Getty Images


Translator: Liu Jinlong

Proofreader: Wang Hao


          First it was the release of GPT-5 that OpenAI “totally screwed up,” according to Sam Altman. Then Altman followed that up by saying the B-word at a dinner with reporters. “When bubbles happen, smart people get overexcited about a kernel of truth,” The Verge reported on comments by the OpenAI CEO. Then it was the sweeping MIT survey that put a number on what so many people seem to be feeling: a whopping 95% of generative AI pilots at companies are failing.

          A tech sell-off ensued, as rattled investors sent the value of the S&P 500 down by $1 trillion. Given the increasing dominance of that index by tech stocks that have largely transformed into AI stocks, it was a sign of nerves that the AI boom was turning into dotcom bubble 2.0. To be sure, fears about the AI trade aren’t the only factor moving markets: the S&P 500 snapped a five-day losing streak on Friday after Jerome Powell’s quasi-dovish comments at Jackson Hole, Wyoming, when even a hint of openness from the Fed chair toward a September rate cut set markets on a tear.

          Gary Marcus has been warning of the limits of large language models (LLMs) since 2019 and warning of a potential bubble and problematic economics since 2023. His words carry a particularly distinctive weight. The cognitive scientist turned longtime AI researcher has been active in the machine learning space since 2015, when he founded Geometric Intelligence. That company was acquired by Uber in 2016, and Marcus left shortly afterward, working at other AI startups while offering vocal criticism of what he sees as dead-ends in the AI space.

          Still, Marcus doesn’t see himself as a “Cassandra,” and he’s not trying to be, he told Fortune in an interview. Cassandra, a figure from Greek tragedy, was a character who uttered accurate prophecies but wasn’t believed until it was too late. “I see myself as a realist and as someone who foresaw the problems and was correct about them.”

          Marcus attributes the wobble in markets to GPT-5 above all. It’s not a failure, he said, but it’s “underwhelming,” a “disappointment,” and that’s “really woken a lot of people up. You know, GPT-5 was sold, basically, as AGI, and it just isn’t,” he added, referencing artificial general intelligence, a hypothetical AI with human-like reasoning abilities. “It’s not a terrible model, it’s not like it’s bad,” he said, but “it’s not the quantum leap that a lot of people were led to expect.”

          Marcus said this shouldn’t be news to anyone paying attention, as he argued in 2022 that “deep learning is hitting a wall.” To be sure, Marcus has been wondering openly on his Substack about when the generative AI bubble will deflate. He told Fortune that “crowd psychology” is definitely taking place, and he thinks every day about the John Maynard Keynes quote: “The market can stay irrational longer than you can stay solvent,” or Looney Tunes’ Wile E. Coyote following the Road Runner off the edge of a cliff and hanging in midair, before falling down to Earth.

          “That’s what I feel like,” Marcus says. “We are off the cliff. This does not make sense. And we get some signs from the last few days that people are finally noticing.”

          Building warning signs

          The bubble talk began heating up in July, when Apollo Global Management’s chief economist, Torsten Slok, widely read and influential on Wall Street, issued a striking calculation while falling short of declaring a bubble. “The difference between the IT bubble in the 1990s and the AI bubble today is that the top 10 companies in the S&P 500 today are more overvalued than they were in the 1990s,” he wrote, warning that the forward P/E ratios and staggering market capitalizations of companies such as Nvidia, Microsoft, Apple, and Meta had “become detached from their earnings.”

          In the weeks since, the disappointment of GPT-5 was an important development, but not the only one. Another warning sign is the massive amount of spending on data centers to support all the theoretical future demand for AI use. Slok has tackled this subject as well, finding that data center investments’ contribution to GDP growth has been the same as consumer spending over the first half of 2025, which is notable since consumer spending makes up 70% of GDP. (The Wall Street Journal’s Christopher Mims had offered a similar calculation weeks earlier.) Finally, on August 19, former Google CEO Eric Schmidt co-authored a widely discussed New York Times op-ed arguing that “it is uncertain how soon artificial general intelligence can be achieved.”

          This is a significant about-face, according to political scientist Henry Farrell, who argued in the Financial Times in January that Schmidt was a key voice shaping the “New Washington Consensus,” predicated in part on AGI being “right around the corner.” On his Substack, Farrell said Schmidt’s op-ed shows that his prior set of assumptions is “visibly crumbling away,” while caveating that he had been relying on informal conversations with people he knew at the intersection of D.C. foreign policy and tech policy. Farrell’s title for that post: “The twilight of tech unilateralism.” He concluded: “If the AGI bet is a bad one, then much of the rationale for this consensus falls apart. And that is the conclusion that Eric Schmidt seems to be coming to.”

          Finally, the vibe is shifting in the summer of 2025 into a mounting AI backlash. Darrell West warned in Brookings in May that the tide of both public and scientific opinion would soon turn against AI’s masters of the universe. Soon after, Fast Company predicted the summer would be full of “AI slop.” By early August, Axios had identified the slang “clunker” being applied widely to AI mishaps, particularly in customer service gone awry.

          History says: short-term pain, long-term gain

          John Thornhill of the Financial Times offered some perspective on the bubble question, advising readers to brace themselves for a crash, but to prepare for a future “golden age” of AI nonetheless. He highlights the data center buildout—a staggering $750 billion investment from Big Tech over 2024 and 2025, and part of a global rollout projected to hit $3 trillion by 2029. Thornhill turns to financial historians for some comfort and some perspective. Over and over, that history shows that this type of frenzied investment typically triggers bubbles, dramatic crashes, and creative destruction—but that durable value is eventually realized.

          He notes that Carlota Perez documented this pattern in Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages. She identified AI as the fifth technological revolution to follow the pattern begun in the late 18th century, as a result of which the modern economy now has railroad infrastructure and personal computers, among other things. Each one had a bubble and a crash at some point. Thornhill didn’t cite him in this particular column, but Edward Chancellor documented similar patterns in his classic Devil Take The Hindmost, a book notable not just for its discussions of bubbles but for predicting the dotcom bubble before it happened.

          Owen Lamont of Acadian Asset Management cited Chancellor in November 2024, when he argued that a key bubble moment had been passed: an unusually large number of market participants saying that prices are too high, but insisting that they’re likely to rise further.

          Wall Street is cautious, but not calling a bubble

          Wall Street banks are largely not calling for a bubble. Morgan Stanley released a note recently seeing huge efficiencies ahead for companies as a result of AI: $920 billion per year for the S&P 500. UBS, for its part, concurred with the caution flagged in the news-making MIT research. It warned investors to expect a period of “capex indigestion” accompanying the data center buildout, but it also maintained that AI adoption is expanding far beyond expectations, citing growing monetization from OpenAI’s ChatGPT, Alphabet’s Gemini, and AI-powered CRM systems.

          Bank of America Research wrote a note in early August, before the launch of GPT-5, seeing AI as part of a worker productivity “sea change” that will drive an ongoing “innovation premium” for S&P 500 firms. Head of U.S. Equity Strategy Savita Subramanian essentially argued that the inflation wave of the 2020s taught companies to do more with less, to turn people into processes, and that AI will turbo-charge this. “I don’t think it’s necessarily a bubble in the S&P 500,” she told Fortune in an interview, before adding, “I think there are other areas where it’s becoming a little bit bubble-like.”

          Subramanian mentioned smaller companies and potentially private lending as areas “that potentially have re-rated too aggressively.” She’s also concerned about the risk of companies diving too heavily into data centers, noting that this represents a shift back toward an asset-heavier approach, instead of the asset-light approach that increasingly distinguishes top performance in the U.S. economy.

          “I mean, this is new,” she said. “Tech used to be very asset-light and just spent money on R&D and innovation, and now they’re spending money to build out these data centers,” adding that she sees it as potentially marking the end of their asset-light, high-margin existence and basically transforming them into “very asset-intensive and more manufacturing-like than they used to be.” From her perspective, that warrants a lower multiple in the stock market. When asked if that is tantamount to a bubble, if not a correction, she said “it’s starting to happen in places,” and she agrees with the comparison to the railroad boom.

          The math and the ghost in the machine

          Gary Marcus also cited the fundamentals of math as a reason that he’s concerned, with nearly 500 AI unicorns being valued at $2.7 trillion. “That just doesn’t make sense relative to how much revenue is coming [in],” he said. Marcus cited OpenAI reporting $1 billion in revenue in July, but still not being profitable. Speculating, he extrapolated that to OpenAI having roughly half the AI market, and offered a rough calculation that it means about $25 billion a year of revenue for the sector, “which is not nothing, but it costs a lot of money to do this, and there’s trillions of dollars [invested].”
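Marcus's back-of-envelope figure can be reproduced in a few lines. One assumption is needed that his remarks leave implicit: treating the reported $1 billion July revenue as a monthly run rate, which roughly recovers his ~$25 billion sector estimate. This is a sketch of the arithmetic, not his exact calculation.

```python
# Rough sketch of Marcus's back-of-envelope math (assumed figures).
openai_july_revenue = 1e9        # reported July revenue, assumed to be a monthly run rate
openai_annualized = openai_july_revenue * 12
openai_market_share = 0.5        # Marcus's speculation: OpenAI is roughly half the AI market
sector_annual_revenue = openai_annualized / openai_market_share  # ~$24B, near his ~$25B
unicorn_valuations = 2.7e12      # combined valuation of the ~500 AI unicorns he cites
multiple = unicorn_valuations / sector_annual_revenue
print(f"Implied sector revenue: ${sector_annual_revenue / 1e9:.0f}B/year")
print(f"Unicorn valuations as a multiple of that revenue: {multiple:.1f}x")
```

The point of the sketch is the mismatch Marcus describes: even a generous annualization leaves sector revenue roughly two orders of magnitude below the trillions of dollars already committed.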

          So if Marcus is correct, why haven’t people been listening to him for years? He said he’s been warning people about this for years, too, calling it the “gullibility gap” in his 2019 book Rebooting AI and arguing in The New Yorker in 2012 that deep learning was a ladder that wouldn’t reach the moon. For the first 25 years of his career, Marcus trained and practiced as a cognitive scientist, and learned about the “anthropomorphization people do. … [they] look at these machines and make the mistake of attributing to them an intelligence that is not really there, a humanness that is not really there, and they wind up using them as a companion, and they wind up thinking that they’re closer to solving these problems than they actually are.” He said he thinks the bubble inflating to its current extent is in large part because of the human impulse to project ourselves onto things, something a cognitive scientist is trained not to do.

          These machines might seem like they’re human, but “they don’t actually work like you,” Marcus said, adding, “this entire market has been based on people not understanding that, imagining that scaling was going to solve all of this, because they don’t really understand the problem. I mean, it’s almost tragic.”

          Subramanian, for her part, said she thinks “people love this AI technology because it feels like sorcery. It feels a little magical and mystical … the truth is it hasn’t really changed the world that much yet, but I don’t think it’s something to be dismissed.” She’s also become really taken with it herself. “I’m already using ChatGPT more than my kids are. I mean, it’s kind of interesting to see this. I use ChatGPT for everything now.”

          The intellectual property rights in content published on Fortune China are exclusively owned or held by Fortune Media IP Limited and/or the relevant rights holders. Unauthorized reproduction, excerpting, copying, or mirroring is prohibited.