
Once upon a time—meaning, um, as recently as earlier this year—Silicon Valley couldn’t stop talking about AGI.
OpenAI CEO Sam Altman wrote in January: “We are now confident we know how to build AGI.” This is after he told a Y Combinator vodcast in late 2024 that AGI might be achieved in 2025 and tweeted in 2024 that OpenAI had “AGI achieved internally.” OpenAI was so AGI-entranced that its head of sales dubbed her team “AGI Sherpas” and its former chief scientist Ilya Sutskever led his fellow researchers in campfire chants of “Feel the AGI!”
OpenAI’s partner and major financial backer Microsoft put out a paper in 2024 claiming OpenAI’s GPT-4 AI model exhibited “sparks of AGI.” Meanwhile, Elon Musk founded xAI in March 2023 with a mission to build AGI, a development he said might occur as soon as 2025 or 2026. Demis Hassabis, the Nobel-laureate cofounder of Google DeepMind, told reporters that the world was “on the cusp” of AGI. Meta CEO Mark Zuckerberg said his company was committed to “building full general intelligence” to power the next generation of its products and services. Dario Amodei, cofounder and CEO of Anthropic, while noting he disliked the term “AGI,” said “powerful AI” could arrive by 2027 and usher in a new age of health and abundance—if it didn’t wind up killing us all. Eric Schmidt, the former Google CEO turned prominent tech investor, said in a talk in April that we would have AGI “within three to five years.”
Now the AGI fever is breaking—in what amounts to a wholesale vibe shift toward pragmatism as opposed to chasing utopian visions. For example, at a CNBC appearance this summer, Altman called AGI “not a super-useful term.” In the New York Times, Schmidt—yes, the same Schmidt who was talking up AGI in April—urged Silicon Valley to stop fixating on superhuman AI, warning that the obsession distracts from building useful technology. Both AI pioneer Andrew Ng and U.S. AI czar David Sacks called AGI “overhyped.”
AGI: Under-defined and overhyped
What happened? Well, first, a little background. Everyone agrees that AGI stands for “artificial general intelligence.” And that’s pretty much the only thing everyone agrees upon. People define the term in subtly but importantly different ways. Among the first to use the term was physicist Mark Avrum Gubrud, who in a 1997 research article wrote that “by advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate, and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed.”
The term was later picked up and popularized by AI researcher Shane Legg—who would go on to cofound Google DeepMind with Hassabis—and fellow computer scientists Ben Goertzel and Peter Voss in the early 2000s. They defined AGI, according to Voss, as an AI system that could learn to “reliably perform any cognitive task that a competent human can.” That definition had some problems—for instance, who decides who qualifies as a competent human? And, since then, other AI researchers have developed different definitions that see AGI as AI that is as capable as any human expert at all tasks, as opposed to merely a “competent” person. OpenAI was founded in late 2015 with the explicit mission of developing AGI “for the benefit of all,” and it added its own twist to the AGI definition debate. The company’s charter says AGI is an autonomous system that can “outperform humans at most economically valuable work.”
But whatever AGI is, the important thing these days, it seems, is not to talk about it. And the reason why has to do with growing concerns that progress in AI development may not be galloping ahead as fast as industry insiders touted just a few months ago—and growing indications that all the AGI talk was stoking inflated expectations that the tech itself couldn’t live up to.
One of the biggest factors in AGI’s sudden fall from grace seems to have been the rollout of OpenAI’s GPT-5 model in early August. Just over two years after Microsoft’s claim that GPT-4 showed “sparks” of AGI, the new model landed with a thud: incremental improvements wrapped in a routing architecture, not the breakthrough many expected. Goertzel, who helped coin the term AGI, reminded the public that while GPT-5 is impressive, it remains nowhere near true AGI—lacking real understanding, continuous learning, or grounded experience.
Altman’s retreat from AGI language is especially striking given his prior position. OpenAI was built on AGI hype: AGI is in the company’s founding mission, it helped raise billions in capital, and it underpins the partnership with Microsoft. A clause in their agreement even states that if OpenAI’s nonprofit board declares it has achieved AGI, Microsoft’s access to future technology would be restricted. Microsoft—after investing more than $13 billion—is reportedly pushing to remove that clause, and has even considered walking away from the deal. Wired also reported on an internal OpenAI debate over whether publishing a paper on measuring AI progress could complicate the company’s ability to declare it had achieved AGI.
A ‘very healthy’ vibe shift
Whether observers think the vibe shift is a marketing move or a market response, many, particularly on the corporate side, say it is a good thing. Shay Boloor, chief market strategist at Futurum Equities, called the move “very healthy,” noting that markets reward execution, not vague “someday superintelligence” narratives.
Others stress that the real shift is away from a monolithic AGI fantasy, toward domain-specific “superintelligences.” Daniel Saks, CEO of agentic AI company Landbase, argued that “the hype cycle around AGI has always rested on the idea of a single, centralized AI that becomes all-knowing,” but said that is not what he sees happening. “The future lies in decentralized, domain-specific models that achieve superhuman performance in particular fields,” he told Fortune.
Christopher Symons, chief AI scientist at digital health platform Lirio, said that the term AGI was never useful: Those promoting AGI, he explained, “draw resources away from more concrete applications where AI advancements can most immediately benefit society.”
Still, the retreat from AGI rhetoric doesn’t mean the mission—or the phrase—has vanished. Anthropic and DeepMind executives continue to call themselves “AGI-pilled,” which is a bit of insider slang. Even that phrase is disputed, though; for some it refers to the belief that AGI is imminent, while others say it’s simply the belief that AI models will continue to improve. But there is no doubt that there is more hedging and downplaying than doubling down.
Some still call out urgent risks
And for some, that hedging is exactly what makes the risks more urgent. Former OpenAI researcher Steven Adler told Fortune: “We shouldn’t lose sight that some AI companies are explicitly aiming to build systems smarter than any human. AI isn’t there yet, but whatever you call this, it’s dangerous and demands real seriousness.”
Others accuse AI leaders of changing their tune on AGI to muddy the waters in a bid to avoid regulation. Max Tegmark, president of the Future of Life Institute, says Altman calling AGI “not a useful term” isn’t scientific humility, but a way for the company to steer clear of regulation while continuing to build toward more and more powerful models.
“It’s smarter for them to just talk about AGI in private with their investors,” he told Fortune, adding that “it’s like a cocaine salesman saying that it’s unclear whether cocaine is really a drug,” because it’s just so complex and difficult to decipher.
Call it AGI or call it something else—the hype may fade and the vibe may shift, but with so much on the line, from money and jobs to security and safety, the real questions about where this race leads are only just beginning.