
Google X’s former chief business officer Mo Gawdat says the notion AI will create jobs is “100% crap,” and even warns that “incompetent CEOs” are on the chopping block. The tech guru predicts that AGI will be better at everything than most humans—echoing the likes of Google DeepMind CEO Demis Hassabis and OpenAI chief Sam Altman. Only the best workers in their fields will keep their jobs “for a while,” and even “evil” government leaders might be replaced by the robots.
Tech titans keep insisting that AI will usher in a “golden era” of humanity, where all illness is cured, people live in abundance, and workers have “superhuman” powers. But a former Google executive has slammed the notion that the technology won’t be a job-killer and will actually create new work for humans.
“My belief is it is 100% crap,” Mo Gawdat, the former chief business officer for Google X, recently said on The Diary of a CEO podcast. “The best at any job will remain. The best software developer, the one that really knows architecture, knows technology, and so on will stay—for a while.”
Gawdat has joined the cohort of leaders waving the red flag that AI will commence a jobs armageddon within the next 5 to 15 years. Companies including Duolingo, Workday, and Klarna have already laid off staffers in droves or stopped hiring humans altogether to get ready for an AI-centric workforce.
But executives shouldn’t celebrate their efficiency gains too soon—their role is also on the chopping block, Gawdat, who worked in tech for 30 years and now writes books on AI development, cautioned.
“CEOs are celebrating that they can now get rid of people and have productivity gains and cost reductions because AI can do that job. The one thing they don’t think of is AI will replace them too,” Gawdat continued. “AGI is going to be better at everything than humans, including being a CEO. You really have to imagine that there will be a time where most incompetent CEOs will be replaced.”
While the vision of human-less companies solely run by robots is incredibly dystopian, the ex-Google executive isn’t afraid of what lies ahead. The 58-year-old doesn’t see AI being the perpetrator of job loss—money-hungry CEOs are actually to blame for letting the technology take over in the pursuit of financial gain, he claimed.
“There’s absolutely nothing wrong with AI—there’s a lot wrong with the value set of humanity at the age of the rise of the machines,” Gawdat said. “And the biggest value set of humanity is capitalism today. And capitalism is all about what? Labor arbitrage.”
Fortune reached out to Gawdat for comment.
For humans to thrive, ‘evil’ world leaders need to be replaced by AI
AI is already outpacing humans when it comes to some abilities—it can code, resolve customer requests, handle administrative work, and even analyze market figures. There’s no telling where its future capabilities lie.
Tech leaders like Google DeepMind CEO Demis Hassabis and OpenAI chief Sam Altman are adamant it’ll outpace even the most powerful people by 2030. And that may be a good thing for humanity: For humans to thrive in this new era, immoral corporate executives and world leaders alike need to be replaced by AI, Gawdat advised.
He said that since harmful leaders will use the tech to “magnify the evil that man can do,” technology will make for more moral world leaders—and that this dystopian scenario of AI-enabled politicians is “unavoidable.”
“The only way for us to get to a better place is for the evil people at the top to be replaced with AI,” Gawdat continued on the podcast. “[World leaders] will have to replace themselves [with] AI. Otherwise, they lose their advantage.”
Gawdat isn’t the only one sounding alarm bells over AI’s impact on humanity’s future. Altman and Google chief Sundar Pichai have both expressed a need for AI regulation—whether that be “major governments” drawing a line in the sand, or creating a high-level governance body to oversee potential harm.
“We are likely to eventually need something like an IAEA for superintelligence efforts,” Altman wrote in a 2023 blog post, adding that AI projects should have to answer to an “international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security.”