
The birth of ‘gunpowder warfare’ can be traced back to the 15th century and the invention of the matchlock gun, the first mechanical firing device. Now drone swarms attack across borders with impunity. In 1685, Giovanni Borelli, the Italian physicist, foresaw a world where machines driven by pulleys could ape the actions of animals. Elon Musk now talks of robots intelligent enough to do the shopping and take the place of surgeons.
Technological development is both immediate and anchored in history, both Everything Everywhere All at Once and Slow Horses. The fast/slow contrast is embedded in the artwork Calculating Empires, a 24-meter-long mural on display at the Design Museum in Barcelona. It visualizes the journey from the printing press to deepfakes, and from the quipu, an ancient Peruvian calculator made of knotted ropes, to ‘planetary scale’ data systems.
“What I find really interesting is, when people go into this installation, it helps you put this moment in perspective,” Kate Crawford told the Mobile World Congress in Barcelona in March. Crawford, artificial intelligence research professor at the University of Southern California, is the co-creator of the mural, which took four years to fabricate. Created with the visual artist Vladan Joler, the work urges us all to consider who is making the rules and deciding what matters when it comes to fundamental technology shifts.
“People feel like we’re living in this technological presentism and crazy amount of change,” Crawford said. “So, the ability to step back and say, ‘what have we learned over 500 years?’ [matters]. For me, [the mural] was a transformative project, because what was very clear is that history is not just about technical innovation. It’s about who has the power to set the rules that we will be living within.”
“This is why agentic AI is so important right now, because it’s a rapidly evolving field. The standards are not yet set, and it’s going to be people here, in rooms like this, at places like Mobile World Congress, who are going to have these conversations—what do we want those standards to look like, how do we implement them in our systems, and how do we protect ourselves and our clients?”
“Because this is the big moment to actually make sure that this is a technology that is profoundly useful and helpful and not one that opens up vulnerabilities and attack vectors and new attack surfaces and actually could be cognitively really quite dangerous as well.”
Mobile World Congress is a phenomenon. More than 100,000 delegates walk purposefully around eight cavernous halls, each packed with the technology of the future. Huge pavilions sponsored by Huawei and Google, Honor and Qualcomm, display remarkable new products linking our car to our phone, a robot to a disabled person, our glasses to the internet. Governments keen for influence and investment jostle for space with the companies that are hoping to win big in the artificial intelligence revolution.
MWC is also a place for debate. On large stages, the leading minds in the technology world have conversations that are often lost among the flashing neon lights and interactive plasma screens. “Move fast and break things,” Mark Zuckerberg said in 2012. Today, the stakes are too high.
We are in a live discussion about the very meaning of intelligence. Demis Hassabis, the co-founder of DeepMind, has said artificial general intelligence could be with us in as little as five years. In that world, who, or what, will make decisions? Is it a question of human in the loop? Or is it human in the lead? Or no human needed at all? Mo Gawdat, the former chief business officer at Google, has spoken of the risks of “short-term dystopia” as governments, civil society, and regulators struggle to control the effects of machines that can learn and decide.
“What do we mean by intelligence?” Crawford asked. “The history of the term ‘intelligence’ is a troubled one. It’s been used to divide populations, to drive programs about who is valuable and who is not.”
“We’re trying to compare agents to human intelligence. They’re actually completely different. This [intelligence] is statistical probability at scale. These are systems that are following tasks in complex environments. This is very different to humans, but that means we need to have a different set of questions, which is: what are agents doing? How can we track that, and how can we better understand the way it’s going to change our own workflows and, much more importantly, how we live?”
As the debate continues about the tensions between OpenAI, Anthropic, and the Department of War in America, Crawford asks: what are the red lines for agent use? “Imagine agents in the battlefield,” she says. We do not need to. AI-enabled bombing ‘at the speed of thought’ has been reported in Iran. One of AI’s functions is ‘decision compression’, shortening the time frame between idea and execution. The ‘kill chain’ is shrinking.
“You’ve got scale and you’ve got speed, you’re [carrying out the] assassination-style strikes at the same time as you’re decapitating the regime’s ability to respond with all the aerial ballistic missiles,” academic Craig Jones at Newcastle University told The Guardian newspaper in the U.K. “That might have taken days or weeks in historic wars. [Now] you’re doing everything at once.”
Crawford talks of accountability forensics—systems which trace where decisions are made. At the moment, we are suffering from accountability laundering, where no one takes responsibility. In the U.K. civil service—the operational arm of the government—it is known as ‘sloping shoulders syndrome’, where everyone dodges and weaves to avoid responsibility.
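Crawford does not describe an implementation, but one way to picture “accountability forensics” is an append-only, hash-chained log in which every agent decision records which party — designer, deployer, enterprise client, or end user — stood behind it, so the record cannot be silently rewritten after the fact. The sketch below is purely illustrative; the class and field names are assumptions of this article, not anything presented at MWC.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json


@dataclass
class DecisionRecord:
    """One entry in an audit trail for a single agent decision."""
    actor: str       # who is answerable: "designer", "deployer", "client", "end_user"
    action: str      # what the agent actually did
    rationale: str   # the justification logged at decision time
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class AuditTrail:
    """Hash-chained, append-only log: each entry's digest commits to the
    previous digest, so altering or reordering past records is detectable."""

    GENESIS = "0" * 64

    def __init__(self) -> None:
        self.entries: list[tuple[str, DecisionRecord]] = []
        self._last_hash = self.GENESIS

    def record(self, rec: DecisionRecord) -> str:
        # Serialize deterministically and chain in the previous digest.
        payload = json.dumps(asdict(rec), sort_keys=True) + self._last_hash
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append((digest, rec))
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        # Recompute the whole chain; any tampered entry breaks it.
        prev = self.GENESIS
        for digest, rec in self.entries:
            payload = json.dumps(asdict(rec), sort_keys=True) + prev
            if hashlib.sha256(payload.encode()).hexdigest() != digest:
                return False
            prev = digest
        return True
```

The chaining is the point: it answers Crawford’s question of “exactly who is responsible when” by making each link in the chain of custody tamper-evident, not by assigning blame itself.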
“We are seeing a type of shell game where [people say] ‘is it the designer [who is responsible]? Is it the deployer? Is it the enterprise client? Is it the end user?’ And everyone can say, ‘well, we don’t really know yet’. That’s not going to be acceptable,” said Crawford. “I think what we’re going to start to see in the conversation, particularly with regulators, is a very strong chain of accountability, so you know exactly who is responsible when.”
If half of what was talked about at MWC 2026 comes true, agents will soon be involved in every aspect of our lives. They will be able to read and cache every half-written text, every deleted image, every email left in draft, every video recorded on digitally enabled glasses, every recorded conversation. Crawford warned that this “upends privacy as we have known it”.
“We’re at the very beginning of understanding what that looks like,” she said. All the conversations will need to be of substance. And immediate.