
OpenAI says the world needs to rethink everything from the tax system to the length of the workday in order to prepare for the wrenching changes of superintelligence technology—the point at which AI systems are capable of outperforming the smartest humans.
On Monday, in a 13-page paper titled “Industrial Policy for the Intelligence Age,” OpenAI said it wanted to “kick-start” the conversation with a “slate of people-first policy ideas.” How much faith to put in OpenAI’s words and motives, however, seems to be one of the key questions among many of the people reading the paper. The paper was released on the same day that The New Yorker published the results of a lengthy one-and-a-half-year investigation into OpenAI that raised questions about CEO Sam Altman’s trustworthiness on various issues, including AI safety.
Written by the OpenAI global affairs team, the paper outlines many of the expected economic impacts of superintelligence and floats various approaches for addressing them. “We offer them not as a comprehensive or final set of recommendations, but as a starting point for discussion that we invite others to build on, refine, challenge, or choose among through the democratic process,” said the introductory blog post.
The self-described “slate of ideas” in the document—spanning everything from public wealth funds to shorter workweeks—may not do much to reassure a public increasingly nervous about and disenchanted with the pace and consequences of AI-driven change. And OpenAI, of course, is one of the least neutral parties in this ongoing discussion, which is the core tension of the document, said Lucia Velasco, a senior economist and AI policy leader at D.C.-based Inter-American Development Bank and former head of AI policy at the United Nations Office for Digital and Emerging Technologies.
“OpenAI is the most interested party in how this conversation turns out, and the proposals it advances shape an environment in which OpenAI operates with significant freedom under constraints it has largely helped define,” she said, adding that this wasn’t a reason to dismiss the document, but “it is a reason to ensure that the conversation it is trying to start does not end with the same company that started it.”
Still, she emphasized that OpenAI is correct in saying that governments are behind in advancing policy solutions. "Most are still treating AI as a technology problem when it's actually a structural economic shift that needs proper industrial policy," she said. "That's a useful contribution, and the document deserves to be taken seriously as an agenda-setting exercise, even if it's a starting point."
Soribel Feliz, an independent AI policy advisor who previously served as a senior AI and tech policy advisor for the U.S. Senate, agreed that OpenAI deserves credit for “putting this on paper.” The acknowledgment that both U.S. institutions and safety nets are falling behind AI development and deployment is correct, she said, “and the conversation needs to happen at this level at this moment.”
However, she emphasized that most of what is being proposed is not new: “Some of these pillars—‘share prosperity broadly, mitigate risks, democratize access’—have been the framework for every major AI governance conversation since ChatGPT came out in November 2022.
"I worked in the U.S. Senate in 2023–24, and we had nine AI policy forum sessions where all of this was said. I have it in my handwritten notes! All of this was already said, all of it," she wrote to Fortune in a direct message. "The language around public-private partnerships, AI literacy, and worker voice reads like it came out of a UNESCO or OECD AI policy framework report. The ideas are not wrong. The problem is the gap between naming the solutions and building real mechanisms to achieve them."
Clearly, the paper's target audience is not ChatGPT's hundreds of millions of weekly users. Instead, it is the Beltway policymakers who have been pushing for AI regulation (or kicking the can down the road) in various forms ever since ChatGPT was released in November 2022. In that sense, some said it represents an improvement over earlier efforts.
“I found this document to genuinely be a real improvement from previous documents that were even more floaty and high-level,” said Nathan Calvin, vice president of state affairs and general counsel of Encode AI. “I think some of the concrete suggestions around things like auditing or incident reporting and government restrictions on certain uses of AI are good ideas.”
But he also pointed to lobbying efforts led by OpenAI executives with the Leading the Future PAC, which lobbies for AI-industry-friendly policies. Global affairs head Chris Lehane is considered a force behind these efforts, while President Greg Brockman has been the biggest donor.
“I hope this document signals a move toward more constructive engagement, instead of attacking politicians pushing the very policies OpenAI is now endorsing,” said Calvin, pointing specifically to Leading the Future’s lobbying against New York congressional candidate Alex Bores, author and primary sponsor of the RAISE Act, the New York AI safety and transparency law recently signed by Gov. Kathy Hochul.
Calvin has also accused OpenAI of using intimidation tactics to undermine California’s SB 53, the California Transparency in Frontier Artificial Intelligence Act, while it was still being debated. He alleged as well that OpenAI used its ongoing legal battle with Elon Musk as a pretext to target and intimidate critics, including Encode, which the company implied was secretly funded by Musk.
Still, while OpenAI CEO Sam Altman compared Monday’s slate of policy ideas to the New Deal in an interview with Axios, some say it reads less like FDR-era legislation and more like a Silicon Valley thought experiment that won’t magically turn into action.
For example, Anton Leicht, a visiting scholar with the Carnegie Endowment’s technology and international affairs team, wrote on X that in reality, the ideas are fundamental societal changes and heavy political lifts. “They’re not just going to emerge as an organic alternative,” he wrote. “On that read, this is comms work to provide cover for regulatory nihilism.”
A better version of this, he said, would be to redirect the AI industry’s political funding and lobbying skills to make progress on this kind of policy agenda. However, he said that the “vague nature and timing” of the document “doesn’t make me too optimistic.”