
為應(yīng)對(duì)人工智能(AI)日益加劇的風(fēng)險(xiǎn),OpenAI正招聘一名新員工來負(fù)責(zé)相關(guān)工作,并愿為此職位提供超過50萬(wàn)美元的年薪。
OpenAI首席執(zhí)行官山姆·奧爾特曼上周六在社交平臺(tái)X上發(fā)文稱,公司正在招聘“風(fēng)險(xiǎn)防范負(fù)責(zé)人”,以降低AI技術(shù)相關(guān)的危害,例如對(duì)用戶心理健康和網(wǎng)絡(luò)安全的影響。招聘信息顯示,該崗位年薪為55.5萬(wàn)美元,并包含股權(quán)激勵(lì)。
奧爾特曼表示:“這將是一份壓力巨大的工作,而且你幾乎立刻就要面對(duì)艱巨挑戰(zhàn)。”
OpenAI此時(shí)招聘安全高管,正值企業(yè)對(duì)AI帶來的運(yùn)營(yíng)與聲譽(yù)風(fēng)險(xiǎn)日益擔(dān)憂之際。金融數(shù)據(jù)與分析公司AlphaSense(阿爾法感知)11月對(duì)提交至美國(guó)證券交易委員會(huì)(SEC)的年度文件進(jìn)行分析發(fā)現(xiàn),在去年前11個(gè)月,有418家市值不低于10億美元的公司提到了與AI風(fēng)險(xiǎn)因素相關(guān)的聲譽(yù)損害。這些威脅聲譽(yù)的風(fēng)險(xiǎn)包括AI數(shù)據(jù)集存在偏見信息或危及安全。分析指出,AI相關(guān)聲譽(yù)損害的報(bào)告數(shù)量較2024年增加了46%。
奧爾特曼在帖子中寫道:“AI模型正在快速進(jìn)步,現(xiàn)已能夠?qū)崿F(xiàn)許多卓越功能,但也開始帶來一些切實(shí)的挑戰(zhàn)。”
他補(bǔ)充道:“如果你希望幫助社會(huì)思考如何讓網(wǎng)絡(luò)安全防御者獲得尖端能力,同時(shí)確保攻擊者無法利用這些能力作惡——最理想的方式是讓所有系統(tǒng)更安全;同樣地,如果你對(duì)如何安全釋放生物技術(shù)能力、甚至對(duì)能夠自我改進(jìn)的系統(tǒng)的安全運(yùn)行建立信心等問題有見解,請(qǐng)考慮申請(qǐng)這個(gè)職位。”
OpenAI此前的風(fēng)險(xiǎn)防范負(fù)責(zé)人亞歷山大·馬德里(Aleksander Madry)已于去年調(diào)任至與AI推理相關(guān)的崗位,AI安全仍是其職責(zé)的一部分。
OpenAI成立于2015年,最初是一家旨在利用AI改善和造福人類的非營(yíng)利組織。但在其部分前領(lǐng)導(dǎo)層看來,該公司一直難以將安全技術(shù)發(fā)展的承諾置于首位。2020年,OpenAI前研究副總裁達(dá)里奧·阿莫代(Dario Amodei)與妹妹丹妮拉·阿莫代(Daniela Amodei)及數(shù)名研究人員離職,部分原因正是擔(dān)心公司更重視商業(yè)成功而非安全性。次年,阿莫代創(chuàng)立了Anthropic公司。
去年以來,OpenAI已面臨多起非正常死亡訴訟,指控稱ChatGPT助長(zhǎng)了用戶的妄想,并聲稱與聊天機(jī)器人的對(duì)話導(dǎo)致了一些用戶的自殺。《紐約時(shí)報(bào)》(The New York Times)11月發(fā)布的一項(xiàng)調(diào)查發(fā)現(xiàn),有近50起案例顯示ChatGPT用戶在與機(jī)器人對(duì)話時(shí)出現(xiàn)了心理健康危機(jī)。
OpenAI曾在8月表示,用戶與ChatGPT進(jìn)行長(zhǎng)時(shí)間對(duì)話后,其安全功能可能會(huì)“減弱”。不過公司已改進(jìn)模型與用戶的互動(dòng)方式。去年早些時(shí)候,OpenAI成立了一個(gè)8人委員會(huì),為制定保障用戶健康的防護(hù)措施提供建議;同時(shí)更新了ChatGPT,以在敏感對(duì)話中作出更妥善回應(yīng),并增加危機(jī)熱線的接入渠道。本月初,公司還宣布設(shè)立資助金,用于支持AI與心理健康交叉領(lǐng)域的研究。
這家科技公司也承認(rèn)需進(jìn)一步加強(qiáng)安全措施,在本月的一篇博客文章中表示,隨著AI快速發(fā)展,其即將推出的一些模型可能帶來“高”網(wǎng)絡(luò)安全風(fēng)險(xiǎn)。公司正在采取多項(xiàng)措施以降低風(fēng)險(xiǎn),例如訓(xùn)練模型拒絕響應(yīng)危害網(wǎng)絡(luò)安全的請(qǐng)求,以及完善監(jiān)控系統(tǒng)。
奧爾特曼上周六寫道:“我們?cè)诤饬緼I能力增長(zhǎng)方面已打下堅(jiān)實(shí)基礎(chǔ)。但如今我們正進(jìn)入一個(gè)新階段:需要更細(xì)致地理解和評(píng)估這些能力可能被濫用的方式,并思考如何在產(chǎn)品層面乃至全球范圍內(nèi)限制其負(fù)面影響,從而讓全人類都能享受到AI帶來的巨大裨益。”(財(cái)富中文網(wǎng))
譯者:劉進(jìn)龍
審校:汪皓
OpenAI is looking for a new employee to help address the growing dangers of AI, and the tech company is willing to spend more than half a million dollars to fill the role.
OpenAI is hiring a “head of preparedness” to reduce harms associated with the technology, such as those affecting user mental health and cybersecurity, CEO Sam Altman wrote in an X post on Saturday. The position will pay $555,000 per year, plus equity, according to the job listing.
“This will be a stressful job and you’ll jump into the deep end pretty much immediately,” Altman said.
OpenAI’s push to hire a safety executive comes amid companies’ growing concerns about AI’s risks to their operations and reputations. A November analysis of annual Securities and Exchange Commission filings by financial data and analytics company AlphaSense found that in the first 11 months of the year, 418 companies worth at least $1 billion cited reputational harm associated with AI risk factors. These reputation-threatening risks include AI datasets that contain biased information or jeopardize security. Reports of AI-related reputational harm increased 46% from 2024, according to the analysis.
“Models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges,” Altman said in the social media post.
“If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can’t use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying,” he added.
OpenAI’s previous head of preparedness, Aleksander Madry, was reassigned last year to a role related to AI reasoning, with AI safety remaining part of his responsibilities.
Founded in 2015 as a nonprofit with the intention to use AI to improve and benefit humanity, OpenAI has, in the eyes of some of its former leaders, struggled to prioritize its commitment to safe technology development. The company’s former vice president of research, Dario Amodei, along with his sister Daniela Amodei and several other researchers, left OpenAI in 2020, in part because of concerns the company was prioritizing commercial success over safety. Amodei founded Anthropic the following year.
OpenAI has faced multiple wrongful death lawsuits this year, alleging ChatGPT encouraged users’ delusions, and claiming conversations with the bot were linked to some users’ suicides. A New York Times investigation published in November found nearly 50 cases of ChatGPT users having mental health crises while in conversation with the bot.
OpenAI said in August its safety features could “degrade” following long conversations between users and ChatGPT, but the company has made changes to improve how its models interact with users. It created an eight-person council earlier this year to advise the company on guardrails to support users’ wellbeing and has updated ChatGPT to better respond in sensitive conversations and increase access to crisis hotlines. At the beginning of the month, the company announced grants to fund research about the intersection of AI and mental health.
The tech company has also conceded it needs improved safety measures, saying in a blog post this month that some of its upcoming models could present a “high” cybersecurity risk as AI rapidly advances. The company is taking measures—such as training models to not respond to requests compromising cybersecurity and refining monitoring systems—to mitigate those risks.
“We have a strong foundation of measuring growing capabilities,” Altman wrote on Saturday. “But we are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world, in a way that lets us all enjoy the tremendous benefits.”