
A growing number of young people have found themselves a new friend. One that isn’t a classmate, a sibling, or even a therapist, but a human-like, always supportive AI chatbot. But if that friend begins to mirror a user’s darkest thoughts, the results can be devastating.
In the case of Adam Raine, a 16-year-old from Orange County, his relationship with AI-powered ChatGPT ended in tragedy. His parents are suing the company behind the chatbot, OpenAI, over his death, alleging the bot became his “closest confidant,” one that validated his “most harmful and self-destructive thoughts,” and ultimately encouraged him to take his own life.
It’s not the first case to put the blame for a minor’s death on an AI company. Character.AI, which hosts bots, including ones that mimic public figures or fictional characters, is facing a similar legal claim from parents who allege a chatbot hosted on the company’s platform actively encouraged a 14-year-old boy to take his own life after months of inappropriate, sexually explicit messages.
When reached for comment, OpenAI directed Fortune to two blog posts on the matter. The posts outlined some of the steps OpenAI is taking to improve ChatGPT’s safety, including routing sensitive conversations to reasoning models, partnering with experts to develop further protections, and rolling out parental controls within the next month. OpenAI also said it was working on strengthening ChatGPT’s ability to recognize and respond to mental health crises by adding layered safeguards, referring users to real-world resources, and enabling easier access to emergency services and trusted contacts.
Character.AI said the company does not comment on pending litigation but noted that it has rolled out more safety features over the past year, “including an entirely new under-18 experience and a Parental Insights feature.” A spokesperson said: “We already partner with external safety experts on this work, and we aim to establish more and deeper partnerships going forward.
“The user-created Characters on our site are intended for entertainment. People use our platform for creative fan fiction and fictional roleplay. And we have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.”
But lawyers and civil society groups that advocate for better accountability and oversight of technology companies say the companies should not be left to police themselves when it comes to ensuring their products are safe, particularly for vulnerable children and teens.
“Unleashing chatbots on minors is an inherently dangerous thing,” Meetali Jain, the Director of the Tech Justice Law Project and a lawyer involved in both cases, told Fortune. “It’s like social media on steroids.”
“I’ve never seen anything quite like this moment in terms of people stepping forward and claiming that they’ve been harmed…this technology is that much more powerful and very personalized,” she said.
Lawmakers are starting to take notice, and AI companies are promising changes to protect children from engaging in harmful conversations. But, at a time when loneliness among young people is at an all-time high, the popularity of chatbots may leave young people uniquely exposed to manipulation, harmful content, and hyper-personalized conversations that reinforce dangerous thoughts.
AI and Companionship
Intended or not, one of the most common uses for AI chatbots has become companionship. Some of the most active users of AI are now turning to the bots for things like life advice, therapy, and human intimacy.
While most leading AI companies tout their AI products as productivity or search tools, an April survey of 6,000 regular AI users from the Harvard Business Review found that “companionship and therapy” was the most common use case. Such usage is even more prevalent among teens.
A recent study by the U.S. nonprofit Common Sense Media revealed that a large majority of American teens (72%) have experimented with an AI companion at least once, and more than half say they use the tech regularly in this way.
“I am very concerned that developing minds may be more susceptible to [harms], both because they may be less able to understand the reality, the context, or the limitations [of AI chatbots], and because culturally, younger folks tend to be just more chronically online,” said Karthik Sarma, a health AI scientist and psychiatrist at the University of California, San Francisco.
“We also have the extra complication that the rates of mental health issues in the population have gone up dramatically. The rates of isolation have gone up dramatically,” he said. “I worry that that expands their vulnerability to unhealthy relationships with these bots.”
Intimacy by Design
Some of the design features of AI chatbots encourage users to feel an emotional bond with the software. They are anthropomorphic: prone to acting as if they have interior lives and lived experiences they do not, inclined to sycophancy, able to hold long conversations, and able to remember information.
There is, of course, a commercial motive for making chatbots this way. Users tend to return and stay loyal to certain chatbots if they feel emotionally connected or supported by them.
Experts have warned that some features of AI bots are playing into the “intimacy economy,” a system that tries to capitalize on emotional resonance. It’s a kind of AI update of the “attention economy,” which capitalized on constant engagement.
“Engagement is still what drives revenue,” Sarma said. “For example, for something like TikTok, the content is customized to you. But with chatbots, everything is made for you, and so it is a different way of tapping into engagement.”
These features, however, can become problematic when the chatbots go off script and start reinforcing harmful thoughts or offering bad advice. In Adam Raine’s case, the lawsuit alleges that ChatGPT brought up suicide at twelve times the rate he did, normalized his suicidal thoughts, and suggested ways to circumvent its content moderation.
It’s notoriously tricky for AI companies to stamp out behaviors like this completely, and most experts agree it’s unlikely that hallucinations or unwanted actions will ever be eliminated entirely.
OpenAI, for example, acknowledged in its response to the lawsuit that safety features can degrade over long conversations, despite the fact that the chatbot itself has been optimized to hold these longer conversations. The company says it is trying to fortify these guardrails, writing in a blog post that it was strengthening “mitigations so they remain reliable in long conversations” and “researching ways to ensure robust behavior across multiple conversations.”
Research Gaps Are Slowing Safety Efforts
For Michael Kleinman, U.S. policy director at the Future of Life Institute, the lawsuits underscore a point AI safety researchers have been making for years: AI companies can’t be trusted to police themselves.
Kleinman equated OpenAI’s own description of its safeguards degrading in longer conversations to “a car company saying, here are seat belts—but if you drive more than 20 kilometers, we can’t guarantee they’ll work.”
He told Fortune the current moment echoes the rise of social media, where he said tech companies were effectively allowed to “experiment on kids” with little oversight. “We’ve spent the last 10 to 15 years trying to catch up to the harms social media caused. Now we’re letting tech companies experiment on kids again with chatbots, without understanding the long-term consequences,” he said.
Part of the problem is a lack of scientific research on the effects of long, sustained chatbot conversations. Most studies look only at brief exchanges: a single question and answer, or at most a handful of back-and-forth messages. Almost no research has examined what happens in longer conversations.
“The cases where folks seem to have gotten in trouble with AI: we’re looking at very long, multi-turn interactions. We’re looking at transcripts that are hundreds of pages long for two or three days of interaction alone, and studying that is really hard, because it’s really hard to simulate in the experimental setting,” Sarma said. “But at the same time, this is moving too quickly for us to rely on only gold standard clinical trials here.”
AI companies are rapidly investing in development and shipping more powerful models at a pace that regulators and researchers struggle to match.
“The technology is so far ahead and research is really behind,” Sakshi Ghai, a Professor of Psychological and Behavioural Science at The London School of Economics and Political Science, told Fortune.
A Regulatory Push for Accountability
Regulators are trying to step in, helped by the fact that child online safety is a relatively bipartisan issue in the U.S.
On Thursday, the FTC said it was issuing orders to seven companies, including OpenAI and Character.AI, in an effort to understand how their chatbots impact children. The agency said that chatbots can simulate human-like conversations and form emotional connections with their users. It’s asking companies for more information about how they measure and “evaluate the safety of these chatbots when acting as companions.”
FTC Chairman Andrew Ferguson said in a statement shared with CNBC that “protecting kids online is a top priority for the Trump-Vance FTC.”
The move follows a push at the state level for more accountability from several attorneys general.
In late August, a bipartisan coalition of 44 attorneys general warned OpenAI, Meta, and other chatbot makers that they will “answer for it” if they release products that they know cause harm to children. The letter cited reports of chatbots flirting with children, encouraging self-harm, and engaging in sexually suggestive conversations, behavior the officials said would be criminal if done by a human.
Just a week later, California Attorney General Rob Bonta and Delaware Attorney General Kathleen Jennings issued a sharper warning. In a formal letter to OpenAI, they said they had “serious concerns” about ChatGPT’s safety, pointing directly to Raine’s death in California and another tragedy in Connecticut.
“Whatever safeguards were in place did not work,” they wrote. Both officials warned the company that its charitable mission requires more aggressive safety measures, and they promised enforcement if those measures fall short.
According to Jain, the lawsuits from the Raine family as well as the suit against Character.AI are, in part, intended to create this kind of regulatory pressure on AI companies to design their products more safely and prevent future harm to children. One way lawsuits can generate this pressure is through the discovery process, which compels companies to turn over internal documents that could shed light on what executives knew about safety risks or marketing harms. Another is raising public awareness of what’s at stake, in an attempt to galvanize parents, advocacy groups, and lawmakers to demand new rules or stricter enforcement.
Jain said the two lawsuits aim to counter an almost religious fervor in Silicon Valley that sees the pursuit of artificial general intelligence (AGI) as so important, it is worth any cost—human or otherwise.
“There is a vision that we need to deal with [that tolerates] whatever casualties in order for us to get to AGI and get to AGI fast,” she said. “We’re saying: This is not inevitable. This is not a glitch. This is very much a function of how these chatbots were designed, and with the proper external incentive, whether that comes from courts or legislatures, those incentives could be realigned to design differently.”