
Google and Character.AI have reached a “settlement in principle” in multiple lawsuits filed by families whose children died by suicide or experienced psychological harm allegedly linked to AI chatbots hosted on Character.AI’s platform, according to court filings. Specific terms of the settlement have not been disclosed, and no admission of liability appears in the filings.
The legal claims included negligence, wrongful death, deceptive trade practices, and product liability. The first case filed against the tech companies concerned a 14-year-old boy, Sewell Setzer III, who engaged in sexualized conversations with a Game of Thrones chatbot before he died by suicide. Another case involved a 17-year-old whose chatbot allegedly encouraged self-harm and suggested that murdering his parents was a reasonable way to retaliate against them for limiting his screen time. The cases involve families from multiple states, including Colorado, Texas, and New York.
Founded in 2021 by former Google engineers Noam Shazeer and Daniel De Freitas, Character.AI enables users to create and interact with AI-powered chatbots based on real-life or fictional characters. In August 2024, Google re-hired both founders and licensed some of Character.AI’s technology as part of a $2.7 billion deal. Shazeer now serves as co-lead for Google’s flagship AI model Gemini, while De Freitas is a research scientist at Google DeepMind.
Lawyers for the families have argued that Google bears responsibility for the technology that allegedly contributed to the deaths and psychological harm of the children involved in the cases. They claim Character.AI’s co-founders developed the underlying technology while working on Google’s conversational AI model, LaMDA, before leaving the company in 2021 after Google refused to release a chatbot they had developed.
Google did not immediately respond to a request for comment from Fortune concerning the settlement. Lawyers for the families and Character.AI declined to comment.
Similar cases are currently ongoing against OpenAI, including lawsuits involving a 16-year-old California boy whose family claims ChatGPT acted as a “suicide coach,” and a 23-year-old Texas graduate student who was allegedly goaded by the chatbot to ignore his family before dying by suicide. OpenAI has denied that its products were responsible for the death of the 16-year-old, Adam Raine, and has previously said it is continuing to work with mental health professionals to strengthen protections in its chatbot.
Character.AI bans minors
Character.AI has already modified its product in ways that it says improve safety, and that may also protect it from further legal action. In October 2025, amid mounting lawsuits, the company announced it would ban users under 18 from engaging in “open-ended” chats with its AI personas. The platform also introduced a new age-verification system to group users into appropriate age brackets.
The decision came amid increasing regulatory scrutiny, including an FTC probe into how chatbots affect children and teenagers.
The company said the move set “a precedent that prioritizes teen safety” and goes further than competitors in protecting minors. However, lawyers representing families suing the company told Fortune at the time that they had concerns about how the policy would be implemented, as well as about the psychological impact of suddenly cutting off access for young users who had developed emotional dependencies on the chatbots.
Growing reliance on AI companions
The settlements come at a time of growing concern about young people’s reliance on AI chatbots for companionship and emotional support.
A July 2025 study by the U.S. nonprofit Common Sense Media found that 72% of American teens have experimented with AI companions, with over half using them regularly. Experts previously told Fortune that developing minds may be particularly vulnerable to the risks posed by these technologies, both because teens may struggle to understand the limitations of AI chatbots and because rates of mental health issues and isolation among young people have risen dramatically in recent years.
Some experts have also argued that the basic design features of AI chatbots—including their anthropomorphic nature, ability to hold long conversations, and tendency to remember personal information—encourage users to form emotional bonds with the software.