
Elon Musk’s AI chatbot Grok has been accused of generating non-consensual sexualized images of real people, including children. Over the past week, X has been flooded with manipulated photos that remove people’s clothes, dress them in bikinis, or rearrange them into sexually suggestive positions.
The nonconsensual images have left some women feeling violated. Meanwhile, their creation using Grok and their presence on X may land Musk’s company in significant legal trouble in several countries around the world.
Ashley St. Clair, a conservative political commentator, social media influencer, and mother of one of Musk’s children (Musk has questioned his paternity), said that she became a victim of Grok’s “undressing” spree in recent days. Fortune has reviewed several examples of the images created on X, including fake images of St. Clair.
“When I saw [the images], I immediately replied and tagged Grok and said I don’t consent to this,” St. Clair told Fortune in an interview on Monday. “[Grok] noted that I don’t consent to these images being produced…and then it continued producing the images, and they only got more explicit.”
“There were pictures of me with nothing covering me except a piece of floss with my toddler’s backpack in the background and photos of me where it looks like I’m not wearing a top at all,” she said. “I felt so disgusted and violated. I also felt so angry that there were other women and children that this had been happening to.”
St. Clair told Fortune that after speaking out publicly about the situation she had been contacted by multiple other women who had had similar experiences, that she had reviewed inappropriate images of minors created by Grok, and was considering legal action over the images.
Representatives for X did not immediately respond to Fortune’s request for comment. In a post on X, Musk said: “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
X’s official “Safety” account said in a post Saturday that “We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” and included links to its policy and help pages.
Regulators launch investigations
AI-generated and AI-altered images, which have become widespread and easy to create thanks to new tools from companies including xAI, OpenAI, and Google, are raising concerns about misinformation, privacy violations, harassment, and other types of abuse.
While the U.S. does not currently have a federal law regulating AI (President Trump's recent executive order has also sought to curtail state and local AI laws), controversial use and misuse of the technology may pressure lawmakers to act. The situation is also likely to test existing laws, like Section 230 of the Communications Decency Act, which shields online providers from liability for content created by users.
Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, said the legal liability surrounding AI-generated images is still murky, but will likely be tested in court in the near future.
“There’s a difference between a digital platform and a tool set,” she told Fortune. “By and large, [platforms] have immunity for the actions of their users online. But we’re in this evolving area where we don’t have court decisions yet on whether the output of generative AI is just third party speech that the platform cannot be held liable for, or whether it is the platform’s own speech, in which case there is no immunity.”
“We have this situation where for the first time, it is the platform itself that is at scale generating non-consensual pornography of adults and minors alike,” Pfefferkorn said. “From a liability perspective as well as a PR perspective, the CSAM laws pose the biggest potential liability risk here.”
Regulators in other countries, meanwhile, have begun reacting to the recent spate of sexualized AI images. In the UK, Ofcom, the country’s independent regulator for the communications industries, said it had made “urgent contact” with xAI over concerns that Grok can create “undressed images of people and sexualised images of children.”
In a statement, the regulator said it would conduct “a swift assessment to determine whether there are potential compliance issues that warrant investigation” based on X and xAI’s response about steps taken to comply with their legal duties to protect UK users. Under the UK’s Online Safety Act, tech firms are supposed to prevent this type of content being shared and are required to remove it quickly.
Two French lawmakers have also filed reports regarding nonconsensual images, and the Paris prosecutor confirmed these incidents were added to an existing investigation into X.
India’s IT ministry has separately ordered X to curb Grok’s obscene and sexually explicit content, particularly involving women and minors, giving the company 72 hours to remove unlawful material, tighten safeguards, and report back or risk loss of safe-harbor protections and further legal action, according to media reports. Malaysia’s communications regulator has reportedly also launched an investigation into Grok-related deepfakes and warned X it could face enforcement measures if it fails to stop the misuse of AI tools on the platform to generate indecent or offensive images.
‘The message that sends is quite concerning’
Henry Ajder, a UK-based deepfakes expert, said that while Musk’s companies may not be directly creating the images, the X platform could still bear responsibility for the proliferation of inappropriate images of minors.
“If you are providing tools or the facilitation of child sexual abuse material (CSAM), there’s likely going to be legislation which isn’t tailored to that specific vehicle of harm that will still come into play,” he said. “In the UK, we’ve banned both the publication of non-consensual intimate imagery which is AI generated, and we’re now going after the creation tool sets. I think we’ll see other countries following suit.”
Part of the reason these images have been created and so widely shared is due to xAI’s recent merger and increasing integration with Musk’s X social media platform. xAI has trained its models using data scraped from X, where Grok now sits as a prominent feature.
“Grok is embedded into a platform which Musk wants to be this super app—your platform for AI, for socials, potentially for payments. If you have this as the anchor point, the operating system for your life, you can’t escape it,” Ajder said. “If these capabilities are known and not reined in even after this has been so clearly signposted, the message that sends is quite concerning.”
xAI is not the only company whose sexualized AI images have raised concerns. Last year, Meta removed dozens of sexualized images of celebrities created by AI tools and shared on its platform, and in October OpenAI CEO Sam Altman said the company would loosen restrictions on AI “erotica” for adults while stressing that it would restrict harmful content.
Ajder said xAI has embraced its reputation for pushing the boundaries on acceptable AI content. He said while other mainstream AI models require users to be “pretty creative, pretty devious” to generate risky content, Grok has embraced being “edgier.”
From its inception, Grok has been marketed as a “non-woke” alternative to mainstream AI chatbots, especially OpenAI’s ChatGPT. In July last year, xAI launched a “flirty” chatbot companion named Ani as part of its Grok chatbot’s new “Companions” feature, which was available to users as young as 12.
‘Women are being pushed out of the public dialog’
Women who found explicit images of themselves online generated by Grok say they have been left feeling violated and dehumanized.
Journalist Samantha Smith, who discovered users had created fake bikini images of her on X, told the BBC it left her feeling “dehumanized and reduced into a sexual stereotype.”
In a post on X last week, she wrote: “Any man who is using AI to strip a woman of her clothes would likely also assault a woman if he could get away with it. They do it because it’s not consensual. That’s the whole point. It’s sexual abuse that they can ‘get away with.’”
Charlie Smith, a UK-based journalist, also found nonconsensual photos of her in a bikini online.
“I wasn’t sure whether to post this, but someone asked Grok to post a pic of me in a bikini—and Grok replied with a pic,” she wrote in a post on X. “I’ll be honest—it’s upset me. It’s made me feel violated & sad. So, just a reminder that, what may seem like a bit of fun, can be hurtful. Be kind.”
St. Clair told Fortune that she considered X “the most dangerous company in the world right now” and accused the company of threatening women’s ability to exist safely online.
“What’s more concerning is that women are being pushed out of the public dialog because of this abuse,” she said. “When you are exiling women from the public dialog…because they can’t operate in it without being abused, you are disproportionately excluding women from AI.”