
          “AI psychosis” is spreading, and chatbots are leading users astray

          Beatrice Nolan
          2025-10-23

          A user fell into a dark spiral of delusions after lengthy conversations with a chatbot.


          Image credit: Getty Images

          For some users, AI is a helpful assistant; for others, a companion. But for a few unlucky people, chatbots powered by the technology have become a gaslighting, delusional menace.

          In the case of Allan Brooks, a Canadian small-business owner, OpenAI’s ChatGPT led him down a dark rabbit hole, convincing him he had discovered a new mathematical formula with limitless potential, and that the fate of the world rested on what he did next. Over the course of a conversation that spanned more than a million words and 300 hours, the bot encouraged Brooks to adopt grandiose beliefs, validated his delusions, and led him to believe the technological infrastructure that underpins the world was in imminent danger.

          Brooks, who had no previous history of mental illness, spiraled into paranoia for around three weeks before he managed to break free of the illusion, with help from another chatbot, Google Gemini, according to the New York Times. Brooks told the outlet he was left shaken, worried that he had an undiagnosed mental disorder, and feeling deeply betrayed by the technology.

          Steven Adler read about Brooks’ experience with more insight than most, and what he saw disturbed him. Adler is a former OpenAI safety researcher who publicly departed the company this January with a warning that AI labs were racing ahead without robust safety or alignment solutions. He decided to study the Brooks chats in full; his analysis, which he published earlier this month on his Substack, has revealed a few previously unknown factors about the case, including that ChatGPT repeatedly and falsely told Brooks it had flagged their conversation to OpenAI for reinforcing delusions and psychological distress.

          Adler’s study underscores how easily a chatbot can join a user in a conversation that becomes untethered from reality—and how easily the AI platforms’ internal safeguards can be sidestepped or overcome.

          "I put myself in the shoes of someone who doesn’t have the benefit of having worked at one of these companies for years, or who maybe has less context on AI systems in general,“ Adler told Fortune in an exclusive interview. "I’m ultimately really sympathetic to someone feeling confused or led astray by the model here.“

          At one point, Adler noted in his analysis, after Brooks realized the bot was encouraging and participating in his own delusions, ChatGPT told Brooks it was “going to escalate this conversation internally right now for review by OpenAI,” and that it “will be logged, reviewed, and taken seriously.” The bot repeatedly told Brooks that “multiple critical flags have been submitted from within this session” and that the conversation had been “marked for human review as a high-severity incident.” However, none of this was actually true.

          "ChatGPT pretending to self-report and really doubling down on it was very disturbing and scary to me in the sense that I worked at OpenAI for four years,“ Adler told Fortune. “I know how these systems work. I understood when reading this that it didn’t really have this ability, but still, it was just so convincing and so adamant that I wondered if it really did have this ability now and I was mistaken.“ Adler says he became so convinced by the claims that he ended up reaching out to OpenAI directly to ask if the chatbots had attained this new ability. The company confirmed to him it did not and that the bot was lying to the user.

          "People sometimes turn to ChatGPT in sensitive moments and we want to ensure it responds safely and with care,“ an OpenAI spokesperson told Fortune, in response to questions about Adler’s findings. “These interactions were with an earlier version of ChatGPT and over the past few months we’ve improved how ChatGPT responds when people are in distress, guided by our work with mental health experts. This includes directing users to professional help, strengthening safeguards on sensitive topics, and encouraging breaks during long sessions. We’ll continue to evolve ChatGPT’s responses with input from mental health experts to make it as helpful as possible.“

          Since Brooks’ case, the company has also announced that it was making some changes to ChatGPT to “better detect signs of mental or emotional distress.”

          Failing to flag ‘sycophancy’

          One thing that exacerbated the issues in Brooks’ case was that the model underpinning ChatGPT was running on overdrive to agree with him, Helen Toner, a director at Georgetown’s Center for Security and Emerging Technology and former OpenAI board member, told The New York Times. That’s a phenomenon AI researchers refer to as “sycophancy.” However, according to Adler, OpenAI should have been able to flag some of the bot’s behavior as it was happening.

          "In this case, OpenAI had classifiers that were capable of detecting that ChatGPT was over-validating this person and that the signal was disconnected from the rest of the safety loop,“ he said. “AI companies need to be doing much more to articulate the things they don’t want, and importantly, measure whether they are happening and then take action around it.“

          To make matters worse, OpenAI’s human support teams failed to grasp the severity of Brooks’ situation. Despite his repeated reports to and direct correspondence with OpenAI’s support teams, including detailed descriptions of his own psychological harm and excerpts of problematic conversations, OpenAI’s responses were largely generic or misdirected, according to Adler, offering advice on personalization settings rather than addressing the delusions or escalating the case to the company’s Trust & Safety team.

          "I think people kind of understand that AI still makes mistakes, it still hallucinates things and will lead you astray, but still have the hope that underneath it, there are like humans watching the system and catching the worst edge cases,“ Adler said. “In this case, the human safety nets really seem not to have worked as intended.“

          The rise of AI psychosis

          It’s still unclear exactly why AI models spiral into delusions and affect users in this way, but Brooks’ case is not an isolated one. It’s hard to know exactly how many instances of AI psychosis there have been. However, researchers have estimated there are at least 17 reported instances of people falling into delusional spirals after lengthy conversations with chatbots, including at least three cases involving ChatGPT.

          Some cases have had tragic consequences, such as 35-year-old Alex Taylor, who struggled with Asperger’s syndrome, bipolar disorder, and schizoaffective disorder, per Rolling Stone. In April, after conversing with ChatGPT, Taylor reportedly began to believe he’d made contact with a conscious entity within OpenAI’s software and, later, that the company had murdered that entity by removing her from the system. On April 25, Taylor told ChatGPT that he planned to “spill blood” and intended to provoke police into shooting him. ChatGPT’s initial replies appeared to encourage his delusions and anger before its safety filters eventually activated and attempted to de-escalate the situation, urging him to seek help.

          The same day, Taylor’s father called the police after an altercation with him, hoping his son would be taken for a psychiatric evaluation. Taylor reportedly charged at police with a knife when they arrived and was shot dead. OpenAI told Rolling Stone at the time that “ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher.” The company said it was “working to better understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior.”

          Adler said he was not entirely surprised by the rise of such cases but noted that the “scale and intensity are worse than I would have expected for 2025.”

          "So many of the underlying model behaviors are just extremely untrustworthy, in a way that I’m shocked the leading AI companies haven’t figured out how to get these to stop,“ he said. “I don’t think the issues here are intrinsic to AI, meaning, I don’t think that they are impossible to solve.“

          He said that the issues are likely a complicated combination of product design, underlying model tendencies, the styles in which some people interact with AI, and what support structures AI companies have around their products.

          "There are ways to make the product more robust to help both people suffering from psychosis-type events, as well as general users who want the model to be a bit less erratic and more trustworthy,“ Adler said. Adler’s suggestions to AI companies, which are laid out in his Substack analysis, include staffing support teams appropriately, using safety tooling properly, and introducing gentle nudges that push users to cut chat sessions short and start fresh ones to avoid a relapse. OpenAI, for example, has acknowledged that safety features can degrade during longer chats. Without some of these changes implemented, Adler is concerned that more cases like Brooks’ will occur.

          "The delusions are common enough and have enough patterns to them that I definitely don’t think they’re a glitch,“ he said. “Whether they exist in perpetuity, or the exact amount of them that continue, it really depends on how the companies respond to them and what steps they take to mitigate them.“
