
Another Chinese AI company aims to join the ranks of the world's top models

The company says it spent just $534,700 renting data center computing resources to train its large model.


Image credit: AI-generated photo illustration created using OpenAI's ChatGPT

Chinese AI company MiniMax has launched a new AI model, M1, claiming it is competitive with top models from labs such as OpenAI, Anthropic, and Google DeepMind, while costing a fraction as much to train and less to run.



          It’s becoming a familiar pattern: Every few months, an AI lab in China that most people in the U.S. have never heard of releases an AI model that upends conventional wisdom about the cost of training and running cutting-edge AI.

          In January, it was DeepSeek’s R1 that took the world by storm. Then in March, it was a startup called Butterfly Effect—technically based in Singapore but with most of its team in China—and its “agentic AI” model, Manus, that briefly captured the spotlight. This week, it’s a Shanghai-based upstart called MiniMax, best known previously for releasing AI-generated video games, that is the talk of the AI industry thanks to the M1 model it debuted on June 16.

          According to data published by MiniMax, its M1 is competitive with top models from OpenAI, Anthropic, and DeepSeek when it comes to both intelligence and creativity, but is dirt cheap to train and run.

The company says it spent just $534,700 renting the data center computing resources needed to train M1. That is roughly one two-hundredth of estimates of the training cost of OpenAI's GPT-4o, which, industry experts say, likely exceeded $100 million (OpenAI has not released its training cost figures).
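The "nearly 200-fold" comparison follows directly from the two figures quoted above; a quick sanity check of the arithmetic:

```python
# Training-cost comparison using the figures quoted in the article.
minimax_m1_cost = 534_700          # MiniMax's stated M1 training spend (USD)
gpt4o_cost_estimate = 100_000_000  # industry experts' estimated GPT-4o training cost (USD)

ratio = gpt4o_cost_estimate / minimax_m1_cost
print(f"GPT-4o estimate is ~{ratio:.0f}x the M1 figure")  # ~187x, i.e. "nearly 200-fold"
```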

          If accurate—and MiniMax’s claims have yet to be independently verified—this figure will likely cause some agita among blue-chip investors who’ve sunk hundreds of billions into private LLM makers like OpenAI and Anthropic, as well as Microsoft and Google shareholders. This is because the AI business is deeply unprofitable; industry leader OpenAI is likely on track to lose $14 billion in 2026 and is unlikely to break even until 2028, according to an October report from tech publication The Information, which based its analysis on OpenAI financial documents that had been shared with investors.

          If customers can get the same performance as OpenAI’s models by using MiniMax’s open-source AI models, it will likely dent demand for OpenAI’s products. OpenAI has already been aggressively lowering the pricing of its most capable models to retain market share. It recently slashed the cost of using its o3 reasoning model by 80%. And that was before MiniMax’s M1 release.

          MiniMax’s reported results also mean that businesses may not need to spend as much on computing costs to run these models, potentially denting profits for cloud providers such as Amazon’s AWS, Microsoft’s Azure, and Google’s Google Cloud Platform. And it may mean less demand for Nvidia’s chips, which are the workhorses of AI data centers.

The impact of MiniMax’s M1 may ultimately be similar to what happened when Hangzhou-based DeepSeek released its R1 LLM earlier this year. DeepSeek claimed that R1 performed on par with ChatGPT at a fraction of the training cost. DeepSeek’s statement sank Nvidia’s stock by 17% in a single day, erasing about $600 billion in market value. So far, that hasn’t happened with the MiniMax news. Nvidia’s shares have fallen less than 0.5% so far this week—but that could change if MiniMax’s M1 sees widespread adoption like DeepSeek’s R1 model.

          MiniMax’s claims about M1 have not yet been verified

          The difference may be that independent developers have yet to confirm MiniMax’s claims about M1. In the case of DeepSeek’s R1, developers quickly determined that the model’s performance was indeed as good as the company said. With Butterfly Effect’s Manus, however, the initial buzz faded fast after developers testing Manus found that the model seemed error-prone and couldn’t match what the company had demonstrated. The coming days will prove critical in determining whether developers embrace M1 or respond more tepidly.

          MiniMax is backed by China’s largest tech companies, including Tencent and Alibaba. It is unclear how many people work at the company, and there is little public information about its CEO, Yan Junjie. Aside from MiniMax Chat, the company also offers graphic generator Hailuo AI and avatar app Talkie. Through these products, MiniMax claims tens of millions of users across 200 countries and regions as well as 50,000 enterprise clients, a number of whom were drawn to Hailuo for its ability to generate video games on the fly.

          But few things win customers more than free access. Right now, those who want to try MiniMax’s M1 can do so for free through an API MiniMax runs. Developers can also download the entire model for free and run it on their own computing resources (although in that case, the developers have to pay for the compute time). If MiniMax’s capabilities are what the company claims, it will no doubt gain some traction.

          The other big selling point for M1 is that it has a “context window” of 1 million tokens. A token is a chunk of data, equivalent to about three-quarters of one word of text, and a context window is the limit of how much data the model can use to generate a single response. One million tokens is equivalent to about seven or eight books or one hour of video content. The 1 million–token context window for M1 means it can take in more data than some of the top-performing models: OpenAI’s o3 and Anthropic’s Claude Opus 4, for example, both have context windows of only about 200,000 tokens. Gemini 2.5 Pro, however, also has a 1 million–token context window, and some of Meta’s open-source Llama models have context windows of up to 10 million tokens.
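The rule of thumb above (one token is roughly three-quarters of a word) can be turned into a rough calculator; this is a minimal sketch using the context-window sizes quoted in the article, not official tokenizer counts:

```python
def estimate_tokens(word_count: int) -> int:
    """Rough token estimate: ~1 token per 0.75 words of English text."""
    return round(word_count / 0.75)

# Context windows quoted in the article (in tokens).
context_windows = {
    "MiniMax M1": 1_000_000,
    "OpenAI o3": 200_000,
    "Claude Opus 4": 200_000,
    "Gemini 2.5 Pro": 1_000_000,
    "Llama (largest open-source)": 10_000_000,
}

# A ~90,000-word novel comes out to ~120,000 tokens.
novel_tokens = estimate_tokens(90_000)
for model, window in context_windows.items():
    verdict = "fits" if novel_tokens <= window else "does not fit"
    print(f"{model}: {novel_tokens:,} tokens {verdict} in a {window:,}-token window")
```

By this estimate a single novel fits comfortably in all of the windows listed, but several books at once would overflow the ~200,000-token models while still fitting in M1's 1 million-token window.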

          “MiniMax M1 is INSANE!” writes one X user who claims to have made a Netflix clone—complete with movie trailers, a live website, and “perfect responsive design” in 60 seconds with “zero” coding knowledge.

All content published by Fortune China is the exclusive intellectual property of Fortune Media IP Limited and/or the relevant rights holders. Reproduction, excerpting, copying, or mirroring without permission is prohibited.