
Search results: Colossus
Tweets containing "Colossus"
Anthropic has leased the full compute capacity of SpaceX's Colossus data center, 300 megawatts. A powerhouse pairing. - Claude Code's 5-hour rate limits doubled for the Pro, Max, and Team plans - Peak-hour restrictions on Claude Code removed for the Pro and Max plans. Claude did feel much faster today. The compute shortage is real: earlier this week Codex's daily installs overtook Claude Code's, and the race is very tight. Musk was publicly bashing Anthropic not long ago; for the sake of SpaceX's IPO he turned around and partnered with them, spending much of last week meeting with Anthropic's leadership. Anthropic, riding the deal, wrote space data centers into its own roadmap, lifting its "price-to-dream ratio." "No permanent enemies, only shared interests."
We’ve agreed to a partnership with @SpaceX that will substantially increase our compute capacity. This, along with our other recent compute deals, means that we’ve been able to increase our usage limits for Claude Code and the Claude API.
Anthropic will use the full compute capacity of its Colossus 1 data center.
Anthropic has leased all of SpaceX/xAI's Colossus 1 capacity: 220,000+ NVIDIA GPUs, 300+ megawatts, coming online within a month. Cursor has also signed a SpaceX/xAI compute deal... and SpaceX may want to acquire Cursor? Anthropic's changes today: > Claude Code limits doubled > Peak-hour restrictions removed > Opus API limits raised. Companies competing with Musk are now running on Musk's supercomputer. Wild.
Why would xAI lease its Colossus 1 data center to Anthropic? This thread has the sharpest analysis I've seen. The core logic: xAI currently holds roughly 550,000+ GPUs in total (on an H100-equivalent performance basis), and Colossus 1 (220,000 units) accounts for only about 40% of its total available capacity. It is also a mixed H100/H200/GB200 training cluster. A mixed cluster like this is poorly suited to training (communication latency between GPU generations is high) but very well suited to inference (which needs far less tightly synchronized inter-GPU communication). Anthropic, as it happens, needs inference compute more than anything right now, and is big enough to absorb all of Colossus 1's capacity on its own. And with Anthropic occupying all 220,000 GPUs as a single tenant, the network-switch jitter (unexpected latency) that arises under multi-tenancy disappears. The two sides' technical weaknesses end up almost perfectly complementary. Musk kept Colossus 2, built entirely on Blackwell, for himself to train xAI's next-generation models, and leased out the older, mixed-generation Colossus 1. As a mixed H100/H200/GB200 training cluster, Colossus 1 could achieve only 11% MFU (utilization). But once handed over to a single inference customer, it turns into a cash-flow asset leased at roughly $2.60 per GPU-hour (a weighted average of lease rates across GPU types). For xAI, a "cluster from hell" for training becomes a "golden goose" when redeployed for inference, bringing in $5–6 billion a year. That $6 billion carries even more weight against xAI's income statement: annualizing xAI's 1Q26 net loss gives a loss of roughly $6 billion per year. In other words, the $5–6 billion in annual revenue from leasing Colossus 1 almost perfectly hedges xAI's losses. A near-perfect deal for both sides: Anthropic gets the inference compute it urgently needs, and SpaceXAI gets cash flow that covers its AI business's annual losses.
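The lease economics quoted in these posts can be sanity-checked with back-of-envelope arithmetic. The 220,000-GPU count and the ~$2.60/GPU-hour weighted-average rate come from the thread itself; round-the-clock, full-year billing is my simplifying assumption:

```python
# Back-of-envelope check of the Colossus 1 lease revenue.
# Inputs from the thread: 220,000 GPUs at ~$2.60/GPU-hour (weighted average).
# Assumption (mine): the full fleet is billed around the clock all year.
gpus = 220_000
rate_usd_per_gpu_hour = 2.60
hours_per_year = 24 * 365  # 8,760

annual_revenue = gpus * rate_usd_per_gpu_hour * hours_per_year
print(f"~${annual_revenue / 1e9:.1f}B per year")  # prints: ~$5.0B per year
```

Full utilization lands at about $5.0 billion, the low end of the thread's $5–6 billion range; reaching the high end would need a somewhat higher blended rate than the quoted average.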
Why did xAI hand over a 220,000-GPU cluster to Anthropic? The technical backdrop to xAI's decision to hand Colossus 1 over to Anthropic in its entirety is more interesting than it appears. xAI deployed more than 220,000 NVIDIA GPUs at its Colossus 1 data center in Memphis. Of these, roughly 150,000 are estimated to be H100s, 50,000 H200s, and 20,000 GB200s. In other words, three different generations of silicon are mixed together inside a single cluster — a "heterogeneous architecture."

For distributed training, however, this configuration is close to a disaster, according to engineers familiar with the setup. In distributed training, 100,000 GPUs must finish a single step simultaneously before the cluster can advance to the next one. Even if the GB200s finish their computation first, the remaining 99,999 chips have to wait for the slower H100s — or for any GPU that has hit a stack-related snag — to catch up. This is known as the straggler effect.

The 11% GPU utilization rate (MFU: the share of theoretical FLOPs actually realized) at xAI recently reported by The Information can be read as the numerical fallout of this problem. It stands in stark contrast to the 40%-plus MFU figures achieved by Meta and Google.

The problem runs deeper still. As discussed earlier, NVIDIA's NCCL has traditionally been optimized for a ring topology. It works beautifully at the 1,000–10,000 GPU scale, but once you push into the 100,000-unit range, the latency of data traversing the ring once around becomes punishingly long. GPUs need to churn through computations rapidly to keep MFU high, but while they sit waiting endlessly for data to arrive over the network fabric, more than half of the silicon sits idle. Google sidestepped this bottleneck with its own custom topology (Google's OCS: Apollo/Palomar), but xAI, by my read, has not yet reached that stage.

Layer Blackwell's (GB200) "power smoothing" issue on top, and the picture comes into focus.
According to Zeeshan Patel, formerly in charge of multimodal pre-training at xAI, Blackwell GPUs draw power so aggressively that the chip itself includes a hardware feature for smoothing power delivery. xAI's existing software stack, however, was optimized for Hopper and does not understand the characteristics of the new hardware; when it imposes irregular loads on the chip, the silicon physically self-destructs — it literally melts. That means the modeling stack must be rewritten from scratch, which in turn means scaling is far harder than most of us imagine.

Pulling all of this together points to a single conclusion. xAI judged that training frontier models on Colossus 1 simply was not efficient enough to be worthwhile. It therefore moved its own training workloads wholesale onto Colossus 2, built as a 100% Blackwell homogeneous cluster. Colossus 1, on the other hand — whose mixed architecture is far less crippling for inference, which parallelizes more forgivingly — was leased in its entirety to an Anthropic that desperately needed inference capacity.

Many observers point to what looks like a contradiction: Elon Musk poured enormous capital into building Colossus, only to hand the core asset over to a direct competitor in Anthropic. Others read it as xAI capitulating because it is a "middling frontier lab." But these are surface-level reads.

Look at the numbers and a different picture emerges. xAI today holds roughly 550,000+ GPUs in total (on an H100-equivalent performance basis), and Colossus 1 (220,000 units) accounts for only about 40% of the total available capacity. Colossus 2 — built entirely on Blackwell — is already operational and continuing to expand. Elon kept the all-Blackwell homogeneous cluster (Colossus 2) for himself and leased out the older, mixed-generation Colossus 1. In other words, he handed the pain of rewriting the stack — the MFU-11% debacle — to Anthropic, while keeping his own focus on training the next generation of models.
The real point, then, is this. Elon's objective appears to be positioning ahead of the SpaceXAI IPO at a $1.75 trillion valuation, currently floated for as early as June. The narrative SpaceXAI now needs is that xAI — long the "sore finger" — is not merely a research lab burning cash, but a business with a "neo-cloud" model in the mold of AWS, capable of leasing surplus assets at high yields. From a cost-of-capital perspective, an "AGI cash incinerator" is far less attractive to investors than a "data-center landlord generating cash."

As noted above, the most important detail of the Colossus 1 lease is that it is for inference, not training. Unlike training, inference requires far less tightly synchronized inter-GPU communication. Even when the chips are heterogeneous, the workload parcels out cleanly across them in parallel. The straggler effect — the chief weakness of a mixed cluster — is essentially neutralized for inference workloads. Furthermore, with Anthropic occupying all 220,000 GPUs as a single tenant, the network-switch jitter (unanticipated latency) that arises under multi-tenancy disappears. The two sides' technical weaknesses end up complementing each other almost exactly.

One insight follows. As a training cluster mixing H100/H200/GB200, Colossus 1 was an asset that could only deliver an MFU of 11%. The moment it was handed over to a single inference customer, however, that asset transformed into a cash-flow asset rented out at roughly $2.60 per GPU-hour (a weighted average of the lease rates across GPU types). For xAI, what was a "cluster from hell" for training has become a "golden goose" minting $5–6 billion in annual revenue when redeployed for inference. Elon's genius, I would argue, lies not in the model but in this asset-rotation structure.

The weight of that $6 billion becomes clearer when set against xAI's income statement. Annualizing xAI's 1Q26 net loss yields roughly $6 billion in losses per year.
The $5–6 billion in annual revenue generated by leasing Colossus 1 to Anthropic, in other words, almost perfectly hedges xAI's loss figure. This single deal effectively pulls xAI to break-even. Heading into the SpaceXAI IPO, this functions as a core line of financial defense. From a cost-of-capital standpoint, if the image shifts from "research lab burning cash" to "infrastructure tollgate stably printing $6 billion a year," the entire tone of the offering can change. (May 8, 2026, Mirae Asset Securities)
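The straggler effect described in the thread can be illustrated with a toy probability model: in synchronous training a step completes only when every worker finishes, so rare per-GPU stalls compound with cluster size. The 220,000-GPU figure comes from the thread; the per-step stall probability is an invented illustrative number, not a measurement from Colossus 1:

```python
# Toy straggler model: if any single GPU stalls, the whole synchronous
# training step waits for it. p_hiccup is an illustrative assumption,
# not a measured Colossus 1 value.
p_hiccup = 1e-5  # assumed chance that any one GPU stalls on a given step

for n_gpus in (1_000, 10_000, 100_000, 220_000):
    # P(at least one straggler) = 1 - P(every GPU runs clean)
    p_stalled_step = 1 - (1 - p_hiccup) ** n_gpus
    print(f"{n_gpus:>7} GPUs: {p_stalled_step:5.1%} of steps wait on a straggler")
```

Under this assumption, roughly 1% of steps stall at 1,000 GPUs, about 63% at 100,000, and nearly 89% at 220,000 — the scale effect the thread blames for the gap between small-cluster MFU and the reported 11% at 100,000-GPU-class scale.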
SpaceX AI has signed an agreement with Anthropic under which Anthropic will gain access to "Colossus 1," one of the world's largest AI supercomputers. Anthropic has also expressed interest in partnering with SpaceX to build multi-gigawatt AI data centers in space.
Cypherium test mining is now live; anyone with 96 GB+ of GPU memory is welcome to take up the challenge. GitHub - CypherTroopers/cypher at ecdsa_1.1_test_colossus-Xv2test · GitHub
A "new king" is crowned in AI: Anthropic's pre-IPO valuation will exceed $1.2 trillion. No wonder Anthropic's CEO was so excited he was tripping over his words. Yesterday Musk officially announced that xAI would be dissolved and folded into SpaceX. At the same time, he leased the world's most powerful supercomputer, Colossus 1, all 220,000 GPUs of it, to OpenAI's arch-rival Anthropic. Trying to oust Altman in court with one hand while handing compute to his rival with the other: a brilliant flanking move by Musk. Anthropic's compute-alliance lineup is formidable: Amazon: 5 GW capacity agreement (nearly 1 GW online by end of 2026) plus a $25 billion investment; Google + Broadcom: 5 GW of TPU capacity (coming online from 2027) plus a $40 billion investment; Microsoft + NVIDIA: $30 billion of Azure capacity; Fluidstack: $50 billion of infrastructure; SpaceX: 220,000 GPUs, 300 megawatts, usable right now.
Musk says xAI will no longer exist as an independent company and will be renamed SpaceXAI, i.e. SpaceX's AI product line. Shortly before, SpaceX and Anthropic issued simultaneous announcements of a compute partnership: Anthropic will use the full capacity of SpaceX's Memphis Colossus 1 data center (more than 220,000 NVIDIA GPUs) to serve Claude Pro and Max users.
@SawyerMerritt xAI will be dissolved as a separate company, so it will just be SpaceXAI, the AI products from SpaceX
🔥 Big news! xAI and Anthropic announce a major compute partnership! Anthropic will gain access to xAI's Colossus 1, the world's largest and fastest-deployed AI supercomputer, packing more than 220,000 NVIDIA GPUs (H100, H200, GB200)! The partnership will substantially expand compute capacity for Claude Pro and Claude Max, letting more users run Claude freely 🚀 Even wilder: the two sides also plan to jointly develop "orbital AI compute," combining SpaceX's capabilities to build multi-gigawatt space infrastructure and send AI straight into orbit! The AI infrastructure era has officially begun! #xAI# #Anthropic# #Colossus# #AI# #SpaceX#
SpaceXAI will provide @AnthropicAI with access to Colossus 1, one of the world’s largest and fastest-deployed AI supercomputers, to provide additional capacity for Claude →
What does SpaceX's acquisition of Cursor mean for both sides and for the industry? Musk has found xAI a lever to break out of its slump. SpaceX officially announced a deep strategic partnership with Cursor:
1) Cursor will use SpaceX/xAI's Colossus supercomputer (compute on the order of a million H100-class GPUs) to train its next-generation code model, Composer 2.5;
2) The two will jointly build "the world's best AI for coding and knowledge work";
3) Cursor has granted SpaceX an acquisition option: later this year, SpaceX may choose to acquire Cursor for $60 billion, or pay $10 billion as consideration for the joint work (something like a breakup fee or partnership payment).

Word got out only last week that Cursor was raising money; today Cursor is effectively selling to xAI, so the independent raise apparently fell through. To be clear, this is not a completed acquisition but a "deep binding first, optional buyout later" structure. There were earlier reports that xAI poached two of Cursor's key executives (engineering lead Andrew Milich and product lead Jason Ginsberg); the deepened partnership matters enormously for both companies, especially with xAI's co-founders mostly gone, its models and coding products falling well behind, and its data-center utilization running low.

1. For xAI / SpaceX, that is, Musk's ecosystem:
1) It quickly fills the coding gap. xAI's Grok is respectable as a general model but has lagged in the professional code/programming-agent space (where Cursor is the leading product, with strong product quality, engineer mindshare, and developer distribution). With this partnership, Grok can quickly become a top-tier coding agent.
2) Compute monetization plus stemming the bleeding. xAI built a massive GPU fleet (the Colossus clusters) and runs huge monthly operating losses (previously reported at over $300 million per month). Renting compute to Cursor is the first step in a pivot toward cloud-compute provider, bringing new revenue while lowering net data-center cost.
3) A talent-and-data double win. Having already poached key people, binding in Cursor's product and coding data can accelerate xAI's internal programming tools (Grok Build, Grok CLI, and the like are reportedly coming).
4) SpaceX's rocket/Starship/Starlink engineering is heavily code-dependent; bringing in Cursor directly boosts internal productivity.

2. For Cursor:
1) It solves the compute bottleneck. What AI training lacks most is GPUs, especially in the current compute crunch, and xAI's Colossus is a scarce resource. Cursor can train models on tens of thousands of GPUs and accelerate its chase of Anthropic, OpenAI, and others.
2) A high-valuation exit path. Cursor's recent funding valuations sat in the $29–50 billion range; the $60 billion acquisition option provides a clear premium exit (with the $10 billion partnership fee as a floor).
3) Risk hedging. In the fiercely competitive AI coding race (OpenAI Codex, Anthropic Claude Code, and others), binding to the Musk ecosystem brings more resources and exposure.

3. For the AI industry as a whole:
1) AI infrastructure competition intensifies. xAI shifts from a pure model company to one that also sells compute, putting it in potential competition with Microsoft Azure, Google Cloud, CoreWeave, and others.
2) The talent and M&A wars escalate. OpenAI, Anthropic, and others had also shown interest in Cursor's coding data; Musk has now moved first to lock it in.
3) Competition across models, compute infrastructure, and coding will only intensify. As noted, xAI's co-founders have largely departed and the company had fallen into a genuine bind; this is clearly Musk's way out, giving xAI a strong new foothold in AI.

This looks like a classic Musk-style leveraged deal: trading compute for product, talent, and an option, quickly patching weaknesses while finding xAI a new revenue model. Whether the $60 billion option is ultimately exercised will depend on how the partnership performs, on the SpaceX IPO's progress, and on market valuations. For now the two are tightly bound, and the odds that SpaceX eventually acquires Cursor outright are not low.
SpaceX announces the acquisition of xAI. The merged company is expected to price shares at roughly $527 each, for a valuation of $1.25 trillion. The new company spans artificial intelligence, rockets, space internet, direct-to-device communications, and the world's leading real-time information and speech platform, making it perhaps the most sprawling high-tech enterprise anywhere. Why is Musk folding xAI into SpaceX?

1. It solves xAI's extreme cash burn. This is the most direct and urgent motive: xAI burns roughly $1 billion a month (training Grok, building the Colossus supercomputer clusters, and so on). Its valuation has soared to $200–230 billion, but it remains, at bottom, a company in a pure cash-burning phase. SpaceX has healthy cash flow (Starlink is now solidly profitable and the launch business is steady) and a higher valuation (in the $800 billion to $1.5 trillion range), making it a natural transfusion platform.

2. It realizes the ultimate vision of a "space supercomputing center." This is the most imaginative part of the merger. Musk has said publicly many times that ultra-large AI data centers on Earth will hit limits on power, cooling, land, and regulation, while space (orbit) is a better venue: Starlink provides global low-latency networking; SpaceX's rockets and reusability provide launch capacity; and solar power plus radiative cooling in space are nearly unlimited. With xAI acquired, this is no longer a "partnership" between two companies but a strategic priority inside a single entity, with decision speed and resource allocation on an entirely different level. A future version of Colossus may well be built directly in orbit.

3. It paves the way for the SpaceX IPO and manufactures a super-narrative. This is capital-markets maneuvering Musk knows well. SpaceX had planned a 2026 IPO targeting a $1–1.5 trillion valuation. A pure "rockets + satellites" company is already expensive, but add xAI and Grok and it instantly becomes a "space + AI + global communications + understanding the universe" story, far sexier and more imaginative as a narrative.

4. It further consolidates Musk's business empire and strengthens his control, pulling his currently scattered flagship businesses together for tighter control and more efficient resource allocation.

5. It is the best option among the alternatives:
1) Letting xAI keep burning cash independently makes private fundraising ever harder (the higher the valuation, the less capital the market can absorb), on top of fierce competition for funding among frontier-model companies;
2) Having Tesla acquire xAI means a complex public-company acquisition process and heavy regulatory scrutiny;
3) SpaceX acquiring xAI, with both companies privately controlled by Musk and unlisted, faces the least resistance and integrates most smoothly, while also enabling deep cooperation on space data centers and related lines.

In one sentence: SpaceX's acquisition of xAI is, in essence, using the strongest cash cow plus launch capability to protect the most cash-hungry but most imaginative AI brain, while manufacturing an epic narrative for the SpaceX IPO and laying groundwork for Musk's grand "space + AI + mind uploading" story. A new way to play tokenized US stocks on-chain, starting with Bitget: