IPO Newsroom
@IPONewsroom_
Your IPO newsroom. S-1 filings, pricing updates, debut coverage, and post-IPO performance.
Joined November 2024
99 Following    20.3K Followers
CEREBRAS $CBRS IS ABOUT TO GO PUBLIC ... HERE IS A DEEP DIVE ON WHAT THEY ACTUALLY DO

Most people have no idea what this company actually does. Here is the plain-English version:

THE BIG IDEA
Every AI model you have ever used (ChatGPT, Claude, Gemini, Llama) is trained on chips. The dominant company is NVIDIA $NVDA, which sells lots of small chips that get wired together into clusters by the thousands.

Cerebras went the opposite direction. They build ONE giant chip the size of a dinner plate. Not exaggerating. Their flagship "Wafer-Scale Engine" (WSE-3) is 46,225 square millimeters of silicon. A normal AI chip is smaller than a postage stamp.

THE NUMBERS
On that one giant chip:
- 4 trillion transistors (a top-end NVIDIA GPU has roughly 80 billion)
- 900,000 AI cores
- 44 GB of on-chip memory
- 125 petaflops of compute power
- Built on TSMC's 5nm process

WHY THAT MATTERS
When you wire thousands of small chips together, the wires become the bottleneck. Data has to travel between chips constantly. That eats time, power, and money.

Cerebras keeps everything on one piece of silicon. No cables between chips. No slow networking. Just one giant brain. The pitch: faster training, faster inference, fewer engineers needed to manage cluster bottlenecks.

WHAT THEY SELL
Three ways to use Cerebras:
- Buy the system outright (the "CS-3" is the box that holds the chip)
- Rent compute via Cerebras Cloud
- Dedicated capacity contracts for big customers

They have 6 new AI inference data centers coming online across North America and Europe.
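The scale claims in THE NUMBERS can be sanity-checked with quick arithmetic. A sketch using the thread's own figures; the ~814 mm² die area for a top-end GPU is my assumption for illustration, not a number from the thread:

```python
# Back-of-the-envelope scale comparison using the figures quoted above.
WSE3_TRANSISTORS = 4e12    # 4 trillion transistors on the WSE-3
GPU_TRANSISTORS = 80e9     # ~80 billion on a top-end NVIDIA GPU (thread's rough figure)

WSE3_AREA_MM2 = 46_225     # WSE-3 silicon area quoted above
GPU_AREA_MM2 = 814         # assumed die area for a top-end GPU, not from the thread

print(f"Transistor ratio: {WSE3_TRANSISTORS / GPU_TRANSISTORS:.0f}x")  # 50x
print(f"Silicon area ratio: {WSE3_AREA_MM2 / GPU_AREA_MM2:.0f}x")      # 57x
```

So by raw transistor count, one WSE-3 stands in for roughly fifty conventional GPUs, which is the whole "one giant chip instead of a cluster" pitch in a single number.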
WHO ACTUALLY USES IT
The customer list is the validation:
- OpenAI: $20B+ committed over three years
- Meta $META: powers the Llama API for developers
- Perplexity: runs its Sonar search model on Cerebras
- Mistral: the French AI lab runs Le Chat on Cerebras
- Mayo Clinic: trains genomic AI models on Cerebras infrastructure
- GSK $GSK: trains biological language models
- Argonne National Lab: has used Cerebras hardware since 2019
- AWS: hosts Cerebras chips inside Amazon data centers, accessed through Bedrock
- US Department of Energy: signed an MOU for the Genesis Mission

THE TRADE-OFF
Cerebras is small compared to NVIDIA. 2025 revenue: $510M. 2025 operating loss: $146M.

Concentration is the risk most coverage will not flag:
- G42 (the UAE conglomerate) was 85% of 2024 revenue per Reuters
- G42 plus MBZUAI (the Abu Dhabi AI university) were 86% of 2025 revenue per FT
- The OpenAI deal is the big bet to diversify away from that concentration

THE STORY IN ONE LINE
NVIDIA bet that the future of AI is millions of small chips working together. Cerebras bet on one giant chip doing the work in one place. The market just decided their bet is worth nearly twice what they priced it at.
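The concentration figures above translate into stark dollar amounts. A quick sketch from the thread's own 2025 numbers ($510M revenue, 86% from G42 plus MBZUAI per FT):

```python
# Revenue concentration in dollars, derived from the percentages quoted above.
revenue_2025 = 510e6        # 2025 revenue: $510M
concentrated_share = 0.86   # G42 + MBZUAI share of 2025 revenue per FT

concentrated = revenue_2025 * concentrated_share
everyone_else = revenue_2025 - concentrated

print(f"G42 + MBZUAI:  ${concentrated / 1e6:.0f}M")   # $439M
print(f"All other customers: ${everyone_else / 1e6:.0f}M")  # $71M
```

In other words, every customer outside the two Abu Dhabi-linked buyers combined for roughly $71M, which is why the OpenAI commitment matters so much for diversification.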