We’re excited to welcome Mooncake to the PyTorch Ecosystem!
Mooncake is designed to solve the “memory wall” in LLM serving. Integrating Mooncake’s high-performance KVCache transfer and storage capabilities with PyTorch-native inference engines like SGLang, vLLM, and TensorRT-LLM unlocks new levels of throughput and scalability for large language model deployments.
Mooncake enables prefill-decode disaggregation, global KVCache reuse, and elastic expert parallelism, and serves as a fault-tolerant PyTorch distributed backend.
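To make the idea of global KVCache reuse concrete, here is a minimal sketch of a cache pool keyed by prompt-prefix hash. This is illustrative only and is not Mooncake's actual API: real systems store GPU tensors and move them over RDMA, while this toy uses plain dicts and strings. The class and method names are assumptions for the example.

```python
import hashlib


class KVCacheStore:
    """Toy global KV-cache pool keyed by prompt-prefix hash.

    Illustrative only: a real system like Mooncake holds GPU KV tensors
    and transfers them between prefill and decode nodes; here we use a
    plain in-process dict.
    """

    def __init__(self):
        self._pool = {}

    @staticmethod
    def _key(prefix_tokens):
        # Hash the token prefix so identical prompt prefixes map to one entry.
        joined = " ".join(map(str, prefix_tokens))
        return hashlib.sha256(joined.encode()).hexdigest()

    def put(self, prefix_tokens, kv_blocks):
        # Prefill node publishes the KV blocks it computed for this prefix.
        self._pool[self._key(prefix_tokens)] = kv_blocks

    def get(self, prefix_tokens):
        # A hit lets a decode node skip recomputing prefill for this prefix.
        return self._pool.get(self._key(prefix_tokens))


store = KVCacheStore()
store.put([1, 2, 3], "kv-blocks-for-prefix-123")
print(store.get([1, 2, 3]))  # hit: the stored KV blocks
print(store.get([9, 9]))     # miss: None
```

The point of hashing the prefix is that any request sharing the same prompt prefix (a system prompt, a long shared context) can reuse the already-computed KV blocks instead of re-running prefill.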
#PyTorch #OpenSourceAI #LLM #AIInfrastructure
The AI infrastructure race is moving deeper into the hundreds of billions.
$AMZN plans around $200B in 2026 AI capital spending, followed by $MSFT at $190B and $GOOGL near $180–190B.
$META is targeting up to $145B, while $CRWV stands out for how capital-intensive its growth model is relative to expected sales.
$ORCL is also scaling aggressively with ~$50B planned.
Our AI infrastructure releases have focused on exposing wallet, exchange and onchain functionality through agent-compatible interfaces.
Our repositories now include:
1. Agentic Wallet with TEE-secured signing
2. MCP integrations for AI-native workflows
3. CLI + Skills tooling via Onchain OS
4. Agent Trade Kit components for trading automation
5. Transaction simulation and risk grading before execution
6. Multi-chain support across Ethereum, Solana, X Layer and others
7. x402-compatible payment tooling
8. DEX routing, wallet operations and transaction broadcasting APIs
The current architecture exposes these capabilities through MCP servers, CLI tooling, open APIs, and installable Skills repositories.
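Item 5 above, simulating a transaction and grading its risk before execution, can be sketched as a simple policy gate. This is a hypothetical illustration, not the project's actual implementation: the `SimResult` fields, thresholds, and grade labels are all assumptions made up for the example.

```python
from dataclasses import dataclass


@dataclass
class SimResult:
    """Hypothetical output of a pre-execution transaction simulation.

    Field names and semantics are illustrative, not any real API.
    """
    balance_delta_usd: float        # net value change for the signer
    touches_unknown_contract: bool  # calls a contract not on an allowlist


def risk_grade(sim: SimResult) -> str:
    # Grade a simulated transaction before it is signed and broadcast.
    if sim.touches_unknown_contract:
        return "high"
    if sim.balance_delta_usd < -1000:
        return "medium"
    return "low"


def may_execute(sim: SimResult) -> bool:
    # Policy gate: only low- and medium-risk transactions proceed to signing.
    return risk_grade(sim) in ("low", "medium")


print(may_execute(SimResult(balance_delta_usd=-50, touches_unknown_contract=False)))  # True
print(may_execute(SimResult(balance_delta_usd=0, touches_unknown_contract=True)))     # False
```

The design point is simply that signing happens only after simulation and grading, so a high-risk transaction never reaches the broadcast step.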
This is the vision set by Star: develop AI infrastructure while preserving execution controls around signing, permissions, and transaction risk. More is coming in the near future.
Can your AI infrastructure run itself?
SuperCloud Software Suite within Data Center Building Block Solutions unifies infrastructure control, automates deployment pipelines, empowers self-service AI tools, and supports GPU cloud operations.
$NOK — Nokia is rebuilding itself around the AI infrastructure cycle — across optical networking, AI silicon, and AI-RAN.
Key highlights:
— Q1 2026 net sales in optical networking grew 20% YoY. JPMorgan maintained an Overweight rating, pointing to AI and cloud order strength as a potential upside catalyst.
— Launched four AI-workload-optimized DSPs at the March OFC conference, expected to lower customer TCO by up to 70%.
— Partnering with Nvidia and other chip leaders on the AI-RAN alliance, bringing radio access networks onto AI accelerators.
Three structural shifts, one company quietly repositioning.
Trade $NOK and 900+ U.S. and Hong Kong equities with stablecoins on StableStock.
$GLW — Corning is becoming a critical link in the AI data center optical supply chain — with capital, capacity, and customers converging at once.
Key highlights:
— Q1 2026 EPS of $0.70 came in strong, with optical communications revenue up 36% YoY on robust AI demand.
— Nvidia took a $500M stake in Corning, while the company committed to expanding U.S. fiber optic capacity by 50%+ to supply AI data centers.
— Long-term agreements signed with multiple hyperscalers, including a $6B order from Meta — locking in Corning's optical positioning across the AI supply chain.
Three converging order streams — fiber, silicon, and cloud — reshaping a 174-year-old industrial company into an AI infrastructure name.
Trade $GLW and 1,000+ U.S. and Hong Kong equities with stablecoins on StableStock.
An interesting line in Politico’s coverage of the proposed AI executive order, which, at 16 pages, is also much longer than expected. This is still under discussion and not yet finalized, and everything I'm about to write is conjecture, but it appears the administration intends to regulate US open-weight models.
Here are the reasons why this will almost certainly happen in some form.
Open-weight models are currently about nine months behind the frontier. Once the big labs are subjected to pre-release screening, development itself will not slow down, but the release cadence will. At that point, open-weight development will close the gap quickly, much faster than nine months. When those models surpass the big labs, everyone will switch to using open-weight alternatives.
From the administration’s perspective, allowing this option defeats the entire purpose of regulation. If the government is restricting and vetting models beyond a certain capability level, and people can simply switch to open-weight models that are just as capable (and eventually more capable, as the big labs slow their release schedules under the new rules), then the situation becomes even worse from the government's perspective. They will not allow this to happen.
Second, the big labs themselves have almost certainly been covertly lobbying for open-weight models to be included in any new regulations. Allowing the public to switch to a superior, free alternative would completely destroy their business models, potentially bankrupting them all. Given the enormous scale of current investment in these companies and in AI infrastructure, the broader economy would also suffer "significant disruption".
That leaves China. If the two dynamics above play out, the same pattern repeats: everyone switches to Chinese open-weight models, which now quickly surpass both US closed and open releases. This produces the same consequences for the big labs, and causes the same issues with regulation. The government therefore has only two realistic options: ban Chinese models from use in the West, or negotiate a deal with Xi Jinping to impose identical regulation and pre-release vetting on open-weight models in China.
The first option would mean China pulls ahead and wins the AI race, so the administration will almost certainly pursue the second. Negotiations are likely already underway, because the ideal outcome for the administration would be to announce that China has agreed to restrictions similar to the ones it is imposing, thereby blunting domestic backlash. China will know it has the US over a barrel and will insist on compromises, such as lifting all export controls on NVIDIA GPUs.
New listings dropping Monday, May 18.
The AI stack is about to be tradeable end-to-end. Long or short, with leverage, 24/7.
Four stock perpetual futures spanning the AI infrastructure stack:
Cerebras Systems $CBRS: the company with the most efficient AI chip purpose-built for inference. Just IPO'd, and now it's getting a perp.
Taiwan Semiconductor $TSM: the foundry that fabricates every leading-edge AI chip on earth. Nvidia and Cerebras design; TSM builds.
Nebius Group $NBIS: AI-native cloud infrastructure. GPU compute, purpose-built and rentable. The AWS of the AI era.
Bloom Energy $BE: on-site power generation so AI data centers don't wait years for the grid. Already deployed with Oracle.
These markets will open once liquidity conditions are met, in regions where trading is supported. Perpetual futures are available to retail traders and institutions in select jurisdictions.
Antier Solutions $3M Funding Round⚡️
📑 About:
@antiersolutions is a blockchain and AI infrastructure development platform for enterprise digital systems.
🤝 Investor:
@GvflLimited (Lead)
Everyone’s favorite intern @ROCKST4R took the stage at @consensus2026’s House of AI to walk through the Cysic AI infrastructure and demonstrate how CyOps works in action.
Special thanks to @lagrangedev and @0G_labs for hosting!