Prime Intellect
@PrimeIntellect
The Open Stack for Self-Improving Agents
37 Following    64.1K Followers
We started automating AI research on nanoGPT speedruns & achieved new records
> for 2 weeks GPT 5.5 and Opus 4.7 iterated on novel optimizations
> 10k runs & 14k H200 hours
> both agents beat the human baseline
> Opus now holds the record at 2930 steps
Awesome work @eliebakouch!
.@eliebakouch let the agents go wild on our idle compute to compete in the nanoGPT speedrun optimizer track!
if you weren’t aware, it’s prime intellect season
we let opus 4.7 and gpt 5.5 run on the nanogpt optimizer speedrun: ~10k runs, 14k H200 hours, 23.9B tokens. opus hits 2930, codex 2950, both beating the human baseline of 2990. we cover claude autonomy failures, codex high compute usage, and much more
suuuuper excited to be collaborating with the excellent LangChain Labs team on this effort. prod agent tracing is the seed that lets you close the loop for continual learning. too much data gets collected but not used for learning. time to change that :)
We are excited to be partnering with @LangChain for deploying self-improving agents. Continual learning in your production environment unlocks compounding capability gains for model-product optimization. Your data. Your advantage.
Push an open-weight agent model as far as you can. RL and fine-tune Laguna XS.2, Poolside's latest-generation model, on Lab. 2-day model research hackathon in London (May 29–30) @poolsideai x @nvidia x @huggingface x @PrimeIntellect
Poolside is hosting a 2-day model research hackathon in London. Join us to push an open-weight agent model as far as you can. RL and fine-tune Laguna XS.2, our latest-generation model, on Prime Intellect Lab.
Dates: May 29–30
Partners: @nvidia + @PrimeIntellect + @huggingface
Prize: NVIDIA DGX Spark
Agents need better models. Better models need cracked researchers. Link below.
Will be giving a talk titled “You should do RL for long-running agents (and use RLMs)” at 4pm on Sat at AI Engineer Singapore. Excited to see you all!
prime intellect 🤝 poolside
come hang and train your own model :) v excited to support this, Laguna XS.2 is a really great base for custom agents you can run locally
Applied Research Hackathon. We’re sponsoring compute. @PrimeIntellect’s excellent stack will be there to support RL and evals. Excited to see what people will build.
Introducing Renderers
RL trainers work in tokens. Environments work in messages. Going back and forth corrupts sampled tokens, wasting compute on every agentic turn. With Renderers, we fix this mismatch. This unlocks >3x throughput on popular open models.
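The mismatch the Renderers post describes comes down to the fact that detokenize-then-retokenize is not the identity at the token level: the same string can correspond to more than one token sequence, so converting a trainer's sampled tokens to messages and back can silently change them. A minimal sketch of that effect, using a toy greedy tokenizer and a made-up vocabulary (both hypothetical illustrations, not Prime Intellect's actual implementation):

```python
def detokenize(tokens):
    # Joining token strings recovers the text exactly...
    return "".join(tokens)

def tokenize(text):
    # ...but re-tokenizing uses greedy longest-match over a toy
    # vocabulary (an assumption for illustration), which need not
    # reproduce the original segmentation.
    vocab = ["hell", "llo", "he", "o"]
    tokens, i = [], 0
    while i < len(text):
        for piece in vocab:  # already sorted longest-first
            if text.startswith(piece, i):
                tokens.append(piece)
                i += len(piece)
                break
        else:
            tokens.append(text[i])  # fall back to single characters
            i += 1
    return tokens

sampled = ["he", "llo"]  # tokens as the model actually sampled them
round_trip = tokenize(detokenize(sampled))
print(sampled, round_trip)  # ['he', 'llo'] vs ['hell', 'o']
```

Real BPE tokenizers show the same property around merge rules and whitespace, which is why a trainer that credits log-probs to re-tokenized text can end up scoring token sequences the model never produced.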