Gradient
@Gradient_HQ
Open infrastructure for open intelligence. Lattica · Parallax · Echo
Joined May 2024
73 Following    722.9K Followers
When you scale parallel agents, prompt updates degrade fast. The more trajectories you process concurrently, the more generic your learned prompts become. Our researchers worked with @lihanc02 and team on Combee to rethink how aggregation works at scale. Results held up across GEPA and ACE even past 80 concurrent agents. Read more on this research👇
Prompt Learning does not scale for parallel agents.

More parallel agents 🤖 = worse prompts 😭

Why? Processing too many trajectories concurrently damages the prompt update process.

🐝 We fix this with Combee:
→ preserves the high-quality learnt system prompt
→ scales to more than 80 concurrent agents
→ up to 17× speedup without quality drop on top of ACE and GEPA

🥽 Use cases:
1. Prompt learning on large-scale collected agent traces
2. Online parallel agent learning with fast knowledge sharing

Read more below to learn how agents actually learn at scale ⬇️
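The failure mode described above (more concurrent trajectories → more generic prompts) can be illustrated with a toy sketch. This is not Combee's actual algorithm; the function names, the majority-vote aggregation rule, and the group size are all assumptions made for illustration. The idea: if lessons from all trajectories are pooled into one global update, niche-but-valuable lessons get averaged away, while a two-level merge over small groups lets a lesson that dominates its own group survive.

```python
from collections import Counter

def naive_update(lessons_per_traj, keep_frac=0.5):
    """Global majority vote: keep only lessons appearing in >= keep_frac
    of all trajectories. As the trajectory count grows, niche lessons
    fall below the bar and the surviving prompt becomes generic."""
    n = len(lessons_per_traj)
    counts = Counter(l for traj in lessons_per_traj for l in set(traj))
    return {l for l, c in counts.items() if c >= keep_frac * n}

def grouped_update(lessons_per_traj, group_size=8, keep_frac=0.5):
    """Hypothetical two-level merge: majority vote within small groups,
    then union the group summaries, so a lesson that dominates its own
    group survives the global merge."""
    merged = set()
    for i in range(0, len(lessons_per_traj), group_size):
        merged |= naive_update(lessons_per_traj[i:i + group_size], keep_frac)
    return merged

# 80 trajectories: 72 learned only a common lesson, 8 also learned a niche one.
trajs = [["plan first"]] * 72 + [["plan first", "retry on timeout"]] * 8

print("retry on timeout" in naive_update(trajs))    # niche lesson lost globally
print("retry on timeout" in grouped_update(trajs))  # survives its own group
```

With 80 trajectories, the niche lesson appears in only 8/80 globally and is dropped by the naive merge, but it fills 8/8 of its own group and survives the grouped merge.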