Daniel Han
@danielhanchen
Building @UnslothAI. Faster RL / training. LLMs bug hunter. OSS package YC S24. Prev ML at NVIDIA. Hyperlearn used by NASA.
We released experimental MTP Qwen3.6 Unsloth GGUFs!

Qwen3.6 27B MTP now runs at 140 tokens/s, and Qwen3.6 35B-A3B MTP reaches 220 tokens/s generation on a single GPU. Both get a >1.4x speed-up over the original GGUFs with no change in accuracy.

Guide + GGUFs + Benchmarks:

In terms of average speedup, we see 1.4x for dense models at draft tokens = 2, and around 1.15-1.2x for the MoE. We do not recommend more than 2 draft tokens: the acceptance rate drops precipitously from 83% to 50% with 4 draft tokens, and the extra MTP forward passes become less beneficial.

Use `--spec-type mtp --spec-draft-n-max 2`

Thanks to Aman for
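To see why more draft tokens stop paying off as acceptance falls, here is a minimal sketch of the standard speculative-decoding speedup model. It assumes each draft token is accepted independently with probability `p` and that a rejection ends the chain; `draft_cost` (the cost of one draft token relative to a full target forward pass) is a hypothetical overhead parameter, not a measured value from the post.

```python
def expected_tokens_per_step(p: float, n: int) -> float:
    """Expected tokens emitted per verification step with n draft tokens.

    The target forward pass always yields 1 token; each draft token k is
    accepted only if all earlier drafts were, so it contributes p**k in
    expectation (simplified independence assumption).
    """
    return sum(p ** k for k in range(n + 1))


def relative_speedup(p: float, n: int, draft_cost: float) -> float:
    """Speedup over plain decoding: tokens per step divided by step cost.

    Step cost = 1 target pass + n draft passes at draft_cost each
    (draft_cost is a hypothetical fraction of a target pass).
    """
    return expected_tokens_per_step(p, n) / (1.0 + n * draft_cost)


# Acceptance rates from the post: ~83% at 2 drafts, ~50% at 4 drafts.
two_drafts = relative_speedup(0.83, 2, draft_cost=0.25)
four_drafts = relative_speedup(0.50, 4, draft_cost=0.25)
print(f"2 drafts @ 83% acceptance: {two_drafts:.2f}x")
print(f"4 drafts @ 50% acceptance: {four_drafts:.2f}x")
```

Under this toy model, 2 drafts at 83% acceptance clearly beat 4 drafts at 50%: the extra draft passes cost more than the occasionally accepted extra tokens return, which is the post's argument for capping `--spec-draft-n-max` at 2.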