
Search results for Qwen
Tweets including Qwen
Qwen3.6-Plus delivers advanced reasoning and real-world performance, optimized for complex workflows like UI-to-code and data analysis. Its Enhanced API utilizes Context Reuse to minimize redundant reasoning and reduce token consumption.
Qwen 3.6 / 3.5 Plus are experiencing capacity issues. The team at Alibaba is working on a fix. We're temporarily taking them offline and will bring them back once resolved.
Qwen3.6 Plus and 3.5 Plus now available in Go - both strong - 3.5 is 3x cheaper - both support images - zero data retention. Update to the latest to try.
Introducing Qwen3.6-Max-Preview, an early preview of our next flagship model:
· Improved agentic coding capability over Qwen3.6-Plus
· Stronger world knowledge and instruction following
· Improved real-world agent and knowledge reliability performance
Smarter, sharper, still evolving.
Meet Qwen3.6-27B, our latest dense, open-source model, packing flagship-level coding power! What's new:
· Outstanding agentic coding
· Strong reasoning across text & multimodal tasks
· Supports thinking & non-thinking modes
· Apache 2.0
Smaller model. Bigger results.
Real-time Qwen3-TTS without vLLM or Triton
We released experimental MTP Qwen3.6 Unsloth GGUFs! Qwen3.6 27B MTP now runs at 140 tokens/s. Qwen3.6 35B-A3B MTP gets 220 tokens/s generation on a single GPU. Qwen3.6 27B and 35B-A3B have >1.4x speed-up over the original GGUFs without any change in accuracy. Guide + GGUFs + Benchmarks:

In terms of average speedup, we see 1.4x for dense models at draft tokens = 2, and around 1.15 to 1.2x for the MoE. We do not recommend more than 2 draft tokens because the acceptance rate drops precipitously from 83% to 50% with 4 draft tokens, and the forward passes for MTP become less beneficial. Use `--spec-type mtp --spec-draft-n-max 2`.

Thanks to Aman for
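The "2 draft tokens, not 4" recommendation above follows from standard speculative-decoding arithmetic. A minimal sketch, assuming each draft token is accepted independently at the quoted rates (a simplification of how MTP acceptance actually behaves, not Unsloth's benchmark methodology):

```python
def expected_tokens_per_pass(acceptance_rate: float, n_draft: int) -> float:
    """Expected tokens committed per target-model forward pass, assuming
    each draft token is accepted independently with the given rate.
    With n drafts, the pass commits 1 + a + a^2 + ... + a^n tokens on average."""
    return sum(acceptance_rate ** k for k in range(n_draft + 1))

# Rates quoted in the post: 83% at 2 draft tokens vs 50% at 4.
print(expected_tokens_per_pass(0.83, 2))  # ~2.52 tokens per pass
print(expected_tokens_per_pass(0.50, 4))  # ~1.94 tokens per pass
```

Under this model, 2 drafts at 83% acceptance commit more tokens per target pass than 4 drafts at 50%, while also spending less compute on drafting, which is consistent with the post's advice.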
Trends has received @Alibaba_Qwen's Happy Horse API credit sponsorship in light of the model's debut yesterday. Over the next 24 hours, whoever shares the most creative campaign idea in the comments will have their proposal sponsored and run as a platform-wide event!
2.3x faster. Ran @UnslothAI Qwen3.6 MTP variants on a DGX Spark (UD-Q6_K_XL):
> 27B → 27B MTP: 8.1 → 18.65 t/s (2.3x faster)
> 35B A3B → 35B A3B MTP: 56.91 → 66.52 t/s (+17%)
The 27B dense model more than doubled throughput from MTP alone. Free speed is free speed.
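The quoted speedups can be sanity-checked from the throughput numbers alone. A quick sketch using only the figures given in the post:

```python
def speedup(base_tps: float, mtp_tps: float) -> float:
    """Throughput ratio of the MTP variant over the baseline."""
    return mtp_tps / base_tps

# 27B dense: 8.1 -> 18.65 t/s
print(round(speedup(8.1, 18.65), 1))              # 2.3 (matches "2.3x faster")
# 35B A3B MoE: 56.91 -> 66.52 t/s
print(round((speedup(56.91, 66.52) - 1) * 100))   # 17 (matches "+17%")
```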
See what we have at Qwen Conference 2026: a 1,000 m² immersive exhibition and four curated tech zones. This is a massive showcase of the entire Qwen ecosystem — from foundation models to agentic infrastructure, from Alibaba Cloud's full-stack AI services to 30+ industry benchmarks delivering real impact. Walk through it all. And while you're here, experience Qwen Cloud — the simplest gateway to access frontier models. See it live, try it live. Scan the QR code, visit Qwen Cloud, and register for Qwen Conference now — then come witness the most immersive AI experience of 2026.