Qwen 3.5 Launches: Hybrid Architecture Meets Open-Weight Power

2/19/2026
The landscape of open-source artificial intelligence has just witnessed a monumental shift. Alibaba Cloud has officially released Qwen 3.5, introducing the first model in the series, Qwen3.5-397B-A17B, with open weights. This release is not merely an incremental update; it represents a fundamental rethinking of model architecture, balancing massive scale with extreme efficiency.

https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3.5/Figures/qwen3.5_397b_a17b_score.png

The Efficiency Paradox: Solved

At the heart of Qwen 3.5 lies an innovative hybrid architecture. While the model boasts a staggering 397 billion total parameters, it uses a fusion of linear attention (via Gated Delta Networks) and a sparse Mixture-of-Experts (MoE). This design allows it to activate only 17 billion parameters per forward pass. The result? A model that possesses the deep knowledge reservoir of a giant but operates with the speed and cost-effectiveness of a much smaller system.

https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3.5/Figures/qwen3.5_397b_a17b_scaling.png

Benchmark Dominance

In direct comparison with frontier models, Qwen 3.5 demonstrates exceptional capability. According to the released evaluation data, it scores 90.8 on OmniDocBench v1.5 (document recognition and understanding), surpassing industry heavyweights like GPT-5.2 (85.7) and Claude Opus 4.5 (87.7). Its prowess extends to long-context tasks as well: on LongBench v2 it scores 63.2, significantly outperforming GPT-5.2's 54.5. Whether the task is coding, reasoning, or complex agentic workflows, Qwen 3.5 is proving to be a formidable competitor.

https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3.5/Figures/qwen3.5_397b_a17b_inference.png

A Truly Global and Multimodal Agent

Qwen 3.5 is designed as a "native multimodal agent". It has expanded its linguistic coverage from 119 to 201 languages and dialects, making it one of the most culturally inclusive models available.
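To make the "397B total, 17B active" idea concrete: in a sparse MoE layer, a small router scores all experts for each token, but only the top-k experts actually run. Here is a minimal NumPy sketch of top-k routing; the dimensions, router, and expert shapes are illustrative assumptions, not the actual Qwen 3.5 configuration.

```python
import numpy as np

def moe_forward(x, gate_w, experts, top_k=2):
    """Sparse Mixture-of-Experts: route one token to its top-k experts.

    x       : (d,) token representation
    gate_w  : (n_experts, d) router weights
    experts : list of (d, d) expert weight matrices

    Compute cost scales with top_k, not with len(experts) -- the same
    principle that lets a huge-total-parameter model activate only a
    small fraction of its weights per forward pass.
    """
    logits = gate_w @ x                   # one router score per expert
    top = np.argsort(logits)[-top_k:]     # indices of the k highest scores
    weights = np.exp(logits[top])
    weights /= weights.sum()              # softmax over the selected experts
    # Combine only the selected experts' outputs, weighted by the router.
    return sum(w * (experts[i] @ x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
x = rng.standard_normal(d)
gate_w = rng.standard_normal((n_experts, d))
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]

y = moe_forward(x, gate_w, experts, top_k=2)
print(y.shape)  # (8,)
```

With 16 experts and top-2 routing, each token touches only 2/16 of the expert parameters, which is the toy-scale analogue of activating 17B out of 397B.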
On the agentic front, specifically on the BrowseComp benchmark, the model achieved a score of 78.6 using advanced context strategies, positioning it neck-and-neck with proprietary top-tier models.

https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3.5/Figures/qwen3.5_397b_a17b_infra.jpg

For enterprise users, the hosted Qwen3.5-Plus model on Alibaba Cloud Model Studio offers a 1-million-token context window by default and adaptive tool use. With high scores on STEM benchmarks like MMLU-Pro (87.8) and coding evaluations like SWE-bench Verified (76.4), Qwen 3.5 is set to empower developers to build smarter, faster, and more efficient AI applications.
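For developers targeting the hosted model, requests typically go through an OpenAI-compatible chat-completions payload. The sketch below only builds such a payload; the endpoint URL and the exact model identifier ("qwen3.5-plus") are assumptions based on the announcement, so check Model Studio's documentation before use.

```python
import json

# Assumed endpoint for Alibaba Cloud Model Studio's OpenAI-compatible mode;
# verify against the official docs before relying on it.
API_URL = "https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "qwen3.5-plus") -> str:
    """Build a JSON chat-completions payload (no network call is made)."""
    payload = {
        "model": model,  # hypothetical identifier for the hosted Qwen3.5-Plus
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)

req = build_chat_request("Summarize the key risks in this contract.")
print(json.loads(req)["model"])  # qwen3.5-plus
```

In practice you would POST this payload to the endpoint with your API key in the Authorization header; the large default context window is what makes single-request workloads like whole-document summarization feasible.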