Qwen: Qwen3.5-Flash
Overview
The Qwen3.5 native vision-language Flash models are built on a hybrid architecture that integrates a linear attention mechanism with a sparse mixture-of-experts model, achieving higher inference efficiency. Compared to the...
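The efficiency gain from linear attention comes from reordering the attention computation: instead of forming the full n×n score matrix, a positive feature map φ is applied to queries and keys so that attention can be computed as φ(Q)(φ(K)ᵀV), which is linear in sequence length. The sketch below is a generic illustration of that reordering, not Qwen3.5-Flash's actual kernel or feature map (the ELU+1 map here is an assumption borrowed from common linear-attention formulations):

```python
import math

def feature_map(v):
    # elu(x) + 1: a positive feature map commonly used in linear-attention
    # variants (an illustrative choice, not Qwen's actual kernel)
    return [x + 1.0 if x > 0 else math.exp(x) for x in v]

def linear_attention(Q, K, V):
    """Attention in O(n * d * d_v) instead of O(n^2 * d).

    Precompute sum_j phi(k_j) v_j^T and sum_j phi(k_j) once over all keys;
    each query then needs only one pass over those summaries rather than
    a score against every key.
    """
    d, d_v = len(K[0]), len(V[0])
    kv = [[0.0] * d_v for _ in range(d)]  # sum over keys of phi(k) outer v
    z = [0.0] * d                         # sum over keys of phi(k), for normalization
    for k_row, v_row in zip(K, V):
        pk = feature_map(k_row)
        for a in range(d):
            z[a] += pk[a]
            for b in range(d_v):
                kv[a][b] += pk[a] * v_row[b]
    out = []
    for q_row in Q:
        pq = feature_map(q_row)
        denom = sum(pq[a] * z[a] for a in range(d))
        out.append([sum(pq[a] * kv[a][b] for a in range(d)) / denom
                    for b in range(d_v)])
    return out
```

Because the normalizer is accumulated alongside the key summaries, this produces exactly the same output as quadratic attention with the same feature map, while the per-query cost no longer grows with the number of keys.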
Integrations & tooling support
- Tool calling: Not supported
- Structured outputs: Not supported
Price vs quality
Not enough data: this model has no benchmark scores recorded yet.
Community ratings
No ratings yet.