---
license: apache-2.0
pipeline_tag: image-text-to-text
library_name: transformers
---
<a href="https://huggingface.co/spaces/akhaliq/Qwen3-VL-2B-Instruct" target="_blank" style="margin: 2px;">
    <img alt="Demo" src="https://img.shields.io/badge/Demo-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>


# Qwen3-VL-2B-Instruct


Meet Qwen3-VL, the most powerful vision-language model in the Qwen series to date.

This generation delivers comprehensive upgrades across the board: superior text understanding & generation, deeper visual perception & reasoning, extended context length, enhanced spatial and video dynamics comprehension, and stronger agent interaction capabilities.

Available in Dense and MoE architectures that scale from edge to cloud, with Instruct and reasoning-enhanced Thinking editions for flexible, on-demand deployment.


#### Key Enhancements:

* **Visual Agent**: Operates PC/mobile GUIs; recognizes elements, understands functions, invokes tools, and completes tasks.

* **Visual Coding Boost**: Generates Draw.io diagrams and HTML/CSS/JS code from images and videos.

* **Advanced Spatial Perception**: Judges object positions, viewpoints, and occlusions; provides stronger 2D grounding and enables 3D grounding for spatial reasoning and embodied AI.

* **Long Context & Video Understanding**: Native 256K context, expandable to 1M; handles books and hours-long video with full recall and second-level indexing.

* **Enhanced Multimodal Reasoning**: Excels in STEM/Math, with causal analysis and logical, evidence-based answers.

* **Upgraded Visual Recognition**: Broader, higher-quality pretraining enables the model to “recognize everything”: celebrities, anime, products, landmarks, flora/fauna, etc.

* **Expanded OCR**: Supports 32 languages (up from 19); robust in low light, blur, and tilt; better with rare/ancient characters and jargon; improved long-document structure parsing.

* **Text Understanding on par with pure LLMs**: Seamless text–vision fusion for lossless, unified comprehension.


#### Model Architecture Updates:

<p align="center">
    <img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-VL/qwen3vl_arc.jpg" width="80%"/>
</p>


1. **Interleaved-MRoPE**: Full-frequency allocation over time, width, and height via robust positional embeddings, enhancing long-horizon video reasoning (see the sketch after this list).

2. **DeepStack**: Fuses multi-level ViT features to capture fine-grained details and sharpen image–text alignment.

3. **Text–Timestamp Alignment**: Moves beyond T-RoPE to precise, timestamp-grounded event localization for stronger video temporal modeling.

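To give a feel for the interleaving idea behind Interleaved-MRoPE, here is a purely illustrative sketch (our own simplification, not the actual Qwen3-VL modeling code): rotary frequency channels are assigned round-robin to the time, height, and width axes, so each axis spans both high and low frequencies instead of a contiguous band.

```python
# Illustrative sketch only -- NOT the actual Qwen3-VL implementation.
# Idea: instead of giving each axis (time/height/width) a contiguous band of
# rotary frequencies, interleave the channels so every axis covers the full
# frequency range.
import torch

def interleaved_mrope_angles(t: int, h: int, w: int, head_dim: int = 128, base: float = 10000.0) -> torch.Tensor:
    """Rotation angles of shape (head_dim // 2,) for a single (time, height, width) position."""
    half = head_dim // 2
    inv_freq = 1.0 / (base ** (torch.arange(half, dtype=torch.float32) / half))
    pos = torch.tensor([t, h, w], dtype=torch.float32)
    axis_ids = torch.arange(half) % 3    # channel i -> axis i % 3: t, h, w, t, h, w, ...
    return pos[axis_ids] * inv_freq      # each axis receives high- and low-frequency channels

# Example: angles for the patch at frame 4, row 2, column 7
print(interleaved_mrope_angles(4, 2, 7).shape)  # torch.Size([64])
```

This is only meant to convey the frequency-allocation idea; refer to the released modeling code for the real implementation.
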
This is the weight repository for Qwen3-VL-2B-Instruct.


---

## Model Performance

**Multimodal performance**

![](https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-VL/qwen3vl_2b_32b_vl_instruct.jpg)

**Pure text performance**
![](https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-VL/qwen3vl_2b_32b_text_instruct.jpg)

## Quickstart

Below, we provide simple examples showing how to use Qwen3-VL with 🤖 ModelScope and 🤗 Transformers.

The code for Qwen3-VL is included in the latest Hugging Face `transformers`, and we advise you to build from source with the following command:
```bash
pip install git+https://github.com/huggingface/transformers
# pip install transformers==4.57.0  # v4.57.0 is not released yet
```
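
If you are unsure whether your installed build already includes Qwen3-VL support, a quick import check (an optional snippet of our own, not required by the original instructions) will tell you:

```python
# Optional sanity check: verify that this transformers build ships the Qwen3-VL classes.
import transformers

print("transformers version:", transformers.__version__)
try:
    from transformers import Qwen3VLForConditionalGeneration  # noqa: F401
    print("Qwen3-VL support: available")
except ImportError:
    print("Qwen3-VL support: missing -- install from source as shown above")
```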

### Using 🤗 Transformers to Chat

Here we show a code snippet demonstrating how to use the chat model with `transformers`:

```python
import torch
from transformers import Qwen3VLForConditionalGeneration, AutoProcessor

# default: Load the model on the available device(s)
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen3-VL-2B-Instruct", dtype="auto", device_map="auto"
)

# We recommend enabling flash_attention_2 for better acceleration and memory saving,
# especially in multi-image and video scenarios.
# model = Qwen3VLForConditionalGeneration.from_pretrained(
#     "Qwen/Qwen3-VL-2B-Instruct",
#     dtype=torch.bfloat16,
#     attn_implementation="flash_attention_2",
#     device_map="auto",
# )

processor = AutoProcessor.from_pretrained("Qwen/Qwen3-VL-2B-Instruct")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Preparation for inference
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt"
)
inputs = inputs.to(model.device)

# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
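
The same chat template also accepts multiple images in a single turn; only the `messages` list changes. The sketch below uses placeholder file paths that you should replace with your own images or URLs:

```python
# Multi-image input: add more image entries to the same "content" list.
# The paths below are placeholders -- replace them with your own files or URLs.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "file:///path/to/image1.jpg"},
            {"type": "image", "image": "file:///path/to/image2.jpg"},
            {"type": "text", "text": "What are the differences between these two images?"},
        ],
    }
]
```

Everything else in the snippet above (template application, generation, decoding) stays the same.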

### Generation Hyperparameters
#### VL
```bash
export greedy='false'
export top_p=0.8
export top_k=20
export temperature=0.7
export repetition_penalty=1.0
export presence_penalty=1.5
export out_seq_length=16384
```

#### Text
```bash
export greedy='false'
export top_p=1.0
export top_k=40
export repetition_penalty=1.0
export presence_penalty=2.0
export temperature=1.0
export out_seq_length=32768
```
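
When calling `model.generate` directly with 🤗 Transformers, the VL settings above map onto the standard generation arguments roughly as follows. This is an illustrative mapping on our part, not an official configuration; `presence_penalty` has no direct `generate()` equivalent and is typically set in serving frameworks instead.

```python
# Rough Transformers equivalent of the VL sampling settings above (illustrative,
# not an official configuration). presence_penalty is omitted because
# model.generate() does not expose it.
generated_ids = model.generate(
    **inputs,
    do_sample=True,           # greedy='false'
    top_p=0.8,
    top_k=20,
    temperature=0.7,
    repetition_penalty=1.0,
    max_new_tokens=16384,     # out_seq_length
)
```

For serving stacks that do expose `presence_penalty` (e.g. vLLM or SGLang), use the values listed above directly.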


## Citation

If you find our work helpful, feel free to cite us.

```bibtex
@misc{qwen3technicalreport,
  title={Qwen3 Technical Report},
  author={Qwen Team},
  year={2025},
  eprint={2505.09388},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2505.09388},
}

@article{Qwen2.5-VL,
  title={Qwen2.5-VL Technical Report},
  author={Bai, Shuai and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Song, Sibo and Dang, Kai and Wang, Peng and Wang, Shijie and Tang, Jun and Zhong, Humen and Zhu, Yuanzhi and Yang, Mingkun and Li, Zhaohai and Wan, Jianqiang and Wang, Pengfei and Ding, Wei and Fu, Zheren and Xu, Yiheng and Ye, Jiabo and Zhang, Xi and Xie, Tianbao and Cheng, Zesen and Zhang, Hang and Yang, Zhibo and Xu, Haiyang and Lin, Junyang},
  journal={arXiv preprint arXiv:2502.13923},
  year={2025}
}

@article{Qwen2VL,
  title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution},
  author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang},
  journal={arXiv preprint arXiv:2409.12191},
  year={2024}
}

@article{Qwen-VL,
  title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond},
  author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
  journal={arXiv preprint arXiv:2308.12966},
  year={2023}
}
```