---
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
library_name: transformers
---

# Qwen2.5-VL-7B-Instruct
<a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>

## Introduction

In the five months since Qwen2-VL's release, numerous developers have built new models on top of the Qwen2-VL vision-language models, providing us with valuable feedback. During this period, we focused on building more useful vision-language models. Today, we are excited to introduce the latest addition to the Qwen family: Qwen2.5-VL.

#### Key Enhancements:
* **Understand things visually**: Qwen2.5-VL is not only proficient at recognizing common objects such as flowers, birds, fish, and insects, but is also highly capable of analyzing texts, charts, icons, graphics, and layouts within images.

* **Being agentic**: Qwen2.5-VL can directly act as a visual agent that reasons and dynamically directs tools, making it capable of computer use and phone use.

* **Understanding long videos and capturing events**: Qwen2.5-VL can comprehend videos of over 1 hour, and it now has the new ability to capture events by pinpointing the relevant video segments.

* **Capable of visual localization in different formats**: Qwen2.5-VL can accurately localize objects in an image by generating bounding boxes or points, and it can provide stable JSON outputs for coordinates and attributes.

* **Generating structured outputs**: for data such as scans of invoices, forms, and tables, Qwen2.5-VL supports structured outputs of their contents, benefiting usage in finance, commerce, and other fields.


#### Model Architecture Updates:

* **Dynamic Resolution and Frame Rate Training for Video Understanding**:

We extend dynamic resolution to the temporal dimension by adopting dynamic FPS sampling, enabling the model to comprehend videos at various sampling rates. Accordingly, we update mRoPE in the time dimension with IDs and absolute time alignment, enabling the model to learn temporal sequence and speed, and ultimately acquire the ability to pinpoint specific moments.

<p align="center">
    <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-VL/qwen2.5vl_arc.jpeg" width="80%"/>
</p>


* **Streamlined and Efficient Vision Encoder**

We enhance both training and inference speeds by strategically implementing window attention in the ViT. The ViT architecture is further optimized with SwiGLU and RMSNorm, aligning it with the structure of the Qwen2.5 LLM.


We have three models with 3, 7, and 72 billion parameters. This repo contains the instruction-tuned 7B Qwen2.5-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2.5-vl/) and [GitHub](https://github.com/QwenLM/Qwen2.5-VL).


## Evaluation

### Image benchmark

| Benchmark | InternVL2.5-8B | MiniCPM-o 2.6 | GPT-4o-mini | Qwen2-VL-7B | **Qwen2.5-VL-7B** |
| :--- | :---: | :---: | :---: | :---: | :---: |
| MMMU<sub>val</sub> | 56 | 50.4 | **60** | 54.1 | 58.6 |
| MMMU-Pro<sub>val</sub> | 34.3 | - | 37.6 | 30.5 | 41.0 |
| DocVQA<sub>test</sub> | 93 | 93 | - | 94.5 | **95.7** |
| InfoVQA<sub>test</sub> | 77.6 | - | - | 76.5 | **82.6** |
| ChartQA<sub>test</sub> | 84.8 | - | - | 83.0 | **87.3** |
| TextVQA<sub>val</sub> | 79.1 | 80.1 | - | 84.3 | **84.9** |
| OCRBench | 822 | 852 | 785 | 845 | **864** |
| CC_OCR | 57.7 | - | - | 61.6 | **77.8** |
| MMStar | 62.8 | - | - | 60.7 | **63.9** |
| MMBench-V1.1-En<sub>test</sub> | 79.4 | 78.0 | 76.0 | 80.7 | **82.6** |
| MMT-Bench<sub>test</sub> | - | - | - | **63.7** | 63.6 |
| MMStar | **61.5** | 57.5 | 54.8 | 60.7 | 63.9 |
| MMVet<sub>GPT-4-Turbo</sub> | 54.2 | 60.0 | 66.9 | 62.0 | **67.1** |
| HallBench<sub>avg</sub> | 45.2 | 48.1 | 46.1 | 50.6 | **52.9** |
| MathVista<sub>testmini</sub> | 58.3 | 60.6 | 52.4 | 58.2 | **68.2** |
| MathVision | - | - | - | 16.3 | **25.07** |

### Video Benchmarks

| Benchmark | Qwen2-VL-7B | **Qwen2.5-VL-7B** |
| :--- | :---: | :---: |
| MVBench | 67.0 | **69.6** |
| PerceptionTest<sub>test</sub> | 66.9 | **70.5** |
| Video-MME<sub>wo/w subs</sub> | 63.3/69.0 | **65.1**/**71.6** |
| LVBench | - | 45.3 |
| LongVideoBench | - | 54.7 |
| MMBench-Video | 1.44 | 1.79 |
| TempCompass | - | 71.7 |
| MLVU | - | 70.2 |
| CharadesSTA/mIoU | - | 43.6 |

### Agent benchmark
| Benchmarks | Qwen2.5-VL-7B |
|-------------------------|---------------|
| ScreenSpot | 84.7 |
| ScreenSpot Pro | 29.0 |
| AITZ_EM | 81.9 |
| Android Control High_EM | 60.1 |
| Android Control Low_EM | 93.7 |
| AndroidWorld_SR | 25.5 |
| MobileMiniWob++_SR | 91.4 |

## Requirements
The code of Qwen2.5-VL is in the latest Hugging Face `transformers`, and we advise you to build from source with the following command:
```
pip install git+https://github.com/huggingface/transformers accelerate
```
Otherwise you might encounter the following error:
```
KeyError: 'qwen2_5_vl'
```


## Quickstart

Below, we provide simple examples showing how to use Qwen2.5-VL with 🤖 ModelScope and 🤗 Transformers.

The code of Qwen2.5-VL is in the latest Hugging Face `transformers`, and we advise you to build from source with the command above; otherwise you might encounter the error `KeyError: 'qwen2_5_vl'`.


We offer a toolkit to help you handle various types of visual input more conveniently, as if you were using an API. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:

```bash
# It's highly recommended to use the `[decord]` feature for faster video loading.
pip install qwen-vl-utils[decord]==0.0.8
```

If you are not using Linux, you might not be able to install `decord` from PyPI. In that case, you can use `pip install qwen-vl-utils`, which will fall back to torchvision for video processing. However, you can still [install decord from source](https://github.com/dmlc/decord?tab=readme-ov-file#install-from-source) to have decord used when loading videos.

### Using 🤗 Transformers to Chat

Here we show a code snippet demonstrating how to use the chat model with `transformers` and `qwen_vl_utils`:

```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info

# default: Load the model on the available device(s)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)

# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
#     "Qwen/Qwen2.5-VL-7B-Instruct",
#     torch_dtype=torch.bfloat16,
#     attn_implementation="flash_attention_2",
#     device_map="auto",
# )

# default processor
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")

# The default range for the number of visual tokens per image in the model is 4-16384.
# You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
<details>
<summary>Multi image inference</summary>

```python
# Messages containing multiple images and a text query
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "file:///path/to/image1.jpg"},
            {"type": "image", "image": "file:///path/to/image2.jpg"},
            {"type": "text", "text": "Identify the similarities between these images."},
        ],
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>

<details>
<summary>Video inference</summary>

```python
# Messages containing an image list as a video and a text query
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "video": [
                    "file:///path/to/frame1.jpg",
                    "file:///path/to/frame2.jpg",
                    "file:///path/to/frame3.jpg",
                    "file:///path/to/frame4.jpg",
                ],
            },
            {"type": "text", "text": "Describe this video."},
        ],
    }
]

# Messages containing a local video path and a text query
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "video": "file:///path/to/video1.mp4",
                "max_pixels": 360 * 420,
                "fps": 1.0,
            },
            {"type": "text", "text": "Describe this video."},
        ],
    }
]

# Messages containing a video URL and a text query
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "video": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-VL/space_woaudio.mp4",
            },
            {"type": "text", "text": "Describe this video."},
        ],
    }
]

# In Qwen 2.5 VL, frame rate information is also input into the model to align with absolute time.
# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
    **video_kwargs,  # carries the sampled fps so the model can align frames with absolute time
)
inputs = inputs.to("cuda")

# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```

Video URL compatibility largely depends on the third-party library version. The details are in the table below. Change the backend with `FORCE_QWENVL_VIDEO_READER=torchvision` or `FORCE_QWENVL_VIDEO_READER=decord` if you prefer not to use the default one.

| Backend | HTTP | HTTPS |
|-------------|------|-------|
| torchvision >= 0.19.0 | ✅ | ✅ |
| torchvision < 0.19.0 | ❌ | ❌ |
| decord | ✅ | ❌ |
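
For example, to force a particular backend when running your own inference script (the script name `video_demo.py` below is just a placeholder), set the environment variable on the command line:

```bash
# Make qwen-vl-utils read videos with torchvision instead of the default backend
FORCE_QWENVL_VIDEO_READER=torchvision python video_demo.py
```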
</details>

<details>
<summary>Batch inference</summary>

```python
# Sample messages for batch inference
messages1 = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "file:///path/to/image1.jpg"},
            {"type": "image", "image": "file:///path/to/image2.jpg"},
            {"type": "text", "text": "What are the common elements in these pictures?"},
        ],
    }
]
messages2 = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
]
# Combine messages for batch processing
messages = [messages1, messages2]

# Preparation for batch inference
texts = [
    processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
    for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=texts,
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Batch Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_texts)
```
</details>

### 🤖 ModelScope
We strongly advise users, especially those in mainland China, to use ModelScope. `snapshot_download` can help you solve issues concerning downloading checkpoints.

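As a minimal sketch (assuming the `modelscope` package is installed, e.g. via `pip install modelscope`), you can download the checkpoint locally and then point `from_pretrained` at the downloaded directory:

```python
from modelscope import snapshot_download
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor

# Download the checkpoint from ModelScope instead of the Hugging Face Hub
model_dir = snapshot_download("Qwen/Qwen2.5-VL-7B-Instruct")

# Load the model and processor from the local directory
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_dir, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_dir)
```
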
### More Usage Tips

For input images, we support local files, base64, and URLs. For videos, we currently only support local files.

```python
# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.
## Local file path
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "file:///path/to/your/image.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
## Image URL
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "http://path/to/your/image.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
## Base64 encoded image
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "data:image;base64,/9j/..."},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
```
#### Image Resolution for performance boost

The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.

```python
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
)
```

Besides, we provide two methods for fine-grained control over the image size input to the model:

1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.

2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28.

```python
# resized_height and resized_width
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "file:///path/to/your/image.jpg",
                "resized_height": 280,
                "resized_width": 420,
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
# min_pixels and max_pixels
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "file:///path/to/your/image.jpg",
                "min_pixels": 50176,
                "max_pixels": 50176,
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
```

### Processing Long Texts

The current `config.json` is set for a context length of up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.

For supported frameworks, you could add the following to `config.json` to enable YaRN:

    {
        ...,
        "type": "yarn",
        "mrope_section": [
            16,
            24,
            24
        ],
        "factor": 4,
        "original_max_position_embeddings": 32768
    }

However, it should be noted that this method has a significant impact on the performance of temporal and spatial localization tasks, and is therefore not recommended.

At the same time, for long video inputs, since mRoPE itself is more economical with position IDs, `max_position_embeddings` can be directly modified to a larger value, such as 64k.

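As a minimal sketch (reading 64k as 65536 tokens; choose the value that fits your inputs), this amounts to editing the corresponding field in `config.json`:

    {
        ...,
        "max_position_embeddings": 65536
    }
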
## Citation

If you find our work helpful, feel free to cite us.

```
@misc{qwen2.5-VL,
    title = {Qwen2.5-VL},
    url = {https://qwenlm.github.io/blog/qwen2.5-vl/},
    author = {Qwen Team},
    month = {January},
    year = {2025}
}

@article{Qwen2VL,
    title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution},
    author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang},
    journal={arXiv preprint arXiv:2409.12191},
    year={2024}
}

@article{Qwen-VL,
    title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond},
    author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
    journal={arXiv preprint arXiv:2308.12966},
    year={2023}
}
```