---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3.5-vision-instruct/resolve/main/LICENSE
language:
- multilingual
pipeline_tag: image-text-to-text
tags:
- nlp
- code
- vision
inference:
  parameters:
    temperature: 0.7
widget:
- messages:
  - role: user
    content: <|image_1|>Can you describe what you see in the image?
library_name: transformers
---
## Model Summary

Phi-3.5-vision is a lightweight, state-of-the-art open multimodal model built upon datasets that include synthetic data and filtered publicly available websites, with a focus on very high-quality, reasoning-dense data in both text and vision. The model belongs to the Phi-3 model family, and this multimodal version supports a context length of 128K tokens. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures.

🏡 [Phi-3 Portal](https://azure.microsoft.com/en-us/products/phi-3) <br>
📰 [Phi-3 Microsoft Blog](https://aka.ms/phi3.5-techblog) <br>
📖 [Phi-3 Technical Report](https://arxiv.org/abs/2404.14219) <br>
👩‍🍳 [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) <br>
🖥️ [Try It](https://aka.ms/try-phi3.5vision) <br>

**Phi-3.5**: [[mini-instruct]](https://huggingface.co/microsoft/Phi-3.5-mini-instruct); [[MoE-instruct]](https://huggingface.co/microsoft/Phi-3.5-MoE-instruct); [[vision-instruct]](https://huggingface.co/microsoft/Phi-3.5-vision-instruct)

## Intended Uses

### Primary Use Cases

The model is intended for broad commercial and research use in English. It is designed for general-purpose AI systems and applications with visual and text input capabilities that require:

1) Memory/compute-constrained environments
2) Latency-bound scenarios
3) General image understanding
4) Optical character recognition
5) Chart and table understanding
6) Multiple image comparison
7) Multi-image or video clip summarization

Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI-powered features.

### Use Case Considerations

Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using the model within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.

***Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.***

## Release Notes

In this release, the model enables multi-frame image understanding and reasoning, based on valuable customer feedback. The headline multi-frame capabilities include detailed image comparison, multi-image summarization/storytelling, and video summarization, which have broad applications in Office scenarios. We also observed performance improvements on most single-image benchmarks: for example, MMMU rose from 40.2 to 43.0, MMBench from 80.5 to 81.9, and the document understanding benchmark TextVQA from 70.9 to 72.0. We believe most use cases will benefit from this release, but we encourage users to test the new model in their AI applications. We appreciate the enthusiastic adoption of the Phi-3 model family and continue to welcome all feedback from the community.

Below are comparison results on existing multi-image benchmarks. On average, our model outperforms competitor models of the same size and is competitive with much bigger models on multi-frame capabilities and video summarization.

**BLINK**: a benchmark with 14 visual tasks that humans can solve very quickly but that remain hard for current multimodal LLMs.

| Benchmark | Phi-3.5-vision-instruct | LLaVA-Interleave-Qwen-7B | InternVL-2-4B | InternVL-2-8B | Gemini-1.5-Flash | GPT-4o-mini | Claude-3.5-Sonnet | Gemini-1.5-Pro | GPT-4o |
|--|--|--|--|--|--|--|--|--|--|
| Art Style | 87.2 | 62.4 | 55.6 | 52.1 | 64.1 | 70.1 | 59.8 | 70.9 | 73.3 |
| Counting | 54.2 | 56.7 | 54.2 | 66.7 | 51.7 | 55.0 | 59.2 | 65.0 | 65.0 |
| Forensic Detection | 92.4 | 31.1 | 40.9 | 34.1 | 54.5 | 38.6 | 67.4 | 60.6 | 75.8 |
| Functional Correspondence | 29.2 | 34.6 | 24.6 | 24.6 | 33.1 | 26.9 | 33.8 | 31.5 | 43.8 |
| IQ Test | 25.3 | 26.7 | 26.0 | 30.7 | 25.3 | 29.3 | 26.0 | 34.0 | 19.3 |
| Jigsaw | 68.0 | 86.0 | 55.3 | 52.7 | 71.3 | 72.7 | 57.3 | 68.0 | 67.3 |
| Multi-View Reasoning | 54.1 | 44.4 | 48.9 | 42.9 | 48.9 | 48.1 | 55.6 | 49.6 | 46.6 |
| Object Localization | 49.2 | 54.9 | 53.3 | 54.1 | 44.3 | 57.4 | 62.3 | 65.6 | 68.0 |
| Relative Depth | 69.4 | 77.4 | 63.7 | 67.7 | 57.3 | 58.1 | 71.8 | 76.6 | 71.0 |
| Relative Reflectance | 37.3 | 34.3 | 32.8 | 38.8 | 32.8 | 27.6 | 36.6 | 38.8 | 40.3 |
| Semantic Correspondence | 36.7 | 31.7 | 31.7 | 22.3 | 32.4 | 31.7 | 45.3 | 48.9 | 54.0 |
| Spatial Relation | 65.7 | 75.5 | 78.3 | 78.3 | 55.9 | 81.1 | 60.1 | 79.0 | 84.6 |
| Visual Correspondence | 53.5 | 40.7 | 34.9 | 33.1 | 29.7 | 52.9 | 72.1 | 81.4 | 86.0 |
| Visual Similarity | 83.0 | 91.9 | 48.1 | 45.2 | 47.4 | 77.8 | 84.4 | 81.5 | 88.1 |
| **Overall** | **57.0** | **53.1** | **45.9** | **45.4** | **45.8** | **51.9** | **56.5** | **61.0** | **63.2** |

**Video-MME**: a benchmark that comprehensively assesses the capabilities of MLLMs in processing video data, covering a wide range of visual domains, temporal durations, and data modalities.

| Benchmark | Phi-3.5-vision-instruct | LLaVA-Interleave-Qwen-7B | InternVL-2-4B | InternVL-2-8B | Gemini-1.5-Flash | GPT-4o-mini | Claude-3.5-Sonnet | Gemini-1.5-Pro | GPT-4o |
|--|--|--|--|--|--|--|--|--|--|
| short (<2min) | 60.8 | 62.3 | 60.7 | 61.7 | 72.2 | 70.1 | 66.3 | 73.3 | 77.7 |
| medium (4-15min) | 47.7 | 47.1 | 46.4 | 49.6 | 62.7 | 59.6 | 54.7 | 61.2 | 68.0 |
| long (30-60min) | 43.8 | 41.2 | 42.6 | 46.6 | 52.1 | 53.9 | 46.6 | 53.2 | 59.6 |
| **Overall** | **50.8** | **50.2** | **49.9** | **52.6** | **62.3** | **61.2** | **55.9** | **62.6** | **68.4** |

## Usage

### Requirements

The current `transformers` version can be verified with: `pip list | grep transformers`.

Examples of required packages:
```
flash_attn==2.5.8
numpy==1.24.4
Pillow==10.3.0
Requests==2.31.0
torch==2.3.0
torchvision==0.18.0
transformers==4.43.0
accelerate==0.30.0
```
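
As a quick sanity check, installed versions can also be confirmed from Python; a minimal sketch against the tested versions above:

```python
import torch
import transformers

# Compare against the tested package versions listed above.
print("transformers:", transformers.__version__)  # tested with 4.43.0
print("torch:", torch.__version__)                # tested with 2.3.0
```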

Phi-3.5-vision-instruct is also available in [Azure AI Studio](https://aka.ms/try-phi3.5vision).

### Input Formats
Given the nature of the training data, the Phi-3.5-vision model is best suited for prompts using the chat format as follows:

Single image:
```
<|user|>\n<|image_1|>\n{prompt}<|end|>\n<|assistant|>\n
```

Multi-turn conversations:
```
<|user|>\n<|image_1|>\n{prompt_1}<|end|>\n<|assistant|>\n{response_1}<|end|>\n<|user|>\n{prompt_2}<|end|>\n<|assistant|>\n
```

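In practice these strings do not need to be built by hand: the processor's chat template produces this format from a standard messages list, as in the full example further below. A minimal sketch, where the prompts and the assistant response are placeholder examples:

```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained(
    "microsoft/Phi-3.5-vision-instruct", trust_remote_code=True
)

# A two-turn conversation about one image; the strings are placeholder examples.
messages = [
    {"role": "user", "content": "<|image_1|>\nWhat is shown in this image?"},
    {"role": "assistant", "content": "A bar chart of quarterly revenue."},
    {"role": "user", "content": "Which quarter is the highest?"},
]

prompt = processor.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # should match the multi-turn format shown above
```
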
For multi-image usage, add multiple image placeholders at the front of the prompt. The `<|image_{}|>` index should start from 1. An example prompt is shown below:
```
<|user|>\n<|image_1|>\n<|image_2|>\n<|image_3|>\n<|image_4|>\n{prompt}<|end|>\n<|assistant|>\n
```

### Loading the model locally
After obtaining the Phi-3.5-vision-instruct model checkpoints, users can run inference with the following sample code.

```python
from PIL import Image
import requests
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Phi-3.5-vision-instruct"

# Note: set _attn_implementation='eager' if you don't have flash_attn installed
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    trust_remote_code=True,
    torch_dtype="auto",
    _attn_implementation='flash_attention_2'
)

# For best performance, use num_crops=4 for multi-frame and num_crops=16 for single-frame.
processor = AutoProcessor.from_pretrained(
    model_id,
    trust_remote_code=True,
    num_crops=4
)

images = []
placeholder = ""

# Note: if you run out of memory, consider reducing the number of frames in this example.
for i in range(1, 20):
    url = f"https://image.slidesharecdn.com/azureintroduction-191206101932/75/Introduction-to-Microsoft-Azure-Cloud-{i}-2048.jpg"
    images.append(Image.open(requests.get(url, stream=True).raw))
    placeholder += f"<|image_{i}|>\n"

messages = [
    {"role": "user", "content": placeholder + "Summarize the deck of slides."},
]

prompt = processor.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

inputs = processor(prompt, images, return_tensors="pt").to("cuda:0")

generation_args = {
    "max_new_tokens": 1000,
    "temperature": 0.0,
    "do_sample": False,
}

generate_ids = model.generate(
    **inputs,
    eos_token_id=processor.tokenizer.eos_token_id,
    **generation_args
)

# Remove the input tokens from the generated sequence before decoding.
generate_ids = generate_ids[:, inputs['input_ids'].shape[1]:]
response = processor.batch_decode(
    generate_ids,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)[0]

print(response)
```

Notes:
+ To achieve the best performance, we suggest setting `num_crops=4` for multi-frame and `num_crops=16` for single-frame.
+ To turn off flash attention, set `_attn_implementation='eager'` when loading the model.
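
For single-frame inputs, the same pipeline applies with `num_crops=16`. A minimal sketch; the image URL here is an illustrative placeholder:

```python
from PIL import Image
import requests
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Phi-3.5-vision-instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    trust_remote_code=True,
    torch_dtype="auto",
    _attn_implementation='flash_attention_2'  # or 'eager' without flash_attn
)

# num_crops=16 is the suggested setting for single-frame inputs.
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True, num_crops=16)

# Placeholder URL; substitute your own image.
url = "https://example.com/chart.png"
image = Image.open(requests.get(url, stream=True).raw)

messages = [{"role": "user", "content": "<|image_1|>\nDescribe this image."}]
prompt = processor.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = processor(prompt, [image], return_tensors="pt").to("cuda:0")
generate_ids = model.generate(
    **inputs, max_new_tokens=500, eos_token_id=processor.tokenizer.eos_token_id
)

# Strip the prompt tokens before decoding.
generate_ids = generate_ids[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(generate_ids, skip_special_tokens=True)[0])
```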

## Responsible AI Considerations

Like other models, the Phi family of models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
* Quality of Service: The Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
* Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or the prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
* Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make them inappropriate to deploy in sensitive contexts without additional mitigations that are specific to the use case.
* Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
* Limited Scope for Code: The majority of Phi-3 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.

Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g., privacy, trade, etc.). Important areas for consideration include:

* Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (e.g., housing, employment, credit) without further assessments and additional debiasing techniques.
* High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable, or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (e.g., legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
* Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end users that they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case-specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
* Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
* Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
* Identification of Individuals: Models with vision capabilities may have the potential to uniquely identify individuals in images. Safety post-training steers the model to refuse such requests, but developers should consider and implement, as appropriate, additional mitigations or user consent flows as required in their respective jurisdiction (e.g., building measures to blur faces in image inputs before processing).

## Training

### Models

**Architecture:** Phi-3.5-vision has 4.2B parameters and contains an image encoder, a connector, a projector, and the Phi-3 Mini language model.<br>
**Inputs:** Text and image. It’s best suited for prompts using the chat format.<br>
**Context length:** 128K tokens<br>
**GPUs:** 256 A100-80G<br>
**Training time:** 6 days<br>
**Training data:** 500B tokens (vision tokens + text tokens)<br>
**Outputs:** Generated text in response to the input<br>
**Dates:** Trained between July and August 2024<br>
**Status:** This is a static model trained on an offline text dataset with a cutoff date of March 15, 2024. Future versions of the tuned models may be released as we improve the models.<br>
**Release date:** August 2024<br>

### Data Overview

Our training data includes a wide variety of sources, and is a combination of
1) publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) selected high-quality image-text interleave data;
3) newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, and general knowledge of the world (science, daily activities, theory of mind, etc.); newly created image data, e.g., charts/tables/diagrams/slides; and newly created multi-image and video data, e.g., short video clips and pairs of two similar images;
4) high-quality chat-format supervised data covering various topics to reflect human preferences on different aspects such as instruction-following, truthfulness, honesty, and helpfulness.

The data collection process involved sourcing information from publicly available documents, with a meticulous approach to filtering out undesirable documents and images. To safeguard privacy, we carefully filtered various image and text data sources to remove or scrub any potentially personal data from the training data. More details about the data can be found in the [Phi-3 Technical Report](https://arxiv.org/pdf/2404.14219).

### How to finetune?
We recommend users take a look at the [Phi-3 CookBook finetuning recipe for Vision](https://github.com/microsoft/Phi-3CookBook/blob/main/md/04.Fine-tuning/FineTuning_Vision.md).

## Benchmarks

To understand the capabilities, we compare Phi-3.5-vision with a set of models over a variety of zero-shot benchmarks using our internal benchmark platform. The table below gives a high-level overview of the model quality on representative benchmarks:

| Category | Benchmark | Phi-3.5-vision-instruct | InternVL-2-4B | InternVL-2-8B | Gemini-1.5-Flash | GPT-4o-mini 2024-7-18 | Claude-3.5-Sonnet | Gemini-1.5-Pro | GPT-4o 2024-5-13 |
|--|--|--|--|--|--|--|--|--|--|
| Popular aggregated benchmark | MMMU (val) | 43.0 | 44.22 | 46.33 | 49.33 | 52.1 | 52.67 | 54.11 | 61.78 |
| | MMBench (dev-en) | 81.9 | 83.4 | 87.0 | 85.7 | 83.8 | 82.3 | 87.9 | 88.4 |
| Visual scientific knowledge reasoning | ScienceQA (img-test) | 91.3 | 94.9 | 95.9 | 84.5 | 84.0 | 73.8 | 86.0 | 88.5 |
| Visual math reasoning | MathVista (testmini) | 43.9 | 53.7 | 51.1 | 55.3 | 38.8 | 54.0 | 57.4 | 54.4 |
| | InterGPS (test) | 36.3 | 45.6 | 53.2 | 39.4 | 39.9 | 45.6 | 58.2 | 46.9 |
| Chart reasoning | AI2D (test) | 78.1 | 77.3 | 81.4 | 78.4 | 75.2 | 68.9 | 75.6 | 82.8 |
| | ChartQA (test) | 81.8 | 78.8 | 80.4 | 57.6 | 54.5 | 73.2 | 68.2 | 64.0 |
| Document Intelligence | TextVQA (val) | 72.0 | 66.2 | 68.8 | 67.4 | 70.9 | 70.5 | 64.5 | 75.6 |
| Object visual presence verification | POPE (test) | 86.1 | 83.3 | 84.2 | 86.1 | 83.6 | 76.6 | 89.3 | 87.0 |

## Safety Evaluation and Red-Teaming

**Approach**

The Phi-3 family of models has adopted a robust safety post-training approach. This approach leverages a variety of both open-source and in-house generated datasets. The overall technique employed for safety alignment is a combination of SFT (Supervised Fine-Tuning) and RLHF (Reinforcement Learning from Human Feedback) approaches, utilizing human-labeled and synthetic English-language datasets, including publicly available datasets focusing on helpfulness and harmlessness as well as various questions and answers targeted to multiple safety categories.

**Safety Evaluation**

We leveraged various evaluation techniques, including red teaming, adversarial conversation simulations, and safety evaluation benchmark datasets, to evaluate the Phi-3.5 models' propensity to produce undesirable outputs across multiple risk categories. Several approaches were used to compensate for the limitations of any one approach alone. Please refer to the [technical report](https://arxiv.org/pdf/2404.14219) for more details of our safety alignment.

## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)

## Hardware
Note that by default, the Phi-3.5-vision-instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
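
On GPU types where flash attention is not supported, the model can still be run by switching to eager attention, as noted in the usage section above. A minimal sketch:

```python
from transformers import AutoModelForCausalLM

# Fallback for GPUs without flash_attn support: use eager attention.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3.5-vision-instruct",
    device_map="cuda",
    trust_remote_code=True,
    torch_dtype="auto",
    _attn_implementation="eager",  # instead of 'flash_attention_2'
)
```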

## License
The model is licensed under the [MIT license](./LICENSE).

## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.

## Data Summary
See the [data summary card](https://huggingface.co/microsoft/Phi-3.5-vision-instruct/blob/main/data_summary_card.md).