---
language:
- en
datasets:
- liuhaotian/LLaVA-Instruct-150K
pipeline_tag: image-text-to-text
arxiv: 2304.08485
license: llama2
tags:
- vision
- image-text-to-text
---
# LLaVA Model Card

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62441d1d9fdefb55a0b7d12c/FPshq08TKYD0e-qwPLDVO.png)

Below is the model card of the LLaVA 7B model, copied from the original LLaVA model card that you can find [here](https://huggingface.co/liuhaotian/llava-v1.5-13b).

Also check out the Google Colab demo to run LLaVA on a free-tier Google Colab instance: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1qsl6cd2c8gGtEW1xV5io7S8NHh-Cp1TV?usp=sharing)

Or check out our Spaces demo! [![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-in-hf-spaces-md-dark.svg)](https://huggingface.co/spaces/llava-hf/llava-4bit)

## Model details

**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.

**Model date:**
LLaVA-v1.5-7B was trained in September 2023.

**Paper or resources for more information:**
https://llava-vl.github.io/

## How to use the model

First, make sure to have `transformers >= 4.35.3`.
The model supports multi-image and multi-prompt generation, meaning that you can pass multiple images in your prompt. Make sure to follow the correct prompt template (`USER: xxx\nASSISTANT:`) and to add the token `<image>` at the location where you want to query images:

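Written out as a raw string, a single-turn, single-image prompt following that template looks like this (the question text is just an illustrative placeholder):

```python
# Raw LLaVA-1.5 prompt: the <image> token marks where the image is inserted.
prompt = "USER: <image>\nWhat is shown in this image?\nASSISTANT:"
```
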
### Using `pipeline`:

Below we use the [`"llava-hf/llava-1.5-7b-hf"`](https://huggingface.co/llava-hf/llava-1.5-7b-hf) checkpoint.

```python
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="llava-hf/llava-1.5-7b-hf")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"},
            {"type": "text", "text": "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud"},
        ],
    },
]

out = pipe(text=messages, max_new_tokens=20)
print(out)
>>> [{'input_text': [{'role': 'user', 'content': [{'type': 'image', 'url': 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg'}, {'type': 'text', 'text': 'What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud'}]}], 'generated_text': 'Lava'}]
```
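
Because multi-image prompts are supported, you can also put several images in one user turn. A minimal sketch, reusing the demo image above together with the stop-sign image that appears later in this card (the question is our illustrative placeholder):

```python
# Two images in a single user turn; the pipeline loads both URLs for you.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"},
            {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"},
            {"type": "text", "text": "What do these two images have in common?"},
        ],
    },
]

out = pipe(text=messages, max_new_tokens=50)
print(out[0]["generated_text"])
```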

### Using pure `transformers`:

Below is an example script to run generation in `float16` precision on a GPU device:

```python
import requests
from PIL import Image

import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
).to(0)

processor = AutoProcessor.from_pretrained(model_id)

# Define a chat history and use `apply_chat_template` to get the correctly formatted prompt
# Each value in "content" has to be a list of dicts with types ("text", "image")
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What are these?"},
            {"type": "image"},
        ],
    },
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"
raw_image = Image.open(requests.get(image_file, stream=True).raw)
inputs = processor(images=raw_image, text=prompt, return_tensors="pt").to(0, torch.float16)

output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.decode(output[0][2:], skip_special_tokens=True))
```
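
Note that decoding `output[0][2:]` echoes most of the prompt back along with the answer. If you only want the newly generated tokens, a common pattern (our addition, not from the original card) is to slice off the prompt length first:

```python
# Keep only the tokens generated after the prompt.
prompt_len = inputs["input_ids"].shape[-1]
print(processor.decode(output[0][prompt_len:], skip_special_tokens=True))
```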

-----------
From `transformers>=4.48`, you can also pass an image URL or a local path in the conversation history and let the chat template handle the rest.
The chat template will load the image for you and return the inputs as `torch.Tensor`s, which you can pass directly to `model.generate()`:

```python
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"},
            {"type": "text", "text": "What is shown in this image?"},
        ],
    },
]

inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt").to(model.device, torch.float16)
output = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(output[0], skip_special_tokens=True))
```
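
The prose above also mentions local paths, but the snippet only shows a URL. A sketch of the local-path variant; the `path` key and the filename are assumptions on our part, so check your `transformers` version's chat template docs if it errors:

```python
# Assumed: recent chat templates accept a "path" key for local files.
# "./my_image.jpg" is a hypothetical filename, not part of the original card.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "path": "./my_image.jpg"},
            {"type": "text", "text": "What is shown in this image?"},
        ],
    },
]
```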

### Model optimization

#### 4-bit quantization through `bitsandbytes` library

First make sure to install `bitsandbytes` (`pip install bitsandbytes`) and that you have access to a CUDA-compatible GPU device. Then simply change the snippet above as follows:

```diff
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
+   load_in_4bit=True
)
```
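
On recent `transformers` versions, the usual way to express this is a `BitsAndBytesConfig` rather than the bare `load_in_4bit` flag. A minimal sketch; the NF4 quantization type and `float16` compute dtype are common choices we assume here, not settings from the original card:

```python
import torch
from transformers import BitsAndBytesConfig, LlavaForConditionalGeneration

# 4-bit quantization config; compute in float16 to match the snippets above.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    low_cpu_mem_usage=True,
)  # no .to(0): bitsandbytes places the quantized weights on the GPU itself
```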

#### Use Flash-Attention 2 to further speed-up generation

First make sure to install `flash-attn`; refer to the [original repository of Flash Attention](https://github.com/Dao-AILab/flash-attention) for installation instructions. Then simply change the snippet above as follows:

```diff
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
+   use_flash_attention_2=True
).to(0)
```
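
On recent `transformers` versions, `use_flash_attention_2` is deprecated in favor of the `attn_implementation` argument; an equivalent snippet:

```python
# Same model load as above, using the newer attn_implementation argument.
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    attn_implementation="flash_attention_2",
).to(0)
```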

## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.