---
library_name: transformers
license: apache-2.0
license_link: https://ai.google.dev/gemma/docs/gemma_4_license
pipeline_tag: any-to-any
base_model:
- google/gemma-4-E4B
---

<div align="center">
<img src="https://ai.google.dev/gemma/images/gemma4_banner.png">
</div>


<p align="center">
<a href="https://huggingface.co/collections/google/gemma-4" target="_blank">Hugging Face</a> |
<a href="https://github.com/google-gemma" target="_blank">GitHub</a> |
<a href="https://blog.google/innovation-and-ai/technology/developers-tools/gemma-4/" target="_blank">Launch Blog</a> |
<a href="https://ai.google.dev/gemma/docs/core" target="_blank">Documentation</a>
<br>
<b>License</b>: <a href="https://ai.google.dev/gemma/docs/gemma_4_license" target="_blank">Apache 2.0</a> | <b>Authors</b>: <a href="https://deepmind.google/models/gemma/" target="_blank">Google DeepMind</a>
</p>

Gemma is a family of open models built by Google DeepMind. Gemma 4 models are multimodal, handling text and image input (with audio input supported on the smaller models) and generating text output. This release includes open-weights models in both pre-trained and instruction-tuned variants. Gemma 4 features a context window of up to 256K tokens and maintains multilingual support in over 140 languages.

Featuring both Dense and Mixture-of-Experts (MoE) architectures, Gemma 4 is well-suited for tasks like text generation, coding, and reasoning. The models are available in four distinct sizes: **E2B**, **E4B**, **26B A4B**, and **31B**. Their diverse sizes make them deployable in environments ranging from high-end phones to laptops and servers, democratizing access to state-of-the-art AI.

Gemma 4 introduces key **capability and architectural advancements**:

* **Reasoning** – All models in the family are designed as highly capable reasoners, with configurable thinking modes.

* **Extended Multimodality** – Processes text; images with variable aspect ratio and resolution support (all models); video; and audio (supported natively on the E2B and E4B models).

* **Diverse & Efficient Architectures** – Offers Dense and Mixture-of-Experts (MoE) variants of different sizes for scalable deployment.

* **Optimized for On-Device** – Smaller models are specifically designed for efficient local execution on laptops and mobile devices.

* **Increased Context Window** – The small models feature a 128K context window, while the medium models support 256K.

* **Enhanced Coding & Agentic Capabilities** – Achieves notable improvements on coding benchmarks alongside native function-calling support, powering highly capable autonomous agents.

* **Native System Prompt Support** – Gemma 4 introduces native support for the `system` role, enabling more structured and controllable conversations.

## **Models Overview**

Gemma 4 models are designed to deliver frontier-level performance at each size, targeting deployment scenarios from mobile and edge devices (E2B, E4B) to consumer GPUs and workstations (26B A4B, 31B). They are well-suited for reasoning, agentic workflows, coding, and multimodal understanding.

The models employ a hybrid attention mechanism that interleaves local sliding window attention with full global attention, ensuring the final layer is always global. This hybrid design delivers the processing speed and low memory footprint of a lightweight model without sacrificing the deep awareness required for complex, long-context tasks. To optimize memory for long contexts, global layers feature unified Keys and Values, and apply Proportional RoPE (p-RoPE).

### Dense Models

| Property | E2B | E4B | 31B Dense |
| :---- | :---- | :---- | :---- |
| **Total Parameters** | 2.3B effective (5.1B with embeddings) | 4.5B effective (8B with embeddings) | 30.7B |
| **Layers** | 35 | 42 | 60 |
| **Sliding Window** | 512 tokens | 512 tokens | 1024 tokens |
| **Context Length** | 128K tokens | 128K tokens | 256K tokens |
| **Vocabulary Size** | 262K | 262K | 262K |
| **Supported Modalities** | Text, Image, Audio | Text, Image, Audio | Text, Image |
| **Vision Encoder Parameters** | *~150M* | *~150M* | *~550M* |
| **Audio Encoder Parameters** | *~300M* | *~300M* | No Audio |

The "E" in E2B and E4B stands for "effective" parameters. The smaller models incorporate Per-Layer Embeddings (PLE) to maximize parameter efficiency in on-device deployments. Rather than adding more layers or parameters to the model, PLE gives each decoder layer its own small embedding for every token. These embedding tables are large but are only used for quick lookups, which is why the effective parameter count is much smaller than the total.

### Mixture-of-Experts (MoE) Model

| Property | 26B A4B MoE |
| :---- | :---- |
| **Total Parameters** | 25.2B |
| **Active Parameters** | 3.8B |
| **Layers** | 30 |
| **Sliding Window** | 1024 tokens |
| **Context Length** | 256K tokens |
| **Vocabulary Size** | 262K |
| **Expert Count** | 8 active / 128 total, plus 1 shared |
| **Supported Modalities** | Text, Image |
| **Vision Encoder Parameters** | *~550M* |

The "A" in 26B A4B stands for "active parameters," in contrast to the total number of parameters the model contains. Because only a 4B subset of parameters is activated during inference, the Mixture-of-Experts model runs much faster than its 26B total might suggest. This makes it an excellent choice when fast inference matters: compared to the dense 31B model, it runs almost as fast as a 4B-parameter model.

## **Benchmark Results**

These models were evaluated against a broad collection of datasets and metrics covering different aspects of model capability. Evaluation results shown in the table are for the instruction-tuned models.

| Benchmark | Gemma 4 31B | Gemma 4 26B A4B | Gemma 4 E4B | Gemma 4 E2B | Gemma 3 27B (no think) |
| :---- | :---- | :---- | :---- | :---- | :---- |
| MMLU Pro | 85.2% | 82.6% | 69.4% | 60.0% | 67.6% |
| AIME 2026 no tools | 89.2% | 88.3% | 42.5% | 37.5% | 20.8% |
| LiveCodeBench v6 | 80.0% | 77.1% | 52.0% | 44.0% | 29.1% |
| Codeforces ELO | 2150 | 1718 | 940 | 633 | 110 |
| GPQA Diamond | 84.3% | 82.3% | 58.6% | 43.4% | 42.4% |
| Tau2 (average over 3) | 76.9% | 68.2% | 42.2% | 24.5% | 16.2% |
| HLE no tools | 19.5% | 8.7% | - | - | - |
| HLE with search | 26.5% | 17.2% | - | - | - |
| BigBench Extra Hard | 74.4% | 64.8% | 33.1% | 21.9% | 19.3% |
| MMMLU | 88.4% | 86.3% | 76.6% | 67.4% | 70.7% |
| **Vision** | | | | | |
| MMMU Pro | 76.9% | 73.8% | 52.6% | 44.2% | 49.7% |
| OmniDocBench 1.5 (average edit distance, lower is better) | 0.131 | 0.149 | 0.181 | 0.290 | 0.365 |
| MATH-Vision | 85.6% | 82.4% | 59.5% | 52.4% | 46.0% |
| MedXPertQA MM | 61.3% | 58.1% | 28.7% | 23.5% | - |
| **Audio** | | | | | |
| CoVoST | - | - | 35.54 | 33.47 | - |
| FLEURS (lower is better) | - | - | 0.08 | 0.09 | - |
| **Long Context** | | | | | |
| MRCR v2 8 needle 128k (average) | 66.4% | 44.1% | 25.4% | 19.1% | 13.5% |

## **Core Capabilities**

Gemma 4 models handle a broad range of tasks across text, vision, and audio. Key capabilities include:

* **Thinking** – Built-in reasoning mode that lets the model think step-by-step before answering.
* **Long Context** – Context windows of up to 128K tokens (E2B/E4B) and 256K tokens (26B A4B/31B).
* **Image Understanding** – Object detection, document/PDF parsing, screen and UI understanding, chart comprehension, OCR (including multilingual), handwriting recognition, and pointing. Images can be processed at variable aspect ratios and resolutions.
* **Video Understanding** – Analyze video by processing sequences of frames.
* **Interleaved Multimodal Input** – Freely mix text and images in any order within a single prompt.
* **Function Calling** – Native support for structured tool use, enabling agentic workflows (see the sketch after this list).
* **Coding** – Code generation, completion, and correction.
* **Multilingual** – Out-of-the-box support for 35+ languages, pre-trained on 140+ languages.
* **Audio** (E2B and E4B only) – Automatic speech recognition (ASR) and speech-to-translated-text translation across multiple languages.
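
In Transformers, tool schemas can be passed to the chat template. The snippet below is a minimal sketch rather than an official recipe: the `get_weather` function is an illustrative placeholder, the `tools` argument requires a recent Transformers version, and the exact tool-call output format is defined by the model's chat template.

```python
from transformers import AutoProcessor

MODEL_ID = "google/gemma-4-E4B-it"

def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    ...  # illustrative placeholder; a real tool would call a weather API

processor = AutoProcessor.from_pretrained(MODEL_ID)

messages = [
    {"role": "system", "content": "You are a helpful assistant with tool access."},
    {"role": "user", "content": "What's the weather in Zurich right now?"},
]

# Transformers converts the function signature and docstring into a JSON
# schema that the chat template exposes to the model as an available tool.
prompt = processor.apply_chat_template(
    messages,
    tools=[get_weather],
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
```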


## Getting Started

You can use all Gemma 4 models with the latest version of Transformers. To get started, install the necessary dependencies in your environment:

`pip install -U transformers torch accelerate`

Once you have everything installed, you can proceed to load the model with the code below:

```python
from transformers import AutoProcessor, AutoModelForCausalLM

MODEL_ID = "google/gemma-4-E4B-it"

# Load model
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    dtype="auto",
    device_map="auto"
)
```

Once the model is loaded, you can start generating output:

```python
# Prompt
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a short joke about saving RAM."},
]

# Process input
text = processor.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False
)
inputs = processor(text=text, return_tensors="pt").to(model.device)
input_len = inputs["input_ids"].shape[-1]

# Generate output
outputs = model.generate(**inputs, max_new_tokens=1024)
response = processor.decode(outputs[0][input_len:], skip_special_tokens=False)

# Parse output
processor.parse_response(response)
```

To enable reasoning, set `enable_thinking=True`; the `parse_response` function will then also take care of parsing the thinking output.
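
For example, here is a minimal sketch that reruns the same prompt with thinking enabled, reusing the `processor`, `model`, and `messages` defined above (the exact structure returned by `parse_response` may vary with your Transformers version):

```python
# Same prompt as above, but with thinking enabled
text = processor.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # let the model reason step-by-step before answering
)
inputs = processor(text=text, return_tensors="pt").to(model.device)
input_len = inputs["input_ids"].shape[-1]

# Thinking traces can be long, so leave extra room for new tokens
outputs = model.generate(**inputs, max_new_tokens=4096)
response = processor.decode(outputs[0][input_len:], skip_special_tokens=False)

# parse_response separates the thinking content from the final answer
print(processor.parse_response(response))
```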

Below, you will also find snippets for processing audio (E2B and E4B only), images, and video alongside text:

<details>
<summary>Code for processing Audio</summary>

Instead of using `AutoModelForCausalLM`, you can use `AutoModelForMultimodalLM` to process audio. To use it, make sure to install the following packages:

`pip install -U transformers torch torchvision librosa accelerate`

You can then load the model with the code below:

```python
from transformers import AutoProcessor, AutoModelForMultimodalLM

MODEL_ID = "google/gemma-4-E4B-it"

# Load model
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForMultimodalLM.from_pretrained(
    MODEL_ID,
    dtype="auto",
    device_map="auto"
)
```

Once the model is loaded, you can start generating output by directly referencing the audio URL in the prompt:

```python
# Prompt - add audio before text
messages = [
    {
        "role": "user",
        "content": [
            {"type": "audio", "audio": "https://raw.githubusercontent.com/google-gemma/cookbook/refs/heads/main/Demos/sample-data/journal1.wav"},
            {"type": "text", "text": "Transcribe the following speech segment in its original language. Follow these specific instructions for formatting the answer:\n* Only output the transcription, with no newlines.\n* When transcribing numbers, write the digits, i.e. write 1.7 and not one point seven, and write 3 instead of three."},
        ]
    }
]

# Process input
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
    add_generation_prompt=True,
).to(model.device)
input_len = inputs["input_ids"].shape[-1]

# Generate output
outputs = model.generate(**inputs, max_new_tokens=512)
response = processor.decode(outputs[0][input_len:], skip_special_tokens=False)

# Parse output
processor.parse_response(response)
```

</details>

<details>
<summary>Code for processing Images</summary>

Instead of using `AutoModelForCausalLM`, you can use `AutoModelForMultimodalLM` to process images. To use it, make sure to install the following packages:

`pip install -U transformers torch torchvision accelerate`

You can then load the model with the code below:

```python
from transformers import AutoProcessor, AutoModelForMultimodalLM

MODEL_ID = "google/gemma-4-E4B-it"

# Load model
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForMultimodalLM.from_pretrained(
    MODEL_ID,
    dtype="auto",
    device_map="auto"
)
```

Once the model is loaded, you can start generating output by directly referencing the image URL in the prompt:

```python
# Prompt - add image before text
messages = [
    {
        "role": "user", "content": [
            {"type": "image", "url": "https://raw.githubusercontent.com/google-gemma/cookbook/refs/heads/main/Demos/sample-data/GoldenGate.png"},
            {"type": "text", "text": "What is shown in this image?"}
        ]
    }
]

# Process input
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
    add_generation_prompt=True,
).to(model.device)
input_len = inputs["input_ids"].shape[-1]

# Generate output
outputs = model.generate(**inputs, max_new_tokens=512)
response = processor.decode(outputs[0][input_len:], skip_special_tokens=False)

# Parse output
processor.parse_response(response)
```

</details>


<details>
<summary>Code for processing Videos</summary>

Instead of using `AutoModelForCausalLM`, you can use `AutoModelForMultimodalLM` to process videos. To use it, make sure to install the following packages:

`pip install -U transformers torch torchvision librosa accelerate`

You can then load the model with the code below:

```python
from transformers import AutoProcessor, AutoModelForMultimodalLM

MODEL_ID = "google/gemma-4-E4B-it"

# Load model
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForMultimodalLM.from_pretrained(
    MODEL_ID,
    dtype="auto",
    device_map="auto"
)
```

Once the model is loaded, you can start generating output by directly referencing the video URL in the prompt:

```python
# Prompt - add video before text
messages = [
    {
        "role": "user",
        "content": [
            {"type": "video", "video": "https://github.com/bebechien/gemma/raw/refs/heads/main/videos/ForBiggerBlazes.mp4"},
            {"type": "text", "text": "Describe this video."}
        ]
    }
]

# Process input
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
    add_generation_prompt=True,
).to(model.device)
input_len = inputs["input_ids"].shape[-1]

# Generate output
outputs = model.generate(**inputs, max_new_tokens=512)
response = processor.decode(outputs[0][input_len:], skip_special_tokens=False)

# Parse output
processor.parse_response(response)
```

</details>



## **Best Practices**

For the best results, use the following configurations and practices:

### 1. Sampling Parameters

Use the following standardized sampling configuration across all use cases (a short sketch follows the list):

* `temperature=1.0`
* `top_p=0.95`
* `top_k=64`
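
As a minimal sketch, these values map directly onto the standard `generate` arguments (reusing the `model` and `inputs` from the Getting Started example above):

```python
# Recommended sampling configuration for Gemma 4
outputs = model.generate(
    **inputs,
    max_new_tokens=1024,
    do_sample=True,   # sampling must be enabled for temperature/top_p/top_k to take effect
    temperature=1.0,
    top_p=0.95,
    top_k=64,
)
```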

### 2. Thinking Mode Configuration

Unlike Gemma 3, Gemma 4 models use the standard `system`, `assistant`, and `user` roles. To properly manage the thinking process, use the following control tokens:

* **Trigger Thinking:** Thinking is enabled by including the `<|think|>` token at the start of the system prompt. To disable thinking, remove the token.
* **Standard Generation:** When thinking is enabled, the model outputs its internal reasoning followed by the final answer, using this structure:
  `<|channel>thought\n`**[Internal reasoning]**`<channel|>`
* **Disabled Thinking Behavior:** For all models except the E2B and E4B variants, if thinking is disabled the model will still generate the tags, but with an empty thought block:
  `<|channel>thought\n<channel|>`**[Final answer]**

> [!NOTE]
> Many libraries, such as Transformers and llama.cpp, handle the complexities of the chat template for you.

### 3. Multi-Turn Conversations

* **No Thinking Content in History**: In multi-turn conversations, the historical model output should include only the final response. Thoughts from previous model turns must *not* be added before the next user turn begins (see the sketch below).
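
A minimal sketch of what a conversation history should look like before the next generation call, reusing the `processor` loaded earlier; only the assistant's final answers are carried forward, never its thinking content:

```python
# History for turn 3: assistant turns contain only final answers,
# with any thinking content from earlier turns stripped out.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a one-line summary of what RAM is."},
    {"role": "assistant", "content": "RAM is fast, temporary memory a computer uses for data it is actively working on."},
    {"role": "user", "content": "Now explain it to a five-year-old."},
]

text = processor.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # thinking can still be enabled for the current turn
)
```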

### 4. Modality Order

* For optimal performance with multimodal inputs, place image and/or audio content **before** the text in your prompt.

### 5. Variable Image Resolution

Aside from variable aspect ratios, Gemma 4 supports variable image resolution through a configurable visual token budget, which controls how many tokens are used to represent an image. A higher token budget preserves more visual detail at the cost of additional compute, while a lower budget enables faster inference for tasks that don't require fine-grained understanding. A configuration sketch follows the list below.

* The supported token budgets are: **70**, **140**, **280**, **560**, and **1120**.
* Use *lower budgets* for classification, captioning, or video understanding, where faster inference and processing many frames outweigh fine-grained detail.
* Use *higher budgets* for tasks like OCR, document parsing, or reading small text.
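
The sketch below shows where such a setting would typically be applied, at preprocessing time. The keyword `image_token_budget` is hypothetical and used purely for illustration; check the Gemma 4 processor documentation for the actual argument name and whether it is set per call or in the processor configuration.

```python
from PIL import Image

# Hypothetical sketch: `image_token_budget` is an illustrative name, not a confirmed argument.
# Lower budgets (e.g. 70) favor throughput; higher budgets (e.g. 1120) favor visual detail.
image = Image.open("document_page.png")  # placeholder path
inputs = processor(
    text=text,  # prompt text produced by apply_chat_template for an image message
    images=[image],
    return_tensors="pt",
    image_token_budget=1120,  # hypothetical kwarg; a high budget suits document parsing/OCR
).to(model.device)
```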

### 6. Audio

Use the following prompt structures for audio processing:

* **Automatic Speech Recognition (ASR)**

  ```text
  Transcribe the following speech segment in {LANGUAGE} into {LANGUAGE} text.

  Follow these specific instructions for formatting the answer:
  * Only output the transcription, with no newlines.
  * When transcribing numbers, write the digits, i.e. write 1.7 and not one point seven, and write 3 instead of three.
  ```

* **Automatic Speech Translation (AST)**

  ```text
  Transcribe the following speech segment in {SOURCE_LANGUAGE}, then translate it into {TARGET_LANGUAGE}.
  When formatting the answer, first output the transcription in {SOURCE_LANGUAGE}, then one newline, then output the string '{TARGET_LANGUAGE}: ', then the translation in {TARGET_LANGUAGE}.
  ```

### 7. Audio and Video Length

All models support image inputs and can process videos as sequences of frames, while only the E2B and E4B models also support audio inputs. Audio inputs are limited to a maximum length of 30 seconds. Video is limited to a maximum of 60 seconds, assuming frames are sampled at one frame per second.
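
If your audio clips may exceed the limit, you can trim them at load time. Below is a minimal sketch using `librosa` (already included in the audio install command above); the file path is a placeholder, and 16 kHz is used as a typical speech sampling rate rather than a confirmed requirement, so check the processor's feature extractor for the expected rate:

```python
import librosa

# Load at most the first 30 seconds of the clip as mono audio resampled to 16 kHz.
audio, sr = librosa.load("my_recording.wav", sr=16000, mono=True, duration=30.0)
print(f"Loaded {audio.shape[0] / sr:.1f} s of audio at {sr} Hz")
```

Depending on your Transformers version, the trimmed array can then be passed to the processor in place of the audio URL used in the earlier audio snippet.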

## **Model Data**

Data used for model training and how the data was processed.

### **Training Dataset**

Our pre-training dataset is a large-scale, diverse collection of data encompassing a wide range of domains and modalities, including web documents, code, images, and audio, with a cutoff date of January 2025. Here are the key components:

* **Web Documents**: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. The training dataset includes content in over 140 languages.
* **Code**: Exposing the model to code helps it learn the syntax and patterns of programming languages, which improves its ability to generate code and understand code-related questions.
* **Mathematics**: Training on mathematical text helps the model learn logical reasoning and symbolic representation, and to address mathematical queries.
* **Images**: A wide range of images enables the model to perform image analysis and visual data extraction tasks.

The combination of these diverse data sources is crucial for training a powerful multimodal model that can handle a wide variety of tasks and data formats.

### **Data Preprocessing**

Here are the key data cleaning and filtering methods applied to the training data:

* **CSAM Filtering**: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content.
* **Sensitive Data Filtering**: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets.
* **Additional Methods**: Filtering based on content quality and safety in line with [our policies](https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf).

## **Ethics and Safety**

As open models become central to enterprise infrastructure, provenance and security are paramount. Developed by Google DeepMind, Gemma 4 undergoes the same rigorous safety evaluations as our proprietary Gemini models.

### **Evaluation Approach**

Gemma 4 models were developed in partnership with internal safety and responsible AI teams. A range of automated and human evaluations were conducted to help improve model safety. These evaluations align with [Google’s AI principles](https://ai.google/principles/), as well as safety policies that aim to prevent our generative AI models from generating harmful content, including:

* Content related to child sexual abuse material and exploitation
* Dangerous content (e.g., promoting suicide, or instructing in activities that could cause real-world harm)
* Sexually explicit content
* Hate speech (e.g., dehumanizing members of protected groups)
* Harassment (e.g., encouraging violence against people)

### **Evaluation Results**

Across all areas of safety testing, we saw major improvements in every category of content safety relative to previous Gemma models. Overall, Gemma 4 models significantly improve on the safety of Gemma 3 and 3n models while keeping unjustified refusals low. All testing was conducted without safety filters in order to evaluate the models' own capabilities and behaviors. For both text-to-text and image-to-text, and across all model sizes, the models produced minimal policy violations and showed significant improvements over previous Gemma generations.

## **Usage and Limitations**

These models have certain limitations that users should be aware of.

### **Intended Usage**

Multimodal models (capable of processing vision, language, and/or audio) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development.

* **Content Creation and Communication**
    * **Text Generation**: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts.
    * **Chatbots and Conversational AI**: Power conversational interfaces for customer service, virtual assistants, or interactive applications.
    * **Text Summarization**: Generate concise summaries of a text corpus, research papers, or reports.
    * **Image Data Extraction**: These models can be used to extract, interpret, and summarize visual data for text communications.
    * **Audio Processing and Interaction**: The smaller models (E2B and E4B) can analyze and interpret audio inputs, enabling voice-driven interactions and transcriptions.
* **Research and Education**
    * **Natural Language Processing (NLP) and VLM Research**: These models can serve as a foundation for researchers to experiment with VLM and NLP techniques, develop algorithms, and contribute to the advancement of the field.
    * **Language Learning Tools**: Support interactive language learning experiences, aiding in grammar correction or providing writing practice.
    * **Knowledge Exploration**: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics.

### **Limitations**

* **Training Data**
    * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses.
    * The scope of the training dataset determines the subject areas the model can handle effectively.
* **Context and Task Complexity**
    * Models perform well on tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging.
    * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point).
* **Language Ambiguity and Nuance**
    * Natural language is inherently complex. Models might struggle to grasp subtle nuances, sarcasm, or figurative language.
* **Factual Accuracy**
    * Models generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements.
* **Common Sense**
    * Models rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations.

### **Ethical Considerations and Risks**

The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following:

* **Bias and Fairness**
    * VLMs trained on large-scale, real-world text and image data can reflect socio-cultural biases embedded in the training material. Gemma 4 models underwent careful scrutiny, input data pre-processing, and post-training evaluations, as reported in this card, to help mitigate the risk of these biases.
* **Misinformation and Misuse**
    * VLMs can be misused to generate text that is false, misleading, or harmful.
    * Guidelines are provided for responsible use with the model; see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible).
* **Transparency and Accountability**
    * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes.
    * A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem.

**Risks identified and mitigations**:

* **Generation of harmful content**: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases.
* **Misuse for malicious purposes**: Technical limitations along with developer and end-user education can help mitigate malicious applications of VLMs. Educational resources and reporting mechanisms for users to flag misuse are provided.
* **Privacy violations**: Models were trained on data filtered to remove certain personal information and other sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.
* **Perpetuation of biases**: Developers are encouraged to perform continuous monitoring (using evaluation metrics and human review) and to explore de-biasing techniques during model training, fine-tuning, and other use cases.

### **Benefits**

At the time of release, this family of models provides high-performance, open vision-language model implementations designed from the ground up for responsible AI development, offering strong performance relative to similarly sized models.