---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507/blob/main/LICENSE
pipeline_tag: text-generation
---

# Qwen3-4B-Instruct-2507
<a href="https://chat.qwen.ai" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>

## Highlights

We introduce the updated version of the **Qwen3-4B non-thinking mode**, named **Qwen3-4B-Instruct-2507**, featuring the following key enhancements:

- **Significant improvements** in general capabilities, including **instruction following, logical reasoning, text comprehension, mathematics, science, coding and tool usage**.
- **Substantial gains** in long-tail knowledge coverage across **multiple languages**.
- **Markedly better alignment** with user preferences in **subjective and open-ended tasks**, enabling more helpful responses and higher-quality text generation.
- **Enhanced capabilities** in **256K long-context understanding**.



## Model Overview

**Qwen3-4B-Instruct-2507** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 4.0B
- Number of Parameters (Non-Embedding): 3.6B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: **262,144 natively**.

**NOTE: This model supports only non-thinking mode and does not generate ``<think></think>`` blocks in its output. Additionally, specifying `enable_thinking=False` is no longer required.**
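If you want to sanity-check these numbers locally, they can be read from the model config. A minimal sketch (the field names follow the standard Hugging Face config conventions for Qwen3; treat it as illustrative):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
print(config.max_position_embeddings)  # native context length: 262144
print(config.num_hidden_layers)        # number of layers: 36
print(config.num_attention_heads)      # query heads: 32
print(config.num_key_value_heads)      # KV heads (GQA): 8
```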

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).


## Performance

|  | GPT-4.1-nano-2025-04-14 | Qwen3-30B-A3B Non-Thinking | Qwen3-4B Non-Thinking | Qwen3-4B-Instruct-2507 |
|--- | --- | --- | --- | --- |
| **Knowledge** | | | | |
| MMLU-Pro | 62.8 | 69.1 | 58.0 | **69.6** |
| MMLU-Redux | 80.2 | 84.1 | 77.3 | **84.2** |
| GPQA | 50.3 | 54.8 | 41.7 | **62.0** |
| SuperGPQA | 32.2 | 42.2 | 32.0 | **42.8** |
| **Reasoning** | | | | |
| AIME25 | 22.7 | 21.6 | 19.1 | **47.4** |
| HMMT25 | 9.7 | 12.0 | 12.1 | **31.0** |
| ZebraLogic | 14.8 | 33.2 | 35.2 | **80.2** |
| LiveBench 20241125 | 41.5 | 59.4 | 48.4 | **63.0** |
| **Coding** | | | | |
| LiveCodeBench v6 (25.02-25.05) | 31.5 | 29.0 | 26.4 | **35.1** |
| MultiPL-E | 76.3 | 74.6 | 66.6 | **76.8** |
| Aider-Polyglot | 9.8 | **24.4** | 13.8 | 12.9 |
| **Alignment** | | | | |
| IFEval | 74.5 | **83.7** | 81.2 | 83.4 |
| Arena-Hard v2* | 15.9 | 24.8 | 9.5 | **43.4** |
| Creative Writing v3 | 72.7 | 68.1 | 53.6 | **83.5** |
| WritingBench | 66.9 | 72.2 | 68.5 | **83.4** |
| **Agent** | | | | |
| BFCL-v3 | 53.0 | 58.6 | 57.6 | **61.9** |
| TAU1-Retail | 23.5 | 38.3 | 24.3 | **48.7** |
| TAU1-Airline | 14.0 | 18.0 | 16.0 | **32.0** |
| TAU2-Retail | - | 31.6 | 28.1 | **40.4** |
| TAU2-Airline | - | 18.0 | 12.0 | **24.0** |
| TAU2-Telecom | - | **18.4** | 17.5 | 13.2 |
| **Multilingualism** | | | | |
| MultiIF | 60.7 | **70.8** | 61.3 | 69.0 |
| MMLU-ProX | 56.2 | **65.1** | 49.6 | 61.6 |
| INCLUDE | 58.6 | **67.8** | 53.8 | 60.1 |
| PolyMATH | 15.6 | 23.3 | 16.6 | **31.1** |

*: For reproducibility, we report the win rates evaluated by GPT-4.1.


## Quickstart

The code for Qwen3 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.

With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
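If you see this error, upgrading `transformers` usually resolves it, e.g.:
```shell
pip install -U transformers
```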

The following code snippet illustrates how to use the model to generate content from a given input.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-4B-Instruct-2507"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=16384
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

content = tokenizer.decode(output_ids, skip_special_tokens=True)

print("content:", content)
```
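Note that `max_new_tokens=16384` is only an upper bound; generation stops earlier once the model emits its end-of-sequence token. The value matches the output length recommended under Best Practices below.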

For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
    ```shell
    python -m sglang.launch_server --model-path Qwen/Qwen3-4B-Instruct-2507 --context-length 262144
    ```
- vLLM:
    ```shell
    vllm serve Qwen/Qwen3-4B-Instruct-2507 --max-model-len 262144
    ```

**Note: If you encounter out-of-memory (OOM) issues, consider reducing the context length to a shorter value, such as `32,768`.**
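For example, to serve with a 32K window, change the flag in the vLLM command above (the SGLang equivalent is `--context-length 32768`):
```shell
vllm serve Qwen/Qwen3-4B-Instruct-2507 --max-model-len 32768
```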

For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.

## Agentic Use

Qwen3 excels in tool-calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.

To define the available tools, you can use an MCP configuration file, use the integrated tools of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant

# Define LLM
llm_cfg = {
    'model': 'Qwen3-4B-Instruct-2507',

    # Use a custom endpoint compatible with OpenAI API:
    'model_server': 'http://localhost:8000/v1',  # api_base
    'api_key': 'EMPTY',
}

# Define Tools
tools = [
    {'mcpServers': {  # You can specify the MCP configuration file
            'time': {
                'command': 'uvx',
                'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
            },
            "fetch": {
                "command": "uvx",
                "args": ["mcp-server-fetch"]
            }
        }
    },
    'code_interpreter',  # Built-in tools
]

# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)

# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
    pass
print(responses)
```
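Note that `bot.run` streams the generation: each iteration yields the list of response messages accumulated so far, which is why the loop body is empty and `responses` printed after the loop holds the complete final output.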

## Best Practices

To achieve optimal performance, we recommend the following settings:

1. **Sampling Parameters**:
   - We suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0` (a sample request with these settings follows this list).
   - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.

2. **Adequate Output Length**: We recommend using an output length of 16,384 tokens for most queries, which is adequate for instruct models.

3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
   - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
   - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
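
As a minimal sketch of these settings against the OpenAI-compatible endpoint from the Quickstart (passing `top_k` and `min_p` through `extra_body` follows vLLM's OpenAI-server convention; other stacks may expose them differently):

```python
from openai import OpenAI

# Point the client at the local OpenAI-compatible server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-4B-Instruct-2507",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    temperature=0.7,
    top_p=0.8,
    presence_penalty=1.5,  # optional, 0-2, to reduce endless repetitions
    max_tokens=16384,      # adequate output length for most queries
    extra_body={"top_k": 20, "min_p": 0},  # vLLM accepts these via extra_body
)
print(response.choices[0].message.content)
```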

### Citation

If you find our work helpful, feel free to cite it.

```bibtex
@misc{qwen3technicalreport,
      title={Qwen3 Technical Report},
      author={Qwen Team},
      year={2025},
      eprint={2505.09388},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.09388},
}
```