---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-14B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-14B-Base
---

# Qwen3-14B
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>

## Qwen3 Highlights

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:

- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significant enhancement of its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.

## Model Overview

**Qwen3-14B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 14.8B
- Number of Parameters (Non-Embedding): 13.2B
- Number of Layers: 40
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: 32,768 tokens natively and [131,072 tokens with YaRN](#processing-long-texts).

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).

## Quickstart

The code for Qwen3 has been included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.

With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```

The following code snippet illustrates how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-14B"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# parse the thinking content by locating the last </think> token (id 151668)
try:
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)
```

For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
  ```shell
  python -m sglang.launch_server --model-path Qwen/Qwen3-14B --reasoning-parser qwen3
  ```
- vLLM:
  ```shell
  vllm serve Qwen/Qwen3-14B --enable-reasoning --reasoning-parser deepseek_r1
  ```
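
Once the endpoint is up, you can query it with any OpenAI-compatible client. Below is a minimal sketch using the `openai` Python package; the base URL and port assume vLLM's local default (SGLang defaults to port 30000), and the `EMPTY` API key is a placeholder for a local server.

```python
from openai import OpenAI

# Assumes a local OpenAI-compatible server started as shown above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-14B",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    temperature=0.6,
    top_p=0.95,
)
print(response.choices[0].message.content)
```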

For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.

## Switching Between Thinking and Non-Thinking Mode

> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
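
For example, with a vLLM server, the switch can typically be passed per request through `chat_template_kwargs` in the `extra_body` of an OpenAI-compatible call. A sketch, assuming a local server as in the Quickstart; check the linked documentation for the exact parameter names in your framework version:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Disable thinking for this request only; the template-level default stays True.
response = client.chat.completions.create(
    model="Qwen/Qwen3-14B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    extra_body={"chat_template_kwargs": {"enable_thinking": False}},
)
print(response.choices[0].message.content)
```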

### `enable_thinking=True`

By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.

```python
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # True is the default value for enable_thinking
)
```

In this mode, the model will generate thinking content wrapped in a `<think>...</think>` block, followed by the final response.

> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
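
These values ship as defaults in `generation_config.json`, so `model.generate` picks them up automatically. If you need to set them explicitly, a minimal sketch building on the Quickstart snippet:

```python
# Explicit sampling parameters for thinking mode; do_sample=True avoids greedy decoding.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
```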


### `enable_thinking=False`

We provide a hard switch that strictly disables the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for efficiency.

```python
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False  # Setting enable_thinking=False disables thinking mode
)
```

In this mode, the model will not generate any thinking content and will not include a `<think>...</think>` block.

> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.

### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input

We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.

Here is an example of a multi-turn conversation:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

class QwenChatbot:
    def __init__(self, model_name="Qwen/Qwen3-14B"):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForCausalLM.from_pretrained(model_name)
        self.history = []

    def generate_response(self, user_input):
        messages = self.history + [{"role": "user", "content": user_input}]

        text = self.tokenizer.apply_chat_template(
            messages,
            tokenize=False,
            add_generation_prompt=True
        )

        inputs = self.tokenizer(text, return_tensors="pt")
        response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
        response = self.tokenizer.decode(response_ids, skip_special_tokens=True)

        # Update history
        self.history.append({"role": "user", "content": user_input})
        self.history.append({"role": "assistant", "content": response})

        return response

# Example Usage
if __name__ == "__main__":
    chatbot = QwenChatbot()

    # First input (without /think or /no_think tags, thinking mode is enabled by default)
    user_input_1 = "How many r's in strawberries?"
    print(f"User: {user_input_1}")
    response_1 = chatbot.generate_response(user_input_1)
    print(f"Bot: {response_1}")
    print("----------------------")

    # Second input with /no_think
    user_input_2 = "Then, how many r's in blueberries? /no_think"
    print(f"User: {user_input_2}")
    response_2 = chatbot.generate_response(user_input_2)
    print(f"Bot: {response_2}")
    print("----------------------")

    # Third input with /think
    user_input_3 = "Really? /think"
    print(f"User: {user_input_3}")
    response_3 = chatbot.generate_response(user_input_3)
    print(f"Bot: {response_3}")
```

> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate thinking content and will not include a `<think>...</think>` block.
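
Because the `<think>...</think>` block is always emitted in thinking mode (possibly empty), response parsing can stay uniform across both soft-switch states. A small sketch of a text-level helper, mirroring the token-based parsing from the Quickstart (`split_thinking` is a hypothetical name):

```python
def split_thinking(response: str) -> tuple[str, str]:
    """Split a decoded response into (thinking_content, final_content)."""
    end_tag = "</think>"
    index = response.rfind(end_tag)
    if index == -1:
        # No think block at all, e.g. when enable_thinking=False
        return "", response.strip()
    thinking = response[:index].replace("<think>", "").strip()
    content = response[index + len(end_tag):].strip()
    return thinking, content

# split_thinking("<think></think>There are 3 r's.") -> ("", "There are 3 r's.")
```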

## Agentic Use

Qwen3 excels in tool-calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.

To define the available tools, you can use an MCP configuration file, use the built-in tools of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant

# Define LLM
llm_cfg = {
    'model': 'Qwen3-14B',

    # Use the endpoint provided by Alibaba Model Studio:
    # 'model_type': 'qwen_dashscope',
    # 'api_key': os.getenv('DASHSCOPE_API_KEY'),

    # Use a custom endpoint compatible with OpenAI API:
    'model_server': 'http://localhost:8000/v1',  # api_base
    'api_key': 'EMPTY',

    # Other parameters:
    # 'generate_cfg': {
    #     # Add: when the response content is `<think>this is the thought</think>this is the answer`;
    #     # Do not add: when the response has been separated into reasoning_content and content.
    #     'thought_in_content': True,
    # },
}

# Define Tools
tools = [
    {'mcpServers': {  # You can specify the MCP configuration file
        'time': {
            'command': 'uvx',
            'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
        },
        "fetch": {
            "command": "uvx",
            "args": ["mcp-server-fetch"]
        }
    }},
    'code_interpreter',  # Built-in tools
]

# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)

# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
    pass
print(responses)
```

## Processing Long Texts

Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.

YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:

- Modifying the model files:
  In the `config.json` file, add the `rope_scaling` fields:
  ```json
  {
      ...,
      "rope_scaling": {
          "rope_type": "yarn",
          "factor": 4.0,
          "original_max_position_embeddings": 32768
      }
  }
  ```
  For `llama.cpp`, you need to regenerate the GGUF file after the modification. For `transformers`, the same override can also be passed at load time; see the sketch after this list.

- Passing command line arguments:

  For `vllm`, you can use
  ```shell
  vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
  ```

  For `sglang`, you can use
  ```shell
  python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
  ```

  For `llama-server` from `llama.cpp`, you can use
  ```shell
  llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
  ```
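
For `transformers`, as an alternative to editing `config.json` by hand, the override can typically be passed to `from_pretrained`, since extra keyword arguments update the loaded config. A sketch under that assumption:

```python
from transformers import AutoModelForCausalLM

# Equivalent to adding the rope_scaling block to config.json.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-14B",
    torch_dtype="auto",
    device_map="auto",
    rope_scaling={
        "rope_type": "yarn",
        "factor": 4.0,
        "original_max_position_embeddings": 32768,
    },
)
```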

> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade to `transformers>=4.51.0`.

> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` to 2.0.

> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation reserves 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN, as it may degrade model performance.

> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default, and no extra configuration is needed.

## Best Practices

To achieve optimal performance, we recommend the following settings:

1. **Sampling Parameters**:
   - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
   - For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
   - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.

2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.

3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
   - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
   - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."

4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should include only the final output part, not the thinking content. This is implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that this best practice is followed, as sketched below.
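
For such frameworks, a minimal sketch of stripping the thinking block before appending an assistant turn to the history (`strip_thinking` is a hypothetical helper name):

```python
import re

def strip_thinking(response: str) -> str:
    """Remove any <think>...</think> block, keeping only the final answer."""
    return re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL).strip()

# When building multi-turn history by hand:
# history.append({"role": "assistant", "content": strip_thinking(response)})
```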

### Citation

If you find our work helpful, feel free to cite it.

```
@misc{qwen3technicalreport,
    title={Qwen3 Technical Report},
    author={Qwen Team},
    year={2025},
    eprint={2505.09388},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2505.09388},
}
```