---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-0.6B-FP8/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-0.6B
---

# Qwen3-0.6B-FP8
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>

## Qwen3 Highlights

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction following, agent capabilities, and multilingual support, with the following key features:

- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) **and non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significantly enhanced reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogue, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.

## Model Overview

This repo contains the FP8 version of **Qwen3-0.6B**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 0.6B
- Number of Parameters (Non-Embedding): 0.44B
- Number of Layers: 28
- Number of Attention Heads (GQA): 16 for Q and 8 for KV
- Context Length: 32,768

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).

> [!TIP]
> If you encounter significant endless repetitions, please refer to the [Best Practices](#best-practices) section for optimal sampling parameters, and set the ``presence_penalty`` to 1.5.

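For reference, the following is a minimal sketch of applying the recommended thinking-mode sampling settings together with a raised `presence_penalty` via vLLM's offline `LLM.chat` API. The exact parameter set shown is an assumption; consult the vLLM documentation for your version.

```python
from vllm import LLM, SamplingParams

# Recommended thinking-mode settings (see Best Practices), with
# presence_penalty raised to 1.5 to curb endless repetitions.
sampling_params = SamplingParams(
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0,
    presence_penalty=1.5,
    max_tokens=32768,
)

llm = LLM(model="Qwen/Qwen3-0.6B-FP8")
messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
outputs = llm.chat(messages, sampling_params)
print(outputs[0].outputs[0].text)
```
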
## Quickstart

The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.

With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```

The following code snippet illustrates how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-0.6B-FP8"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# parsing thinking content
try:
    # rindex finding 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)
```

For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
    ```shell
    python -m sglang.launch_server --model-path Qwen/Qwen3-0.6B-FP8 --reasoning-parser qwen3
    ```
- vLLM:
    ```shell
    vllm serve Qwen/Qwen3-0.6B-FP8 --enable-reasoning --reasoning-parser deepseek_r1
    ```
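
Either server exposes an OpenAI-compatible API, so any standard client can query it. Below is a minimal sketch using the `openai` Python package; the port (8000 is the vLLM default, while SGLang defaults to 30000) and the placeholder API key are assumptions for a local deployment:

```python
from openai import OpenAI

# Point the client at the locally served endpoint; the key is a placeholder
# because the local server does not verify it.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-0.6B-FP8",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    temperature=0.6,
    top_p=0.95,
    max_tokens=32768,
)
print(response.choices[0].message.content)
```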

For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.

## Note on FP8

For convenience and performance, we provide an `fp8`-quantized model checkpoint for Qwen3, whose name ends with `-FP8`. The quantization method is fine-grained `fp8` quantization with a block size of 128. You can find more details in the `quantization_config` field in `config.json`.

You can use the Qwen3-0.6B-FP8 model with several inference frameworks, including `transformers`, `sglang`, and `vllm`, just as you would the original bfloat16 model.
However, please pay attention to the following known issues:
- `transformers`:
    - There are currently issues with the "fine-grained fp8" method in `transformers` for distributed inference. You may need to set the environment variable `CUDA_LAUNCH_BLOCKING=1` if multiple devices are used in inference.

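To verify the quantization settings of a downloaded checkpoint, you can read them back from the model config. A minimal sketch follows; the exact field layout of `quantization_config` may vary across `transformers` versions:

```python
from transformers import AutoConfig

# Load only the configuration, not the weights.
config = AutoConfig.from_pretrained("Qwen/Qwen3-0.6B-FP8")

# Expected to show the fp8 method and the 128 weight block size
# described above.
print(config.quantization_config)
```
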
## Switching Between Thinking and Non-Thinking Mode

> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.

### `enable_thinking=True`

By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.

```python
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # True is the default value for enable_thinking
)
```

In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.

> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.

### `enable_thinking=False`

We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.

```python
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False  # Setting enable_thinking=False disables thinking mode
)
```

In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.

> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.

### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input

We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.

Here is an example of a multi-turn conversation:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

class QwenChatbot:
    def __init__(self, model_name="Qwen/Qwen3-0.6B-FP8"):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForCausalLM.from_pretrained(model_name)
        self.history = []

    def generate_response(self, user_input):
        messages = self.history + [{"role": "user", "content": user_input}]

        text = self.tokenizer.apply_chat_template(
            messages,
            tokenize=False,
            add_generation_prompt=True
        )

        inputs = self.tokenizer(text, return_tensors="pt")
        response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
        response = self.tokenizer.decode(response_ids, skip_special_tokens=True)

        # Update history
        self.history.append({"role": "user", "content": user_input})
        self.history.append({"role": "assistant", "content": response})

        return response

# Example Usage
if __name__ == "__main__":
    chatbot = QwenChatbot()

    # First input (without /think or /no_think tags, thinking mode is enabled by default)
    user_input_1 = "How many r's in strawberries?"
    print(f"User: {user_input_1}")
    response_1 = chatbot.generate_response(user_input_1)
    print(f"Bot: {response_1}")
    print("----------------------")

    # Second input with /no_think
    user_input_2 = "Then, how many r's in blueberries? /no_think"
    print(f"User: {user_input_2}")
    response_2 = chatbot.generate_response(user_input_2)
    print(f"Bot: {response_2}")
    print("----------------------")

    # Third input with /think
    user_input_3 = "Really? /think"
    print(f"User: {user_input_3}")
    response_3 = chatbot.generate_response(user_input_3)
    print(f"Bot: {response_3}")
```

> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.

## Agentic Use

Qwen3 excels in tool-calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.

To define the available tools, you can use an MCP configuration file, use the integrated tools of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant

# Define LLM
llm_cfg = {
    'model': 'Qwen3-0.6B-FP8',

    # Use the endpoint provided by Alibaba Model Studio:
    # 'model_type': 'qwen_dashscope',
    # 'api_key': os.getenv('DASHSCOPE_API_KEY'),

    # Use a custom endpoint compatible with OpenAI API:
    'model_server': 'http://localhost:8000/v1',  # api_base
    'api_key': 'EMPTY',

    # Other parameters:
    # 'generate_cfg': {
    #     # Add: when the response content is `<think>this is the thought</think>this is the answer`;
    #     # Do not add: when the response has been separated into reasoning_content and content.
    #     'thought_in_content': True,
    # },
}

# Define Tools
tools = [
    {'mcpServers': {  # You can specify the MCP configuration file
            'time': {
                'command': 'uvx',
                'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
            },
            "fetch": {
                "command": "uvx",
                "args": ["mcp-server-fetch"]
            }
        }
    },
    'code_interpreter',  # Built-in tools
]

# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)

# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
    pass
print(responses)
```

## Best Practices

To achieve optimal performance, we recommend the following settings:

1. **Sampling Parameters**:
   - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
   - For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
   - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.

2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.

3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
   - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
   - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."

4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should include only the final output part, not the thinking content. This is already implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that this best practice is followed; see the sketch after this list.

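As a minimal illustration of point 4, assuming replies arrive with inline `<think>` tags (when the Jinja2 chat template is applied directly, this stripping already happens for you):

```python
import re

def strip_think(text: str) -> str:
    """Remove <think>...</think> blocks before storing a reply in history."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

# Keep only the final answer in the conversation history.
raw_reply = "<think>Count the r's one by one...</think>There are three r's in strawberries."
history = [{"role": "assistant", "content": strip_think(raw_reply)}]
print(history)  # the stored content contains no thinking block
```
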
### Citation

If you find our work helpful, feel free to give us a cite.

```
@misc{qwen3technicalreport,
      title={Qwen3 Technical Report},
      author={Qwen Team},
      year={2025},
      eprint={2505.09388},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.09388},
}
```