---
library_name: transformers
tags:
- translation
language:
- zh
- en
- fr
- pt
- es
- ja
- tr
- ru
- ar
- ko
- th
- it
- de
- vi
- ms
- id
- tl
- hi
- pl
- cs
- nl
- km
- my
- fa
- gu
- ur
- te
- mr
- he
- bn
- ta
- uk
- bo
- kk
- mn
- ug
---

# <span style="color: #7FFF7F;">Hunyuan-MT-7B GGUF Models</span>

## <span style="color: #7F7FFF;">Model Generation Details</span>

This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`c8dedc99`](https://github.com/ggerganov/llama.cpp/commit/c8dedc9999eccf7821a9fe5b29f10e8d075e2217).

---

## <span style="color: #7FFF7F;">Quantization Beyond the IMatrix</span>

I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides.

In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the `--tensor-type` option in `llama.cpp` to manually "bump" important layers to higher precision. You can see the implementation here:
👉 [Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py)

While this does increase model file size, it significantly improves precision for a given quantization level.
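
As a concrete illustration, here is a minimal sketch of what such a per-tensor override can look like, assuming a llama.cpp build whose `llama-quantize` binary supports `--tensor-type` overrides in `pattern=type` form; the file names and tensor patterns below are illustrative, not the exact ones the linked script produces:

```python
# Minimal sketch: quantize to Q4_K_M while bumping selected tensors to Q6_K.
# Assumes llama-quantize supports --tensor-type overrides (pattern=type);
# paths and tensor patterns here are illustrative only.
import subprocess

subprocess.run(
    [
        "./llama-quantize",
        "--tensor-type", "attn_v=q6_k",    # keep attention V weights at higher precision
        "--tensor-type", "ffn_down=q6_k",  # and the FFN down-projections
        "Hunyuan-MT-7B-f16.gguf",          # input: full-precision GGUF
        "Hunyuan-MT-7B-q4_k_m.gguf",       # output: mixed-precision quant
        "q4_k_m",                          # base quantization type
    ],
    check=True,
)
```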

### **I'd love your feedback: have you tried this? How does it perform for you?**

---

<a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;">
Click here to get info on choosing the right GGUF model format
</a>

---


<!--Begin Original Model Card-->


<p align="center">
 <img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
</p><p></p>

<p align="center">
 🤗&nbsp;<a href="https://huggingface.co/collections/tencent/hunyuan-mt-68b42f76d473f82798882597"><b>Hugging Face</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
 🕹️&nbsp;<a href="https://hunyuan.tencent.com/modelSquare/home/list"><b>Demo</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
 🤖&nbsp;<a href="https://modelscope.cn/collections/Hunyuan-MT-2ca6b8e1b4934f"><b>ModelScope</b></a>
</p>

<p align="center">
 🖥️&nbsp;<a href="https://hunyuan.tencent.com"><b>Official Website</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
 <a href="https://github.com/Tencent-Hunyuan/Hunyuan-MT"><b>GitHub</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
 <a href="https://www.arxiv.org/abs/2509.05209"><b>Technical Report</b></a>
</p>


## Model Introduction

The Hunyuan Translation Model comprises a translation model, Hunyuan-MT-7B, and an ensemble model, Hunyuan-MT-Chimera. The translation model translates source text into the target language, while the ensemble model integrates multiple translation outputs to produce a higher-quality result. It primarily supports mutual translation among 33 languages, including five ethnic minority languages of China.

### Key Features and Advantages

- In the WMT25 competition, the model achieved first place in 30 of the 31 language categories it entered.
- Hunyuan-MT-7B achieves industry-leading performance among models of comparable scale.
- Hunyuan-MT-Chimera-7B is the industry's first open-source translation ensemble model, elevating translation quality to a new level.
- A comprehensive training framework for translation models is proposed, spanning pretraining → cross-lingual pretraining (CPT) → supervised fine-tuning (SFT) → translation enhancement → ensemble refinement, achieving state-of-the-art (SOTA) results for models of similar size.

## Related News
* 2025.9.1 We open-sourced **Hunyuan-MT-7B** and **Hunyuan-MT-Chimera-7B** on Hugging Face.
<br>


&nbsp;

## Model Links
| Model Name | Description | Download |
| ----------- | ----------- | ----------- |
| Hunyuan-MT-7B | Hunyuan 7B translation model | 🤗 [Model](https://huggingface.co/tencent/Hunyuan-MT-7B) |
| Hunyuan-MT-7B-fp8 | Hunyuan 7B translation model, FP8 quantized | 🤗 [Model](https://huggingface.co/tencent/Hunyuan-MT-7B-fp8) |
| Hunyuan-MT-Chimera | Hunyuan 7B translation ensemble model | 🤗 [Model](https://huggingface.co/tencent/Hunyuan-MT-Chimera-7B) |
| Hunyuan-MT-Chimera-fp8 | Hunyuan 7B translation ensemble model, FP8 quantized | 🤗 [Model](https://huggingface.co/tencent/Hunyuan-MT-Chimera-7B-fp8) |

## Prompts

### Prompt Template for ZH<=>XX Translation

```
把下面的文本翻译成<target_language>,不要额外解释。

<source_text>
```

(The Chinese instruction reads: "Translate the following text into <target_language>, without additional explanation.")

### Prompt Template for XX<=>XX Translation, excluding ZH<=>XX

```
Translate the following segment into <target_language>, without additional explanation.

<source_text>
```
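
For programmatic use, here is a minimal sketch of template selection. The helper and its names are mine, not from the official repo, and the idea that the ZH template takes the target language's Chinese name (see the supported-languages table below) is my reading of the card:

```python
# Hypothetical helper: choose the right prompt template for Hunyuan-MT-7B.
# Names are illustrative, not from the official repo.
ZH_TEMPLATE = "把下面的文本翻译成{target},不要额外解释。\n\n{text}"
XX_TEMPLATE = (
    "Translate the following segment into {target}, "
    "without additional explanation.\n\n{text}"
)

def build_prompt(text: str, source_lang: str, target_lang: str,
                 target_lang_zh: str = "") -> str:
    # Use the Chinese template whenever Chinese is the source or target;
    # target_lang_zh is the target language's Chinese name (e.g. "英语").
    if "zh" in (source_lang, target_lang):
        return ZH_TEMPLATE.format(target=target_lang_zh or target_lang, text=text)
    return XX_TEMPLATE.format(target=target_lang, text=text)

print(build_prompt("It's on the house.", "en", "zh", target_lang_zh="中文"))
```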

### Prompt Template for Hunyuan-MT-Chimera-7B

````
Analyze the following multiple <target_language> translations of the <source_language> segment surrounded in triple backticks and generate a single refined <target_language> translation. Only output the refined translation, do not explain.

The <source_language> segment:
```<source_text>```

The multiple <target_language> translations:
1. ```<translated_text1>```
2. ```<translated_text2>```
3. ```<translated_text3>```
4. ```<translated_text4>```
5. ```<translated_text5>```
6. ```<translated_text6>```
````
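
A short sketch of assembling this ensemble prompt from candidate translations (function and variable names are mine, not from the official repo):

```python
# Illustrative sketch: build the Chimera ensemble prompt from N candidate
# translations. Names are mine, not from the official repo.
def build_chimera_prompt(source_text, candidates, src="English", tgt="Chinese"):
    header = (
        f"Analyze the following multiple {tgt} translations of the {src} "
        f"segment surrounded in triple backticks and generate a single "
        f"refined {tgt} translation. Only output the refined translation, "
        f"do not explain.\n"
    )
    parts = [header]
    parts.append(f"The {src} segment:\n```{source_text}```\n")
    parts.append(f"The multiple {tgt} translations:")
    # The template shows six candidates; pass however many you collected.
    for i, cand in enumerate(candidates, start=1):
        parts.append(f"{i}. ```{cand}```")
    return "\n".join(parts)

print(build_chimera_prompt(
    "It's on the house.",
    ["这是免费的。", "这是店家请客。", "这杯算我们的。"],
))
```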

&nbsp;

### Use with transformers
First, install transformers; v4.56.0 is recommended:
```SHELL
pip install transformers==4.56.0
```

The following code snippet shows how to use the transformers library to load and apply the model.

*!!! To load the FP8 model with transformers, you need to rename the `"ignored_layers"` key in `config.json` to `"ignore"` and upgrade `compressed-tensors` to version 0.11.0.*

We use `tencent/Hunyuan-MT-7B` as an example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name_or_path = "tencent/Hunyuan-MT-7B"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
# You may want to use bfloat16 and/or move the model to GPU here.
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto")

messages = [
    {"role": "user", "content": "Translate the following segment into Chinese, without additional explanation.\n\nIt’s on the house."},
]
tokenized_chat = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=False,
    return_tensors="pt"
)

outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=2048)
output_text = tokenizer.decode(outputs[0])
```

We recommend the following parameters for inference. Note that the model does not have a default system prompt.

```json
{
  "top_k": 20,
  "top_p": 0.6,
  "repetition_penalty": 1.05,
  "temperature": 0.7
}
```
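
Wiring these values into the snippet above looks like this (standard `generate()` keyword arguments; `do_sample=True` is my addition, since sampling parameters have no effect under greedy decoding):

```python
# Apply the recommended sampling parameters to the earlier example.
# Reuses model, tokenizer, and tokenized_chat from the snippet above.
outputs = model.generate(
    tokenized_chat.to(model.device),
    max_new_tokens=2048,
    do_sample=True,           # enable sampling so the settings below take effect
    top_k=20,
    top_p=0.6,
    repetition_penalty=1.05,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```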

Supported languages:
| Language | Abbr. | Chinese Name |
|---------------------|---------|-----------------|
| Chinese | zh | 中文 |
| English | en | 英语 |
| French | fr | 法语 |
| Portuguese | pt | 葡萄牙语 |
| Spanish | es | 西班牙语 |
| Japanese | ja | 日语 |
| Turkish | tr | 土耳其语 |
| Russian | ru | 俄语 |
| Arabic | ar | 阿拉伯语 |
| Korean | ko | 韩语 |
| Thai | th | 泰语 |
| Italian | it | 意大利语 |
| German | de | 德语 |
| Vietnamese | vi | 越南语 |
| Malay | ms | 马来语 |
| Indonesian | id | 印尼语 |
| Filipino | tl | 菲律宾语 |
| Hindi | hi | 印地语 |
| Traditional Chinese | zh-Hant | 繁体中文 |
| Polish | pl | 波兰语 |
| Czech | cs | 捷克语 |
| Dutch | nl | 荷兰语 |
| Khmer | km | 高棉语 |
| Burmese | my | 缅甸语 |
| Persian | fa | 波斯语 |
| Gujarati | gu | 古吉拉特语 |
| Urdu | ur | 乌尔都语 |
| Telugu | te | 泰卢固语 |
| Marathi | mr | 马拉地语 |
| Hebrew | he | 希伯来语 |
| Bengali | bn | 孟加拉语 |
| Tamil | ta | 泰米尔语 |
| Ukrainian | uk | 乌克兰语 |
| Tibetan | bo | 藏语 |
| Kazakh | kk | 哈萨克语 |
| Mongolian | mn | 蒙古语 |
| Uyghur | ug | 维吾尔语 |
| Cantonese | yue | 粤语 |


Citing Hunyuan-MT:

```bibtex
@misc{hunyuan_mt,
      title={Hunyuan-MT Technical Report},
      author={Mao Zheng and Zheng Li and Bingxin Qu and Mingyang Song and Yang Du and Mingrui Sun and Di Wang},
      year={2025},
      eprint={2509.05209},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2509.05209},
}
```

<!--End Original Model Card-->

---

# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>

Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**:

👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)


The full open-source code for the Quantum Network Monitor Service is available in my GitHub repos (repos with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models, if you want to do it yourself, in [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder).

💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4.1-mini)
- `HugLLM` (Hugging Face open-source models)
- `TestLLM` (Experimental CPU-only)

### **What I’m Testing**
I’m pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
  - Automated **Nmap security scans**
  - **Quantum-readiness checks**
  - **Network monitoring tasks**

🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads in a Hugging Face Docker space):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**). No token limit, as the cost is low.
- 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate!

### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4.1-mini**:
- **Performs very well, but unfortunately OpenAI charges per token, so token usage is limited**
- **Create custom cmd processors to run .NET code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security audits**
- **Penetration testing** (Nmap/Metasploit)

🔵 **HugLLM** – Latest open-source models:
- 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.

### 💡 **Example commands you could test**:
1. `"Give me info on my website's SSL certificate"`
2. `"Check if my server is using quantum-safe encryption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. `"Create a cmd processor to .. (whatever you want)"` Note: you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .NET code on. This is a very flexible and powerful feature. Use with caution!

### Final Word

I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI, all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.

If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.

I'm also open to job opportunities or sponsorship.

Thank you! 😊