1 ---
2 library_name: transformers
3 license: other
4 license_name: nvidia-nemotron-open-model-license
5 license_link: >-
6 https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-nemotron-open-model-license/
7 pipeline_tag: text-generation
8 language:
9 - en
10 - es
11 - fr
12 - de
13 - ja
14 - it
15 tags:
16 - nvidia
17 - pytorch
18 datasets:
19 - nvidia/Nemotron-Pretraining-Code-v1
20 - nvidia/Nemotron-CC-v2
21 - nvidia/Nemotron-Pretraining-SFT-v1
22 - nvidia/Nemotron-CC-Math-v1
23 - nvidia/Nemotron-Pretraining-Code-v2
24 - nvidia/Nemotron-Pretraining-Specialized-v1
25 - nvidia/Nemotron-CC-v2.1
26 - nvidia/Nemotron-CC-Code-v1
27 - nvidia/Nemotron-Pretraining-Dataset-sample
28 - nvidia/Nemotron-Competitive-Programming-v1
29 - nvidia/Nemotron-Math-v2
30 - nvidia/Nemotron-Agentic-v1
31 - nvidia/Nemotron-Math-Proofs-v1
32 - nvidia/Nemotron-Instruction-Following-Chat-v1
33 - nvidia/Nemotron-Science-v1
34 - nvidia/Nemotron-3-Nano-RL-Training-Blend
35 track_downloads: true
36 ---
37
38 # NVIDIA-Nemotron-3-Nano-30B-A3B-BF16
39
40 <div align="center" style="line-height: 1;">
41 <a href="https://build.nvidia.com/nvidia/nemotron-3-nano-30b-a3b" target="_blank" style="margin: 2px;">
42 <img alt="Chat" src="https://img.shields.io/badge/🤖Chat-Nemotron_3_Nano-536af5?color=76B900&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
43 </a>
44 <a href="https://arxiv.org/abs/2512.20848" target="_blank" style="margin: 2px;">
<img alt="Paper" src="https://img.shields.io/badge/📝Paper-Read Now!-536af5?color=76B900&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
46 </a>
47 <a href="https://huggingface.co/collections/nvidia/nemotron-pre-training-datasets" target="_blank" style="margin: 2px;">
48 <img alt="Pre-Training Datasets" src="https://img.shields.io/badge/🗄️_Pre--Training_Datasets-Available_Here-76B900?logoColor=white" style="display: inline-block; vertical-align: middle;"/>
49 </a>
50 <a href="https://huggingface.co/collections/nvidia/nemotron-post-training-v3" target="_blank" style="margin: 2px;">
51 <img alt="Post-Training Datasets" src="https://img.shields.io/badge/🗄️_Post--Training_Datasets-Available_Here-76B900?logoColor=white" style="display: inline-block; vertical-align: middle;"/>
52 </a>
53 </div>
54 <div align="center" style="line-height: 1;">
55 <a href="https://developer.nvidia.com/nemotron" target="_blank" style="margin: 2px;">
56 <img alt="Homepage" src="https://img.shields.io/badge/🏠Nemotron Developer Page-Learn More Here!-536af5?color=76B900&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
57 </a>
58 <a href="https://discord.gg/9xpKQtVvrk" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-NVIDIA%20AI%20Developer-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
60 </a>
61 </div>
62
63 <div align="center" style="line-height: 1;">
64 <a href="https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-nemotron-open-model-license/" style="margin: 2px;">
65 <img alt="License" src="https://img.shields.io/badge/License-NVIDIA Open Model License-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
66 </a>
67 </div>
68
69
70 ![](./accuracy_chart.png)
71
72 ## Model Overview
73
74 **Model Developer:** NVIDIA Corporation
75
76 **Model Dates:**
77
78 September 2025 \- December 2025
79
80 **Data Freshness:**
81
82 * The post-training data has a cutoff date of November 28, 2025\.
83 * The pre-training data has a cutoff date of June 25, 2025\.
84
85 ## Description
86
87 Nemotron-3-Nano-30B-A3B-BF16 is a large language model (LLM) trained from scratch by NVIDIA, and designed as a unified model for both reasoning and non-reasoning tasks. It responds to user queries and tasks by first generating a reasoning trace and then concluding with a final response. The model's reasoning capabilities can be configured through a flag in the chat template. If the user prefers the model to provide its final answer without intermediate reasoning traces, it can be configured to do so, albeit with a slight decrease in accuracy for harder prompts that require reasoning. Conversely, allowing the model to generate reasoning traces first generally results in higher-quality final solutions to queries and tasks.
88
The model employs a hybrid Mixture-of-Experts (MoE) architecture, consisting of 23 Mamba-2 layers and 23 MoE layers, along with 6 attention layers. Each MoE layer includes 128 routed experts plus 1 shared expert, with 6 experts activated per token. The model has 3.5B active parameters and 30B parameters in total.
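
For intuition, below is a minimal, self-contained sketch of top-k routing with a shared expert as described above (128 routed experts, 1 shared expert, 6 active per token). The layer sizes and routing details are illustrative assumptions, not the model's actual implementation.

```py
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    """Illustrative top-k MoE layer: 128 routed experts + 1 shared expert, 6 active per token."""
    def __init__(self, d_model=64, d_ff=128, n_experts=128, top_k=6):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        make_expert = lambda: nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
        self.experts = nn.ModuleList(make_expert() for _ in range(n_experts))
        self.shared_expert = make_expert()  # always applied, regardless of routing

    def forward(self, x):  # x: (num_tokens, d_model)
        logits = self.router(x)                                # router score per expert
        scores, idx = torch.topk(logits, self.top_k, dim=-1)   # keep the 6 best experts per token
        weights = F.softmax(scores, dim=-1)                    # normalize over the selected experts
        routed = torch.zeros_like(x)
        for t in range(x.size(0)):                             # naive per-token dispatch, for clarity
            for k in range(self.top_k):
                routed[t] += weights[t, k] * self.experts[int(idx[t, k])](x[t])
        return self.shared_expert(x) + routed

out = ToyMoELayer()(torch.randn(4, 64))  # 4 tokens -> output of shape (4, 64)
```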
90
91 The supported languages include: English, German, Spanish, French, Italian, and Japanese. Improved using Qwen.
92
93 This model is ready for commercial use.
94
95 ### What is Nemotron?
96
97 NVIDIA Nemotron™ is a family of open models with open weights, training data, and recipes, delivering leading efficiency and accuracy for building specialized AI agents.
98
99 To get started, you can use [our quickstart guide](#quick-start-guide) below.
100
101 ## License/Terms of Use
102
103 Governing Terms: Use of this model is governed by the [NVIDIA Nemotron Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-nemotron-open-model-license/).
104
105 ### Reasoning Benchmark Evaluations
106
107 We evaluated our model on the following benchmarks:
108
109 | Task | NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 | Qwen3-30B-A3B-Thinking-2507 | GPT-OSS-20B |
110 | ----- | :---- | :---- | :---- |
111 | **General Knowledge** | | | |
112 | MMLU-Pro | 78.3 | **80.9** | 75.0 |
113 | **Reasoning** | | | |
114 | AIME25 (no tools) | 89.1 | 85.0 | **91.7** |
115 | AIME25 (with tools) | **99.2** | \- | 98.7 |
116 | GPQA (no tools) | 73.0 | **73.4** | 71.5 |
117 | GPQA (with tools) | **75.0** | \- | 74.2 |
| LiveCodeBench (v6 2024-08–2025-05) | **68.3** | 66.0 | 61.0 |
119 | SciCode (subtask) | 33.3 | 33.0 | **34.0** |
120 | HLE (no tools) | 10.6 | 9.8 | **10.9** |
121 | HLE (with tools) | 15.5 | \- | **17.3** |
122 | MiniF2F pass@1 | **50.0** | 5.7 | 12.1 |
123 | MiniF2F pass@32 | **79.9** | 16.8 | 43.0 |
124 | **Agentic** | | | |
125 | Terminal Bench (hard subset) | 8.5 | 5.0 | 6.0 |
126 | SWE-Bench (OpenHands) | **38.8** | 22.0 | 34.0 |
127 | TauBench V2 (Airline) | 48.0 | **58.0** | 38.0 |
128 | TauBench V2 (Retail) | 56.9 | **58.8** | 38.0 |
129 | TauBench V2 (Telecom) | 42.2 | 26.3 | **49.7** |
130 | TauBench V2 (Average) | **49.0** | 47.7 | 48.7 |
131 | BFCL v4 | **53.8** | 46.4\* | \- |
132 | **Chat & Instruction Following** | | | |
133 | IFBench (prompt) | **71.5** | 51.0 | 65.0 |
134 | Scale AI Multi Challenge | 38.5 | **44.8** | 33.8 |
135 | Arena-Hard-V2 (Hard Prompt) | **72.1** | 49.6\* | 71.2\* |
| Arena-Hard-V2 (Creative Writing) | 63.2 | **66.0\*** | 25.9\* |
137 | Arena-Hard-V2 (Average) | **67.7** | 57.8 | 48.6 |
138 | **Long Context** | | | |
139 | AA-LCR | 35.9 | **59.0** | 34.0 |
140 | RULER-100@256k | **92.9** | 89.4 | \- |
141 | RULER-100@512k | **91.3** | 84.0 | \- |
142 | RULER-100@1M | **86.3** | 77.5 | \- |
143 | **Multilingual** | | | |
144 | MMLU-ProX (avg over langs) | 59.5 | **77.6\*** | 69.1\* |
145 | WMT24++ (en-\>xx) | **86.2** | 85.6 | 83.2 |
146
All evaluation results were collected via the [Nemo Evaluator SDK](https://github.com/NVIDIA-NeMo/Evaluator) and [Nemo Skills](https://github.com/NVIDIA-NeMo/Skills). The open-source Nemo Skills container, packaged via NVIDIA's Nemo Evaluator SDK and used for these evaluations, can be found [here](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/eval-factory/containers/nemo_skills?version=25.11). In addition to Nemo Skills, the evaluations also used dedicated packaged containers for Tau-2 Bench, ArenaHard v2, and AA-LCR. A reproducibility tutorial along with all configs can be found in the [Nemo Evaluator SDK examples](https://github.com/NVIDIA-NeMo/Evaluator/tree/main/packages/nemo-evaluator-launcher/examples/nemotron/nano-v3-reproducibility.md). The configs are also available in this HF repo [here](./nemo-evaluator-launcher-configs/local_nvidia_nemotron_3_nano_30b_a3b.yaml). \* denotes accuracy numbers measured by us.
148
149
150 ### Deployment Geography: Global
151
152 ### Use Case
153
NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 is a general-purpose reasoning and chat model intended to be used in English and coding languages. Other supported languages include Spanish, French, German, Japanese, and Italian. This model is intended to be used by developers designing AI agent systems, chatbots, RAG systems, and other AI-powered applications. It is also suitable for typical instruction-following tasks.
155
156 ### Release Date
157
158 December 15, 2025 via [Hugging Face](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16)
159
160 ## Reference(s)
161
162 * [NVIDIA Nemotron 3 model family on Hugging Face](https://huggingface.co/collections/nvidia/nvidia-nemotron-v3)
163 * [NVIDIA Nemotron 2 model family on Hugging Face](https://huggingface.co/collections/nvidia/nvidia-nemotron-v2)
164 * [Nemotron 3 Nano: Open, Efficient Mixture-of-Experts Hybrid Mamba-Transformer Model for Agentic Reasoning](https://arxiv.org/abs/2512.20848)
165 * [NVIDIA Nemotron 3 White Paper](https://arxiv.org/abs/2512.20856)
166
167 ## Model Architecture
168
169 - **Architecture Type:** Mamba2-Transformer Hybrid Mixture of Experts (MoE)
170 - **Network Architecture:** Nemotron Hybrid MoE
171 - **Number of model parameters:** 30B
172
173 ## Model Design
174
The model was trained on 25T tokens with a batch size of 3072, using the Warmup-Stable-Decay (WSD) learning rate schedule with 8B tokens of learning rate warmup, a peak learning rate of 1e-3, and a minimum learning rate of 1e-5. There are a total of 52 layers: 23 MoE layers, 23 Mamba-2 layers, and 6 layers that use grouped query attention (GQA) with 2 groups. Each MoE layer includes 128 routed experts plus 1 shared expert, with 6 experts activated per token.
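
For reference, here is a minimal sketch of the WSD schedule described above: linear warmup over 8B tokens to the 1e-3 peak, a constant stable phase, then decay to the 1e-5 minimum. The length and shape of the decay phase are assumptions for illustration; they are not specified in this card.

```py
# Minimal WSD learning-rate schedule sketch; decay_tokens and the linear decay shape are assumptions.
def wsd_lr(tokens_seen: float,
           total_tokens: float = 25e12,
           warmup_tokens: float = 8e9,
           decay_tokens: float = 2.5e12,   # assumed length of the final decay phase
           peak_lr: float = 1e-3,
           min_lr: float = 1e-5) -> float:
    decay_start = total_tokens - decay_tokens
    if tokens_seen < warmup_tokens:                        # warmup: ramp up to the peak LR
        return peak_lr * tokens_seen / warmup_tokens
    if tokens_seen < decay_start:                          # stable: hold at the peak LR
        return peak_lr
    frac = (tokens_seen - decay_start) / decay_tokens      # decay: ramp down to the minimum LR
    return peak_lr + (min_lr - peak_lr) * min(frac, 1.0)

print(wsd_lr(4e9), wsd_lr(10e12), wsd_lr(25e12))  # 5e-4, 1e-3, 1e-5
```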
176
177 ## Training Methodology
178
179 Stage 1: Pre-Training
180
181 * [NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16) model was pre-trained using crawled and synthetic code, math, science, and general knowledge data. All datasets are disclosed in the [Training, Testing, and Evaluation Datasets](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16#training-testing-and-evaluation-datasets) section of this document. Major portions of the pre-training corpus are released in the [Nemotron-Pre-Training-Datasets](https://huggingface.co/collections/nvidia/nemotron-pre-training-datasets) collection.
182 * Software used for pre-training: [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)
183
184 Stage 2: Supervised Fine-Tuning
185
186 * The model was further fine-tuned on synthetic code, math, science, tool calling, instruction following, structured outputs, and general knowledge data. All datasets are disclosed in the [Training, Testing, and Evaluation Datasets](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16#training-testing-and-evaluation-datasets) section of this document. Major portions of the fine-tuning corpus are released in the [Nemotron-Post-Training-v3](https://huggingface.co/collections/nvidia/nemotron-post-training-v3) collection. [Data Designer](https://github.com/NVIDIA-NeMo/DataDesigner) is one of the libraries used to prepare these corpora.
188
189 Stage 3: Reinforcement Learning
190
191 * The model underwent multi-environment reinforcement learning using synchronous GRPO (Group Relative Policy Optimization) across math, code, science, instruction following, multi-step tool use, multi-turn conversations, and structured output environments. Conversational quality was further refined through RLHF using a [generative reward model](https://huggingface.co/nvidia/Qwen3-Nemotron-235B-A22B-GenRM). All datasets are disclosed in the *Training, Testing, and Evaluation Datasets* section of this document. The RL environments and datasets are released as part of [NeMo Gym](https://github.com/NVIDIA-NeMo/Gym).
192 * Software used for reinforcement learning: [NeMo RL](https://github.com/NVIDIA-NeMo/RL), [NeMo Gym](https://github.com/NVIDIA-NeMo/Gym)
193
The NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 model is the result of the above work.
195
The end-to-end training recipe is available in the [NVIDIA Nemotron Developer Repository](https://github.com/NVIDIA-NeMo/Nemotron). Evaluation results can be replicated using the [NeMo Evaluator SDK](https://github.com/NVIDIA-NeMo/Evaluator). [Data Designer](https://github.com/NVIDIA-NeMo/DataDesigner) is one of the libraries used to prepare the pre- and post-training datasets. More details on the datasets and synthetic data generation methods can be found in the technical report [NVIDIA Nemotron 3 Nano](https://arxiv.org/abs/2512.20848).
197
198 ## Input
199
200 - **Input Type(s):** Text
201
202 - **Input Format(s):** String
203
204 - **Input Parameters:** One-Dimensional (1D): Sequences
205
206 - **Maximum input size:** 1M tokens
207
208 - **Other Properties Related to Input:** Supported languages include: English, Spanish, French, German, Japanese, Italian
209
210 ## Output
211
212 - **Output Type(s):** Text
213
214 - **Output Format:** String
215
216 - **Output Parameters:** One-Dimensional (1D): Sequences
217
218 - **Maximum output size:** 1M tokens
219
220 Our AI models are designed and optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
221
222 ## Software Integration
223
224 - Runtime Engine(s): NeMo 25.11.01
225 - Supported Hardware Microarchitecture Compatibility: NVIDIA H100-80GB, NVIDIA A100
226 - Operating System(s): Linux
227
228 The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.
229
230 ## Quick Start Guide
231
232 ### Use it with Transformers
233
The snippet below shows how to use this model with Hugging Face Transformers (tested on version 4.57.3). We recommend using [NeMo Framework 25.11.01](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo/tags?version=25.11.01) to ensure all required libraries are available.
235
Please note that the model supports a context size of up to 1M tokens; the default context size in the Hugging Face configuration is 256k because larger contexts have higher VRAM requirements.
237
```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16")
model = AutoModelForCausalLM.from_pretrained(
    "nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto"
)
```
251
```py
messages = [
    {"role": "user", "content": "Write a haiku about GPUs"},
]

tokenized_chat = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(
    tokenized_chat,
    max_new_tokens=1024,
    temperature=1.0,
    top_p=1.0,
    eos_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(outputs[0]))
```
273
274 `temperature=1.0` and `top_p=1.0` are recommended for reasoning tasks, while `temperature=0.6` and `top_p=0.95` are recommended for tool calling.
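
For example, to apply the tool-calling settings with the Transformers snippet above, pass them to `generate()`. Note that `do_sample=True` is included here as a precaution, in case the model's generation config does not already enable sampling.

```py
# Tool-calling sampling settings (reasoning tasks instead use temperature=1.0, top_p=1.0).
outputs = model.generate(
    tokenized_chat,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    eos_token_id=tokenizer.eos_token_id
)
```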
275
If you’d like to turn reasoning off, add `enable_thinking=False` to `apply_chat_template()`. By default, `enable_thinking` is set to `True`.
277
```py
tokenized_chat = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    enable_thinking=False,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Use Greedy Search for reasoning off
outputs = model.generate(
    tokenized_chat,
    max_new_tokens=32,
    do_sample=False,
    num_beams=1,
    eos_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(outputs[0]))
```
298
299 ### Use it with vLLM
300
301 For more detailed information on how to use the model with vLLM, please see [this cookbook](https://github.com/NVIDIA-NeMo/Nemotron/blob/main/usage-cookbook/Nemotron-3-Nano/vllm\_cookbook.ipynb).
302 If you are on Jetson Thor or DGX Spark, please use [this vllm container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/vllm?version=25.12.post1-py3).
303
304 ```
305 pip install -U "vllm>=0.12.0"
306 ```
307
308 Download the custom parser from the Hugging Face repository.
309
310 ```
311 wget https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16/resolve/main/nano_v3_reasoning_parser.py
312 ```
313
314 Launch a vLLM server using the custom parser.
315
316 ```
317 vllm serve nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 \
318 --served-model-name model \
319 --max-num-seqs 8 \
320 --tensor-parallel-size 1 \
321 --max-model-len 262144 \
322 --port 8000 \
323 --trust-remote-code \
324 --enable-auto-tool-choice \
325 --tool-call-parser qwen3_coder \
326 --reasoning-parser-plugin nano_v3_reasoning_parser.py \
327 --reasoning-parser nano_v3
328 ```
329
330 In the example above, we use a context length of 256k. You can increase the context size up to 1M to support longer contexts.
331 To enable this, set the `VLLM_ALLOW_LONG_MAX_MODEL_LEN=1` environment variable as shown below:
332
333 ```
334 VLLM_ALLOW_LONG_MAX_MODEL_LEN=1 \
335 vllm serve nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 \
336 --served-model-name model \
337 --max-num-seqs 8 \
338 --tensor-parallel-size 1 \
339 --max-model-len 1M \
340 --port 8000 \
341 --trust-remote-code \
342 --enable-auto-tool-choice \
343 --tool-call-parser qwen3_coder \
344 --reasoning-parser-plugin nano_v3_reasoning_parser.py \
345 --reasoning-parser nano_v3
346 ```
347
Here is example client code for vLLM. By default, the endpoint has reasoning enabled. We recommend setting a high value (e.g., 10,000) for `max_tokens`.
349
350 ```shell
351 curl http://localhost:8000/v1/chat/completions \
352 -H "Content-Type: application/json" \
353 -d '{
354 "model": "model",
355 "messages":[{"role": "user", "content": "Write a haiku about GPUs"}],
356 "max_tokens": 10000
357 }'
358 ```
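
Because the server above was launched with `--enable-auto-tool-choice` and `--tool-call-parser qwen3_coder`, tool calls can also be requested through the standard OpenAI `tools` parameter. Below is a minimal sketch using the `openai` Python client; the `get_weather` tool is a hypothetical example, and the sampling settings follow the tool-calling recommendation above.

```py
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Hypothetical tool definition, for illustration only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="model",  # matches --served-model-name above
    messages=[{"role": "user", "content": "What is the weather in Santa Clara?"}],
    tools=tools,
    temperature=0.6,  # recommended for tool calling
    top_p=0.95,
    max_tokens=10000,
)
print(response.choices[0].message.tool_calls)
```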
359
If you’d like to turn reasoning off with vLLM, you can do the following.

vLLM OpenAI curl request:
362
363 ```shell
364 curl http://localhost:8000/v1/chat/completions \
365 -H "Content-Type: application/json" \
366 -d '{
367 "model": "model",
368 "messages":[{"role": "user", "content": "Write a haiku about GPUs"}],
369 "chat_template_kwargs": {"enable_thinking": false}
370 }'
371 ```
372
373 vLLM OpenAI client:
374
375 ```py
376 response = client.chat.completions.create(model=model, messages=messages, extra_body={"chat_template_kwargs": {"enable_thinking": False}})
377 ```
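
For completeness, here is a self-contained version of the call above, assuming the vLLM server launched earlier (served model name `model`) and the `openai` Python package:

```py
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="model",  # matches --served-model-name above
    messages=[{"role": "user", "content": "Write a haiku about GPUs"}],
    max_tokens=10000,
    extra_body={"chat_template_kwargs": {"enable_thinking": False}},  # reasoning off
)
print(response.choices[0].message.content)
```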
378
379 ### Use it with TRT-LLM
380
381 For more detailed information on how to use the model with TRT-LLM, please see [this cookbook](https://github.com/NVIDIA-NeMo/Nemotron/blob/main/usage-cookbook/Nemotron-3-Nano/trtllm\_cookbook.ipynb).
382
383 ```
384 # nano_v3 example yaml is https://github.com/NVIDIA/TensorRT-LLM/blob/main/examples/auto_deploy/nano_v3.yaml
385 trtllm-serve <model_path> \
386 --backend _autodeploy \
387 --trust_remote_code \
388 --reasoning_parser nano-v3 \
389 --tool_parser qwen3_coder \
390 --extra_llm_api_options nano_v3.yaml
391 ```
392
393 ### Use it with SGLang
394
395 For more detailed information on how to use the model with SGLang, please see [this cookbook](https://github.com/NVIDIA-NeMo/Nemotron/blob/main/usage-cookbook/Nemotron-3-Nano/sglang\_cookbook.ipynb).
396
397 ```
398 python3 -m sglang.launch_server --model-path nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 \
399 --trust-remote-code \
400 --tp 1 \
401 --attention-backend flashinfer \
402 --tool-call-parser qwen3_coder \
403 --reasoning-parser nano_v3
404 ```
405
406 #### Using Budget Control
407
The thinking budget allows developers to keep accuracy high while meeting response-time targets, which is especially crucial for customer support, autonomous agent steps, and edge devices where every millisecond counts.
409
410 With budget control, you can set a limit for internal reasoning:
411
* `reasoning_budget`: a threshold after which generation attempts to end the reasoning trace at the next newline. If no newline is encountered within 500 tokens of the threshold, the reasoning trace is cut off abruptly at `reasoning_budget + 500` tokens.
413
414 > NOTE: This client will work with any OpenAI API compatible endpoint.
415
416 Client for supporting budget control:
417
```py
from typing import Any, Dict, List

import openai
from transformers import AutoTokenizer


class ThinkingBudgetClient:
    def __init__(self, base_url: str, api_key: str, tokenizer_name_or_path: str):
        self.base_url = base_url
        self.api_key = api_key
        self.tokenizer = AutoTokenizer.from_pretrained(tokenizer_name_or_path)
        self.client = openai.OpenAI(base_url=self.base_url, api_key=self.api_key)

    def chat_completion(
        self,
        model: str,
        messages: List[Dict[str, Any]],
        reasoning_budget: int = 512,
        max_tokens: int = 1024,
        **kwargs,
    ) -> Dict[str, Any]:
        assert (
            max_tokens > reasoning_budget
        ), f"thinking budget must be smaller than maximum new tokens. Given {max_tokens=} and {reasoning_budget=}"

        # 1. first call chat completion to get reasoning content
        response = self.client.chat.completions.create(
            model=model, messages=messages, max_tokens=reasoning_budget, **kwargs
        )
        content = response.choices[0].message.content

        reasoning_content = content
        if "</think>" not in reasoning_content:
            # reasoning content is too long, closed with a period (.)
            reasoning_content = f"{reasoning_content}.\n</think>\n\n"
        reasoning_tokens_len = len(
            self.tokenizer.encode(reasoning_content, add_special_tokens=False)
        )
        remaining_tokens = max_tokens - reasoning_tokens_len
        assert (
            remaining_tokens > 0
        ), f"remaining tokens must be positive. Given {remaining_tokens=}. Increase the max_tokens or lower the reasoning_budget."

        # 2. append reasoning content to messages and call completion
        messages.append({"role": "assistant", "content": reasoning_content})
        prompt = self.tokenizer.apply_chat_template(
            messages,
            tokenize=False,
            continue_final_message=True,
        )
        response = self.client.completions.create(
            model=model, prompt=prompt, max_tokens=remaining_tokens, **kwargs
        )

        response_data = {
            "reasoning_content": reasoning_content.strip().strip("</think>").strip(),
            "content": response.choices[0].text,
            "finish_reason": response.choices[0].finish_reason,
        }
        return response_data
```
485
Calling the server with a budget (restricted to 32 tokens here as an example):
487
```py
tokenizer_name_or_path = "nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16"
client = ThinkingBudgetClient(
    base_url="http://localhost:8000/v1",  # Nemotron 3 Nano deployed in thinking mode
    api_key="EMPTY",
    tokenizer_name_or_path=tokenizer_name_or_path,
)

result = client.chat_completion(
    model="nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16",
    messages=[
        {"role": "system", "content": "You are a helpful assistant. /think"},
        {"role": "user", "content": "What is 2+2?"},
    ],
    reasoning_budget=32,
    max_tokens=512,
    temperature=1.0,
    top_p=1.0,
)
print(result)
```
510
511 You should see output similar to the following:
512
513 ```
514 {'reasoning_content': "Okay, the user asked, What is 2+2? Let me think. Well, 2 plus 2 equals 4. That's a basic.", 'content': '2 + 2 equals **4**.\n', 'finish_reason': 'stop'}
515 ```
516
517 ## Model Version(s)
518
519 - v1.0
520
521 # Training, Testing, and Evaluation Datasets
522
523 **Data Modality:** Text
**Total size:** 10,648,823,153,919 tokens
525 **Total number of datasets:** 141
526 **Dataset partition:** *Training \[100%\], testing \[0%\], validation \[0%\]*
527 **Time period for training data collection:** 2013 to May 1, 2025
528 **Time period for testing data collection:** 2013 to May 1, 2025
529 **Time period for validation data collection:** 2013 to May 1, 2025
530 **Data Collection Method by dataset:** Hybrid: Automated, Human, Synthetic
**Labeling Method by dataset:** Hybrid: Automated, Human, Synthetic
532
NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 is pre-trained on a large corpus of high-quality curated and synthetically generated data. It is trained on English as well as 19 other natural languages and 43 programming languages. Our sources cover a variety of document types such as webpages, dialogue, articles, and other written materials. The corpus spans domains including legal, math, science, finance, and more. We also include a small portion of question-answering and alignment-style data to improve model accuracy. The model was trained for approximately 25 trillion tokens.
534
The post-training corpus for NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 consists of high-quality curated and synthetically generated data. Primary languages used for post-training include English, German, Spanish, French, Italian, and Japanese.
536
537 These datasets, such as FinePDFs, EssentialWeb, HotpotQA, SQuAD, and HelpSteer3, do not collectively or exhaustively represent all demographic groups (and proportionally therein). For instance, these datasets do not contain explicit mentions of demographic classes such as age, gender, or ethnicity in 64-99% of samples, depending on the source. In the subset where such terms are present, document-based datasets (FinePDFs and EssentialWeb) contain representational skews, such as references to "male" outnumbering those to "female", and mentions of "White" as the most frequent among ethnic identifiers (comprising 43-44% of ethnicity mentions). To mitigate these imbalances, we recommend considering evaluation techniques such as bias audits, fine-tuning with demographically balanced datasets, and mitigation strategies like counterfactual data augmentation to align with the desired model behavior. This evaluation used a 3,000-sample subset per dataset, identified as the optimal threshold for maximizing embedder accuracy.
538
539 During post-training, we generate synthetic data by distilling trajectories, solutions, and translations from strong teacher models and agent systems, often grounded in real tasks or documents and aggressively filtered for quality. For math, code, and science, we start from curated problem sets and use open source permissive models such as GPT-OSS-120B to produce step-by-step reasoning traces, candidate solutions, best-of-n selection traces, and verified CUDA kernels. For long-context and science, we build synthetic QA and reasoning data by retrieving passages from long documents, generating MCQ/OpenQA questions and answers, and paraphrasing them into multiple prompt/response formats to ensure diversity. Across all pipelines we stack automated verification—compilers, numerical checks, language identification—to ensure our data is high quality.
540
541 For all domains, we apply a unified data filtering pipeline to ensure that only high-quality, license-compliant, and verifiable samples are used for post-training. We first discard malformed examples using structural checks (e.g., missing tool definitions when tool calls are present). We then aggressively filter reasoning traces exhibiting pathological repetition, such as repeated n-grams within a sliding window or across the entire trajectory, which we found to be a strong indicator of malformed or low-quality reasoning. Finally, based on internal audits of synthetically generated datasets, we observed that some teacher models occasionally produce reasoning traces and final responses that implicitly align with specific political entities or promote nationalistic narratives. To mitigate this, we apply targeted keyword- and regex-based filters and remove all trajectories matching such behavior.
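
As an illustration of the repetition check described above, here is a minimal sketch of an n-gram repetition filter; the n-gram size and threshold are placeholder values, not the ones used in the actual pipeline.

```py
from collections import Counter

def has_pathological_repetition(text: str, n: int = 8, max_repeats: int = 4) -> bool:
    """Flag a reasoning trace if any word n-gram repeats more than `max_repeats` times."""
    words = text.split()
    if len(words) < n:
        return False
    ngrams = Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    return max(ngrams.values()) > max_repeats

# Example: a trace that loops on the same sentence would be filtered out.
looping = "let me check the answer again " * 20
print(has_pathological_repetition(looping))  # True
```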
542
543 Alongside the model, we release our final [pre-training](https://huggingface.co/collections/nvidia/nemotron-pre-training-datasets) and [post-training](https://huggingface.co/collections/nvidia/nemotron-post-training-v3) data, as outlined in this section. For ease of analysis, there is a sample set that is ungated. For all remaining code, math and multilingual data, gating and approval is required, and the dataset is permissively licensed for model training purposes.
544
545 More details on the datasets and synthetic data generation methods can be found in the technical report [NVIDIA Nemotron 3 Nano](https://arxiv.org/abs/2512.20848).
546
547 | Dataset | Collection Period |
548 | :---- | :---- |
549 | [GSM8K](https://github.com/openai/grade-school-math) | 4/23/2025 |
550 | [CC-NEWS](https://commoncrawl.org/blog/news-dataset-available) | 4/23/2025 |
551 | [Common Crawl](https://commoncrawl.org/) | 4/23/2025 |
552 | [Wikimedia](https://dumps.wikimedia.org/) | 4/23/2025 |
553 | [Bespoke-Stratos-17k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k) | 4/23/2025 |
554 | [tigerbot-kaggle-leetcodesolutions-en-2k](https://huggingface.co/datasets/TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k) | 4/23/2025 |
555 | [glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2) | 4/23/2025 |
556 | [APIGen Function-Calling](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k) | 4/23/2025 |
557 | [LMSYS-Chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) | 4/23/2025 |
558 | [Open Textbook Library \- CC BY-SA & GNU subset](https://open.umn.edu/opentextbooks/textbooks/) and [OpenStax \- CC BY-SA subset](https://openstax.org/) | 4/23/2025 |
559 | [Advanced Reasoning Benchmark](https://github.com/TheDuckAI/arb), [tigerbot-kaggle-leetcodesolutions-en-2k](https://huggingface.co/datasets/TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k), [PRM800K](https://github.com/openai/prm800k), and [SciBench](https://github.com/mandyyyyii/scibench) | 4/23/2025 |
560 | [FineWeb-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) | 4/23/2025 |
561 | [Court Listener](https://www.courtlistener.com/help/api/bulk-data/) | Legacy Download |
562 | [peS2o](https://huggingface.co/datasets/allenai/peS2o) | Legacy Download |
563 | [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math) | Legacy Download |
564 | [BioRxiv](https://www.biorxiv.org/tdm) | Legacy Download |
565 | [PMC Open Access Subset](https://pmc.ncbi.nlm.nih.gov/tools/openftlist/) | Legacy Download |
566 | [OpenWebText2](https://openwebtext2.readthedocs.io/en/latest/) | Legacy Download |
567 | [Stack Exchange Data Dump](https://archive.org/details/stackexchange) | Legacy Download |
568 | [PubMed Abstracts](https://github.com/thoppe/The-Pile-PubMed) | Legacy Download |
569 | [NIH ExPorter](https://exporter.nih.gov/ExPORTER_Catalog.aspx) | Legacy Download |
570 | [arXiv](https://info.arxiv.org/help/bulk_data/index.html) | Legacy Download |
571 | [BigScience Workshop Datasets](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#datasets) | Legacy Download |
572 | [Reddit Dataset](https://files.pushshift.io/reddit/) | Legacy Download |
573 | [SEC's Electronic Data Gathering, Analysis, and Retrieval (EDGAR)](https://www.sec.gov/search-filings) | Legacy Download |
574 | [Advanced Mathematical Problem Solving](https://github.com/hendrycks/math?tab=readme-ov-file) | Legacy Download |
575 | [MathPile](https://github.com/GAIR-NLP/MathPile/) | Legacy Download |
576 | [NuminaMath CoT](https://huggingface.co/datasets/AI-MO/NuminaMath-CoT) | Legacy Download |
577 | [PMC Article](https://pmc.ncbi.nlm.nih.gov/tools/textmining/) | Legacy Download |
578 | [FLAN](https://github.com/google-research/FLAN) | Legacy Download |
579 | [Advanced Reasoning Benchmark](https://github.com/TheDuckAI/arb) | Legacy Download |
580 | [SciBench](https://github.com/mandyyyyii/scibench) | Legacy Download |
581 | [WikiTableQuestions](https://huggingface.co/datasets/wikitablequestions) | Legacy Download |
582 | [FinQA](https://finqasite.github.io/) | Legacy Download |
583 | [Riddles](https://github.com/crawsome/riddles) | Legacy Download |
584 | [Problems in Elementary Mathematics for Home Study](https://archive.org/details/AntonovVygodskyNikitinSankinProblemsInElementaryMathematicsForHomeStudyMir1982) | Legacy Download |
585 | [MedMCQA](https://huggingface.co/datasets/openlifescienceai/medmcqa) | Legacy Download |
586 | [Cosmos QA](https://huggingface.co/datasets/allenai/cosmos_qa) | Legacy Download |
587 | [MCTest](https://huggingface.co/datasets/sagnikrayc/mctest) | Legacy Download |
588 | [AI2's Reasoning Challenge](https://huggingface.co/datasets/ai2_arc) | Legacy Download |
589 | [OpenBookQA](https://github.com/allenai/OpenBookQA) | Legacy Download |
590 | [MMLU Auxiliary Train](https://huggingface.co/datasets/cais/mmlu/viewer/all/auxiliary_train) | Legacy Download |
591 | [social-chemestry-101](https://huggingface.co/datasets/tasksource/social-chemestry-101) | Legacy Download |
592 | [Moral Stories](https://huggingface.co/datasets/demelin/moral_stories) | Legacy Download |
593 | [The Common Pile v0.1](https://huggingface.co/common-pile) | Legacy Download |
594 | [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath) | Legacy Download |
595 | [MegaMath](https://huggingface.co/datasets/LLM360/MegaMath) | Legacy Download |
597 | [MultiverseMathHard](https://huggingface.co/datasets/Nexusflow/MultiverseMathHard) | 10/2/2025 |
598 | [SWE-Gym](https://huggingface.co/datasets/SWE-Gym/SWE-Gym) | 10/2/2025 |
599 | [WorkBench](https://github.com/olly-styles/WorkBench/tree/main/data/raw) | 10/2/2025 |
600 | [WildChat-1M](https://huggingface.co/datasets/allenai/WildChat-1M) | 10/2/2025 |
601 | [OpenCodeReasoning-2](https://huggingface.co/datasets/nvidia/OpenCodeReasoning-2) | 10/2/2025 |
602 | [HelpSteer3](https://huggingface.co/datasets/nvidia/HelpSteer3) | 10/2/2025 |
603 | [opc-sft-stage2](https://huggingface.co/datasets/OpenCoder-LLM/opc-sft-stage2) | 10/2/2025 |
604 | [Big-Math-RL-Verified](https://huggingface.co/datasets/SynthLabsAI/Big-Math-RL-Verified) | 10/2/2025 |
605 | [NuminaMath CoT](https://huggingface.co/datasets/AI-MO/NuminaMath-CoT) | 10/2/2025 |
606 | [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA) | 10/2/2025 |
607 | [simple-arithmetic-problems](https://huggingface.co/datasets/garrethlee/simple-arithmetic-problems) | 10/2/2025 |
608 | [arithmetic](https://huggingface.co/datasets/EleutherAI/arithmetic) | 10/2/2025 |
609 | [Skywork-OR1-RL-Data](https://huggingface.co/datasets/Skywork/Skywork-OR1-RL-Data) | 10/2/2025 |
610 | [News Commentary](https://opus.nlpl.eu/News-Commentary.php) | 10/2/2025 |
611 | [FastChat](https://github.com/lm-sys/FastChat/blob/main/playground/data/dummy.json) | 10/2/2025 |
612 | [Essential-Web](https://huggingface.co/datasets/EssentialAI/essential-web-v1.0) | 10/2/2025 |
613 | [finepdfs](https://huggingface.co/datasets/HuggingFaceFW/finepdfs) | 10/2/2025 |
| [HotpotQA](https://huggingface.co/datasets/hotpot_qa) | 10/2/2025 |
615 | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | 10/2/2025 |
616 | [NLTK Words Lists](https://www.nltk.org/nltk_data/) | 10/2/2025 |
617
618 ## Private Non-publicly Accessible Datasets of Third Parties
619
620 | Dataset |
621 | :---- |
622 | Global Regulation |
623 | TAUS Translation Memory |
624 | Scale HLE |
625 | HackerRank Coding |
626
627 ## Private Non-publicly Accessible Datasets by NVIDIA
628
629 | Dataset |
630 | :---- |
631 | Simple Minesweeper |
632 | Simple Sudoku |
633 | Multitool Typewriter Hard |
634 | Machine Translation of News Commentary and TAUS Translation Memory |
635 | Machine Translation of STEM data using [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) |
636
637 ## Crawled and Scraped from Online Sources by NVIDIA
638
The English Common Crawl data was downloaded from the Common Crawl Foundation (see their FAQ for details on their crawling) and includes the snapshots CC-MAIN-2013-20 through CC-MAIN-2025-13. The data was subsequently deduplicated and filtered in various ways described in the Nemotron-CC paper. Additionally, we extracted data for fifteen languages from the following three Common Crawl snapshots: CC-MAIN-2024-51, CC-MAIN-2025-08, and CC-MAIN-2025-18. The fifteen languages were Arabic, Chinese, Danish, Dutch, French, German, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, Swedish, and Thai. As we did not have reliable multilingual model-based quality classifiers available, we applied only heuristic filtering, similar to what we did for lower-quality English data in the Nemotron-CC pipeline, but selectively removed some filters for languages where they did not work well. Deduplication was done in the same way as for Nemotron-CC.
640
The GitHub Crawl was collected using the GitHub REST API and the Amazon S3 API. Each crawl was operated in accordance with the rate limits set by its respective source, either GitHub or S3. We collect raw source code and subsequently remove any file whose license is not in our permissive-license set (for additional details, refer to the [technical report](https://arxiv.org/abs/2512.20848)).
642
643 | Dataset | Modality | Dataset Size | Collection Period | Collecting Organisation |
644 | :---- | :---- | :---- | :---- | :---- |
645 | English Common Crawl | Text | 3.36T | 4/8/2025 | NVIDIA Advanced Deep Learning Research |
646 | English Common Crawl 1.1 | Text | Not disclosed | 10/2/2025 | NVIDIA Advanced Deep Learning Research |
647 | Multilingual Common Crawl | Text | 812.7B | 5/1/2025 | NVIDIA Advanced Deep Learning Research |
648 | GitHub Crawl | Text | 747.4B | 4/29/2025 | NVIDIA Advanced Deep Learning Research |
649
650 ## NVIDIA-Sourced Synthetic Datasets
651
652 | Dataset | Modality | Dataset Size | Seed Dataset | Model(s) used for generation |
653 | :---- | :---- | :---- | :---- | :---- |
654 | Synthetic Art of Problem Solving from DeepSeek-R1 | Text | 40B | [Art of Problem Solving](https://artofproblemsolving.com/company); [American Mathematics Competitions 8](https://artofproblemsolving.com/wiki/index.php/AMC_8_Problems_and_Solutions); [American Mathematics Competitions 10](https://artofproblemsolving.com/wiki/index.php/AMC_10_Problems_and_Solutions); | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
655 | Synthetic Moral Stories and Social Chemistry from Mixtral-8x22B-v0.1 | Text | 327M | [social-chemestry-101](https://huggingface.co/datasets/tasksource/social-chemestry-101); [Moral Stories](https://huggingface.co/datasets/demelin/moral_stories) | [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1) |
656 | Synthetic Social Sciences seeded with OpenStax from DeepSeek-V3, Mixtral-8x22B-v0.1, and Qwen2.5-72B | Text | 83.6M | [OpenStax \- CC BY-SA subset](https://openstax.org/) | [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1); [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) |
657 | Synthetic Health Sciences seeded with OpenStax from DeepSeek-V3, Mixtral-8x22B-v0.1, and Qwen2.5-72B | Text | 9.7M | [OpenStax \- CC BY-SA subset](https://openstax.org/) | [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1); [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) |
658 | Synthetic STEM seeded with OpenStax, Open Textbook Library, and GSM8K from DeepSeek-R1, DeepSeek-V3, DeepSeek-V3-0324, and Qwen2.5-72B | Text | 175M | [OpenStax \- CC BY-SA subset](https://openstax.org/); [GSM8K](https://github.com/openai/grade-school-math); [Open Textbook Library \- CC BY-SA & GNU subset](https://open.umn.edu/opentextbooks/textbooks/) | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1), [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); [DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324); [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) |
659 | [Nemotron-PrismMath](https://huggingface.co/datasets/nvidia/Nemotron-PrismMath) | Text | 4.6B | [Big-Math-RL-Verified](https://huggingface.co/datasets/SynthLabsAI/Big-Math-RL-Verified); [OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) | [Qwen2.5-0.5B-instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct), [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct); [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
660 | Synthetic Question Answering Data from Papers and Permissible Books from Qwen2.5-72B-Instruct | Text | 350M | [arXiv](https://info.arxiv.org/help/bulk_data/index.html); [National Institutes of Health ExPorter](https://www.nih.gov/); [BioRxiv](https://www.biorxiv.org/tdm); [PMC Article](https://pmc.ncbi.nlm.nih.gov/tools/textmining/); [USPTO Backgrounds](https://data.uspto.gov/apis/transition-guide/bdss#pats); [peS2o](https://huggingface.co/datasets/allenai/peS2o); Global Regulation; [CORE](https://core.ac.uk/documentation/dataset); [PG-19](https://github.com/google-deepmind/pg19); [DOAB CC BY & CC BY-SA subset](https://www.doabooks.org/en); [NDLTD](https://ndltd.org/thesis-resources/global-etd-search/) | [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) |
661 | Refreshed [Nemotron-MIND](https://huggingface.co/datasets/nvidia/Nemotron-MIND) from phi-4 | Text | 73B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) |
662 | Nemotron-CC-Math-4plus | Text | 52.3B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) |
663 | Nemotron-CC-Math-3 | Text | 80.9B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) |
664 | Synthetic AGIEval seeded with AQUA-RAT, LogiQA, and AR-LSAT from DeepSeek-V3 and DeepSeek-V3-0324 | Text | 4.0B | [AQUA-RAT](https://huggingface.co/datasets/deepmind/aqua_rat); [LogiQA](https://huggingface.co/datasets/lucasmccabe/logiqa); [AR-LSAT](https://github.com/zhongwanjun/AR-LSAT) | [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); [DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324) |
665 | Synthetic AGIEval seeded with AQUA-RAT, LogiQA, and AR-LSAT from Qwen3-30B-A3B | Text | 4.2B | [AQUA-RAT](https://huggingface.co/datasets/deepmind/aqua_rat); [LogiQA](https://huggingface.co/datasets/lucasmccabe/logiqa); [AR-LSAT](https://github.com/zhongwanjun/AR-LSAT) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
666 | Synthetic Art of Problem Solving from Qwen2.5-32B-Instruct, Qwen2.5-Math-72B, Qwen2.5-Math-7B, and Qwen2.5-72B-Instruct | Text | | [Art of Problem Solving](https://artofproblemsolving.com/company); [American Mathematics Competitions 8](https://artofproblemsolving.com/wiki/index.php/AMC_8_Problems_and_Solutions); [American Mathematics Competitions 10](https://artofproblemsolving.com/wiki/index.php/AMC_10_Problems_and_Solutions); [GSM8K](https://github.com/openai/grade-school-math); [PRM800K](https://github.com/openai/prm800k) | [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct); [Qwen2.5-Math-72B](https://huggingface.co/Qwen/Qwen2.5-Math-72B); [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B); [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) |
667 | Synthetic MMLU Auxiliary Train from DeepSeek-R1 | Text | 0.5B | [MMLU Auxiliary Train](https://huggingface.co/datasets/cais/mmlu/viewer/all/auxiliary_train) | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
668 | Synthetic Long Context Continued Post-Training Data from Papers and Permissible Books from Qwen2.5-72B-Instruct | Text | | [arXiv](https://info.arxiv.org/help/bulk_data/index.html); [National Institutes of Health ExPorter](https://www.nih.gov/); [BioRxiv](https://www.biorxiv.org/tdm); [PMC Article](https://pmc.ncbi.nlm.nih.gov/tools/textmining/); [USPTO Backgrounds](https://data.uspto.gov/apis/transition-guide/bdss#pats); [peS2o](https://huggingface.co/datasets/allenai/peS2o); Global Regulation; [CORE](https://core.ac.uk/documentation/dataset); [PG-19](https://github.com/google-deepmind/pg19); [DOAB CC BY & CC BY-SA subset](https://www.doabooks.org/en); [NDLTD](https://ndltd.org/thesis-resources/global-etd-search/) | [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) |
669 | Synthetic Common Crawl from Qwen3-30B-A3B and Mistral-Nemo-12B-Instruct | Text | 415.8B | [Common Crawl](https://commoncrawl.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B); [Mistral-NeMo-12B-Instruct](https://huggingface.co/nvidia/Mistral-NeMo-12B-Instruct) |
670 | Synthetic Multilingual Data from Common Crawl from Qwen3-30B-A3B | Text | | [Common Crawl](https://commoncrawl.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
671 | Synthetic Multilingual Data from Wikimedia from Qwen3-30B-A3B | Text | | [Wikimedia](https://dumps.wikimedia.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
672 | Synthetic Math Data from Wikimedia from Nemotron-4-340B-Instruct | Text | | \- | [Nemotron-4-340B-Instruct](https://huggingface.co/nvidia/Nemotron-4-340B-Instruct) |
673 | Synthetic Common Crawl Code from phi-4 | Text | 427.9B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) |
674 | Synthetic Scientific Coding from Qwen3-235B-A22B | Text | 1.2B | [Wikimedia](https://dumps.wikimedia.org/) | [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507) |
675 | Tool Calling Data | Text | 26.2B | | [Qwen3-235B-A22B-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507); [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) |
676 | Synthetic Essential-Web from QwQ-32B | Text | 28.1B | [Essential-Web](https://huggingface.co/datasets/EssentialAI/essential-web-v1.0) | [QwQ-32B](https://huggingface.co/Qwen/QwQ-32B) |
677 | Translated Synthetic Crawl | Text | 389.9B | [Common Crawl](https://commoncrawl.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
678 | Translated Synthetic Wikipedia | Text | 7.9B | [Wikimedia](https://dumps.wikimedia.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
679 | Synthetic Art of Problem Solving from gpt-oss-120b and Qwen2.5-32B-Instruct | Text | Undisclosed | [Art of Problem Solving](https://artofproblemsolving.com/company); [American Mathematics Competitions 8](https://artofproblemsolving.com/wiki/index.php/AMC_8_Problems_and_Solutions); [American Mathematics Competitions 10](https://artofproblemsolving.com/wiki/index.php/AMC_10_Problems_and_Solutions) | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) |
680 | Synthetic Stack Exchange from gpt-oss-120b and Qwen2.5-32B-Instruct | Text | Undisclosed | [Stack Exchange](https://archive.org/details/stackexchange) | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) |
681 | Synthetic OpenCodeReasoning from DeepSeek-R1-0528 | Text | Undisclosed | [OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
682 | Synthetic HackerRank Coding from DeepSeek-R1-0528 | Text | Undisclosed | HackerRank Coding Dataset | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
683 | Synthetic SWE-Gym from Qwen3-Coder-480B-A35B-Instruct | Text | Undisclosed | [SWE-Gym](https://huggingface.co/datasets/SWE-Gym/SWE-Gym) | [Qwen3-Coder-480B-A35B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct) |
684 | Synthetic Art of Problem Solving and Stack Exchange from gpt-oss-120b, Qwen2.5-32B-Instruct, and Goedel-Prover-V2-32B | Text | Undisclosed | [Art of Problem Solving](https://artofproblemsolving.com/company); [American Mathematics Competitions 8](https://artofproblemsolving.com/wiki/index.php/AMC_8_Problems_and_Solutions); [American Mathematics Competitions 10](https://artofproblemsolving.com/wiki/index.php/AMC_10_Problems_and_Solutions); [Stack Exchange](https://archive.org/details/stackexchange) | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct); [Goedel-Prover-V2-32B](https://huggingface.co/Goedel-LM/Goedel-Prover-V2-32B) |
685 | Synthetic Multilingual Science and Code data from DeepSeek-R1, DeepSeek-R1-0528, Qwen2.5-32B-Instruct, and Qwen3-235B-A22B, translated with Qwen2.5-32B-Instruct and Qwen2.5-14B-Instruct | Text | Undisclosed | [Stack Exchange](https://archive.org/details/stackexchange); [SCP-116K](https://huggingface.co/datasets/EricLu/SCP-116K); [LIMO](https://huggingface.co/datasets/GAIR/LIMO); [TACO](https://huggingface.co/datasets/BAAI/TACO); Code Contest; Codeforces | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528); [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct); [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B); |
686 | Synthetic Safety from DeepSeek-R1-0528, gpt-oss-120b and Mixtral-8x7B-v0.1 | Text | Undisclosed | [Nemotron Content Safety Dataset V2](https://huggingface.co/datasets/nvidia/Aegis-AI-Content-Safety-Dataset-2.0); [Gretel Synthetic Safety Alignment Dataset](https://huggingface.co/datasets/gretelai/gretel-safety-alignment-en-v1); [RedTeam-2K](https://huggingface.co/datasets/JailbreakV-28K/JailBreakV-28k); [Malicious Tasks](https://github.com/CrystalEye42/eval-safety/blob/main/malicious_tasks_dataset.yaml); [Nemotron-Personas-USA](https://huggingface.co/datasets/nvidia/Nemotron-Personas-USA) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528); [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) |
687 | Synthetic STEM from Qwen3-235B-A22B-Instruct-2507 and gpt-oss-120b | Text | Undisclosed | [arXiv](https://info.arxiv.org/help/bulk_data/index.html); [National Institutes of Health ExPorter](https://www.nih.gov/); [BioRxiv](https://www.biorxiv.org/tdm); [PMC Article](https://pmc.ncbi.nlm.nih.gov/tools/textmining/); [USPTO Backgrounds](https://data.uspto.gov/apis/transition-guide/bdss#pats); [peS2o](https://huggingface.co/datasets/allenai/peS2o); Global Regulation; [CORE](https://core.ac.uk/documentation/dataset); [PG-19](https://github.com/google-deepmind/pg19); [DOAB CC BY & CC BY-SA subset](https://www.doabooks.org/en); [NDLTD](https://ndltd.org/thesis-resources/global-etd-search/) | [Qwen3-235B-A22B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507); [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) |
688 | Synthetic KernelBook from DeepSeek-R1-0528 | Text | Undisclosed | [KernelBook](https://huggingface.co/datasets/GPUMODE/KernelBook) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
689 | Synthetic Tool Calling from Qwen3-235B-A22B-Thinking-2507 and Qwen3-Next-80B-A3B-Thinking | Text | Undisclosed | [ToolBench](https://github.com/OpenBMB/ToolBench/tree/master); [glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2); [APIGen Function-Calling](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k); [Nemotron-Personas-USA](https://huggingface.co/datasets/nvidia/Nemotron-Personas-USA) | [Qwen3-235B-A22B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507); [Qwen3-Next-80B-A3B-Thinking](https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Thinking) |
690 | Synthetic Chat from gpt-oss-120b, Mixtral-8x22B-Instruct-v0.1, Qwen3-235B-A22B-Instruct-2507 , and Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | [C4](https://huggingface.co/datasets/allenai/c4); [LMSYS-Chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m); [ShareGPT](https://huggingface.co/datasets/RyokoAI/ShareGPT52K); [GSM8K](https://github.com/openai/grade-school-math); [PRM800K](https://github.com/openai/prm800k); [FinQA](https://finqasite.github.io/); [WikiTableQuestions](https://huggingface.co/wikitablequestions/datasets); [Riddles](https://github.com/crawsome/riddles); [glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2); [SciBench](https://huggingface.co/datasets/xw27/scibench); [tigerbot-kaggle-leetcodesolutions-en-2k](https://huggingface.co/datasets/TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k); [OpenBookQA](https://github.com/allenai/OpenBookQA); [Advanced Reasoning Benchmark](https://github.com/TheDuckAI/arb); Software Heritage; [Khan Academy Math Keywords](https://www.khanacademy.org/math); [WildChat-1M](https://huggingface.co/datasets/allenai/WildChat-1M); [Nemotron-Personas-USA](https://huggingface.co/datasets/nvidia/Nemotron-Personas-USA) | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [Mixtral-8x22B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1); [Qwen3-235B-A22B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507); [Qwen3-235B-A22B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507) |
691 | Synthetic Long Context from Qwen3-235B-A22B-Instruct-2507 | Text | Undisclosed | [CORE](https://core.ac.uk/documentation/dataset); [PG-19](https://github.com/google-deepmind/pg19); [DOAB CC BY & CC BY-SA subset](https://www.doabooks.org/en); [NDLTD](https://ndltd.org/thesis-resources/global-etd-search/) | [Qwen3-235B-A22B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507) |
692 | Synthetic Tool Use Interactive Agent from gpt-oss-120b, DeepSeek-R1-0528, Qwen3-32B, and Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | NVIDIA Internal | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528); [Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B); and [Qwen3-235B-A22B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507) |
| Synthetic STEM from Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | [IChO-IPhO](https://huggingface.co/datasets/II-Vietnam/IChO-IPhO-RL-v2-formated); [Physics Big](https://huggingface.co/datasets/Vikhrmodels/physics_big); Scale HLE; [OpenMathReasoning](https://huggingface.co/datasets/nvidia/OpenMathReasoning); [OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning) | [Qwen3-235B-A22B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507) |
694 | Synthetic DocFinQA and SWE-smith from Qwen3-Coder-480B-A35B-Instruct and Kimi-K2-Thinking | Text | Undisclosed | [DocFinQA](https://huggingface.co/datasets/kensho/DocFinQA); [SWE-smith](https://huggingface.co/datasets/SWE-bench/SWE-smith) | [Qwen3-Coder-480B-A35B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct); [Kimi-K2-Thinking](https://huggingface.co/moonshotai/Kimi-K2-Thinking) |
695 | Synthetic Math from gpt-oss-120b and Qwen2.5-32B-Instruct | Text | Undisclosed | \- | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) |
696 | Synthetic Essential-Web from gpt-oss-120b | Text | Undisclosed | [Essential-Web](https://huggingface.co/datasets/EssentialAI/essential-web-v1.0) | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) |
697 | Synthetic Scale HLE from gpt-oss-120b | Text | Undisclosed | Scale HLE | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) |
698 | Synthetic CDQuestions from gpt-oss-120b | Text | Undisclosed | [CDQuestions](https://cdquestions.com/) | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) |
699 | Synthetic Stack Exchange from gpt-oss-120b | Text | Undisclosed | [Stack Exchange](https://archive.org/details/stackexchange) | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) |
700 | Synthetic GPQA from gpt-oss-120b and Qwen2.5-32B-Instruct | Text | Undisclosed | [Stack Exchange](https://archive.org/details/stackexchange) | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) |
701 | Synthetic Vedantu from gpt-oss-120b | Text | Undisclosed | [Vedantu](https://www.vedantu.com/) | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) |
702 | Synthetic SWE-Gym and R2E-Gym-Subset from Qwen3-Coder-480B-A35B-Instruct | Text | Undisclosed | [SWE-Gym](https://huggingface.co/datasets/SWE-Gym/SWE-Gym); [R2E-Gym-Subset](https://huggingface.co/datasets/R2E-Gym/R2E-Gym-Subset) | [Qwen3-Coder-480B-A35B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct) |
703 | Synthetic SWE-Gym from Qwen3-Coder-480B-A35B-Instruct | Text | Undisclosed | [SWE-Gym](https://huggingface.co/datasets/SWE-Gym/SWE-Gym) | [Qwen3-Coder-480B-A35B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct) |
704 | Synthetic SWE-Gym and R2E-Gym-Subset from DeepSeek-R1-0528 | Text | Undisclosed | [SWE-Gym](https://huggingface.co/datasets/SWE-Gym/SWE-Gym); [R2E-Gym-Subset](https://huggingface.co/datasets/R2E-Gym/R2E-Gym-Subset) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
705 | Synthetic HelpSteer, LMSYS-Chat-1M, and Nemotron-Personas-USA from gpt-oss-120b, Qwen3-235B-A22B-Instruct-2507, and Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | [HelpSteer2](https://huggingface.co/datasets/nvidia/HelpSteer2); [HelpSteer3](https://huggingface.co/datasets/nvidia/HelpSteer3); [LMSYS-Chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m); [Nemotron-Personas-USA](https://huggingface.co/datasets/nvidia/Nemotron-Personas-USA) | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [Qwen3-235B-A22B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507); [Qwen3-235B-A22B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507) |
706 | Synthetic Structured Outputs from Qwen3-30B-A3B-Instruct-2507, Qwen3-30B-A3B-Thinking-2507, Qwen3-235B-A22B-Instruct-2507, and Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | \- | [Qwen3-30B-A3B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507); [Qwen3-30B-A3B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507); [Qwen3-235B-A22B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507); [Qwen3-235B-A22B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507) |
707 | Synthetic Search STEM MCQ from Qwen3-235B-A22B and DeepSeek-R1-0528 | Text | Undisclosed | \- | [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
708 | Synthetic Search STEM OPENQ from DeepSeek-R1-0528 | Text | Undisclosed | \- | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
709 | Synthetic OpenSTEM from Qwen2.5-32B-Instruct and DeepSeek-R1-0528 | Text | Undisclosed | \- | [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
710 | Synthetic MCQ from Qwen2.5-32B-Instruct and DeepSeek-R1-0528 | Text | Undisclosed | \- | [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
711 | Synthetic MCQ10 from DeepSeek-R1-0528 | Text | Undisclosed | \- | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
712 | Synthetic MCQ4 from Qwen3-235B-A22B, DeepSeek-R1-0528, and Qwen3-235B-A22B-Instruct-2507 | Text | Undisclosed | \- | [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528); [Qwen3-235B-A22B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507) |
713 | Synthetic OpenMathReasoning from gpt-oss-120b and Qwen2.5-32B-Instruct | Text | Undisclosed | [OpenMathReasoning](https://huggingface.co/datasets/nvidia/OpenMathReasoning) | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) |
714 | Synthetic Offline Search MCQA HLE from DeepSeek-R1-0528 | Text | Undisclosed | \- | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
715 | Synthetic Offline Search MCQA GPQA from Qwen3-235B-A22B and DeepSeek-R1-0528 | Text | Undisclosed | \- | [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
716 | Synthetic Human Preference from QwQ-32B, Qwen3-30B-A3B, Qwen3-235B-A22B, Qwen3-235B-A22B-Instruct-2507, Mistral-Small-3.1-24B-Instruct-2503, Mistral-Small-3.2-24B-Instruct-2506, MiniMax-M1-80k, MiniMax-M1-40k, Kimi-K2-Instruct, DeepSeek-V3-0324, DeepSeek-R1-0528 | Text | Undisclosed | \- | [QwQ-32B](https://huggingface.co/Qwen/QwQ-32B); [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B); [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B); [Qwen3-235B-A22B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507); [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503); [Mistral-Small-3.2-24B-Instruct-2506](https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506); [MiniMax-M1-80k](https://huggingface.co/MiniMaxAI/MiniMax-M1-80k); [MiniMax-M1-40k](https://huggingface.co/MiniMaxAI/MiniMax-M1-40k); [Kimi-K2-Instruct](https://huggingface.co/moonshotai/Kimi-K2-Instruct); [DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
717 | Synthetic WildChat-1M and arena-human-preference-140k from DeepSeek-R1, gemma-2-2b-it, gemma-3-27b-it, gpt-oss-20b, gpt-oss-120b, Mistral-7B-Instruct-v0.3, Mixtral-8x22B-Instruct-v0.1, Nemotron-4-340B-Instruct, NVIDIA-Nemotron-Nano-9B-v2, Phi-4-mini-instruct, Phi-3-small-8k-instruct, Phi-3-medium-4k-instruct, Qwen3-235B-A22B, QwQ-32B | Text | Undisclosed | [WildChat-1M](https://huggingface.co/datasets/allenai/WildChat-1M); [arena-human-preference-140k](https://huggingface.co/datasets/lmarena-ai/arena-human-preference-140k) | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1); [gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it); [gemma-3-27b-it](https://huggingface.co/google/gemma-3-27b-it); [gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b); [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3); [Mixtral-8x22B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1); [Nemotron-4-340B-Instruct](https://huggingface.co/nvidia/Nemotron-4-340B-Instruct); [NVIDIA-Nemotron-Nano-9B-v2](https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-9B-v2); [Phi-4-mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct); [Phi-3-small-8k-instruct](https://huggingface.co/microsoft/Phi-3-small-8k-instruct); [Phi-3-medium-4k-instruct](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct); [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B); [QwQ-32B](https://huggingface.co/Qwen/QwQ-32B) |
718 | Synthetic Safety from DeepSeek-R1-0528, gpt-oss-120b, DeepSeek-R1-Distill-Qwen-7B, and Mixtral-8x7B-v0.1 | Text | Undisclosed | [Nemotron Content Safety Dataset V2](https://huggingface.co/datasets/nvidia/Aegis-AI-Content-Safety-Dataset-2.0); [Gretel Synthetic Safety Alignment Dataset](https://huggingface.co/datasets/gretelai/gretel-safety-alignment-en-v1); [RedTeam-2K](https://huggingface.co/datasets/JailbreakV-28K/JailBreakV-28k); [Malicious Tasks](https://github.com/CrystalEye42/eval-safety/blob/main/malicious_tasks_dataset.yaml); | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528); [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B); [Qwen3-30B-A3B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507); [Qwen3-235B-A22B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507); [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) |
719 | Synthetic Code from Qwen3-32B | Text | Undisclosed | English Common Crawl; English Common Crawl 1.1 | [Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B) |
720 | Synthetic OpenCodeReasoning from DeepSeek-R1 | Text | Undisclosed | [OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning) | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
721 | Synthetic LIMO from DeepSeek-R1-0528 | Text | Undisclosed | [LIMO](https://huggingface.co/datasets/GAIR/LIMO) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
722 | Synthetic SCP from DeepSeek-R1-0528 | Text | Undisclosed | [SCP-116K](https://huggingface.co/datasets/EricLu/SCP-116K) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
723 | Synthetic Stack Exchange from DeepSeek-R1-0528 | Text | Undisclosed | [Stack Exchange](https://archive.org/details/stackexchange) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
724 | Synthetic Common Crawl from Qwen3-30B-A3B | Text | Undisclosed | [Common Crawl](https://commoncrawl.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
725 | Synthetic Wikipedia from Qwen3-30B-A3B | Text | Undisclosed | [Wikimedia](https://dumps.wikimedia.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
726 | Synthetic Essential-Web from Qwen3-30B-A3B and Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | [Essential-Web](https://huggingface.co/datasets/EssentialAI/essential-web-v1.0) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B); [Qwen3-235B-A22B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507) |
727 | Synthetic Textbook Math from Qwen3-30B-A3B, Qwen3-235B-A22B, phi-4 | Text | Undisclosed | [Common Crawl](https://commoncrawl.org/); [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B); [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B); [phi-4](https://huggingface.co/microsoft/phi-4) |
728 | Synthetic Math and Code from DeepSeek-R1 and DeepSeek-R1-0528 | Text | Undisclosed | [Magicoder-Evol-Instruct-110K](https://huggingface.co/datasets/ise-uiuc/Magicoder-Evol-Instruct-110K); [opc-sft-stage2](https://huggingface.co/datasets/OpenCoder-LLM/opc-sft-stage2); [TACO](https://huggingface.co/datasets/BAAI/TACO); [OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning); [OpenMathReasoning](https://huggingface.co/datasets/nvidia/OpenMathReasoning); [NuminaMath CoT](https://huggingface.co/datasets/AI-MO/NuminaMath-CoT) | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
729 | Synthetic Nemotron-Personas-USA from gpt-oss-120b and Qwen3-8B | Text | Undisclosed | [Nemotron-Personas-USA](https://huggingface.co/datasets/nvidia/Nemotron-Personas-USA) | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) |

## Training Dataset

| Dataset | \# of Tokens in Nemotron Nano 2 | \# of Tokens in Nemotron 3 Nano |
| :---- | :---- | :---- |
| English Common Crawl | 3,360,110,334,818 | 3,456,523,212,210 |
| English Synthetic CC | 1,949,464,641,123 | 4,340,740,677,920 |
| Crawl++ | 360,389,153,262 | 360,389,153,262 |
| Math | 124,606,230,663 | 154,217,502,165 |
| Synthetic Math | 73,007,767,155 | 73,007,767,155 |
| Code | 747,409,228,724 | 1,043,856,922,136 |
| Synthetic Code | 175,067,553,293 | 453,117,917,176 |
| Common Crawl Code | 0 | 263,072,374,097 |
| English Wiki | 17,349,266,926 | 17,349,266,926 |
| Synthetic Wiki | 0 | 7,850,648,552 |
| Books | 0 | 0 |
| Papers | 191,586,493,365 | 191,586,493,365 |
| PDF-to-text | 141,096,578,533 | 141,096,578,533 |
| Code SFT | 60,025,726,817 | 102,863,752,325 |
| STEM SFT | 272,680,426,295 | 359,826,214,274 |
| General SFT | 6,057,478,645 | 6,057,478,645 |
| Tool-Calling SFT | 0 | 26,244,716,867 |
| Multilingual | 2,172,261,909,350 | 1,743,892,490,859 |
| Synthetic multilingual | 997,710,364,950 | 595,140,661,135 |
| **Total** | **10,648,823,153,919** | **13,336,833,827,602** |

We use a considerable amount of synthetic data. Of the roughly 10.6 trillion tokens in the Nemotron Nano 2 column above, 3,534,013,958,278 are synthetically generated.
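
This figure can be cross-checked against the table. A minimal check in Python; the grouping of rows treated as "synthetic" here is our assumption, chosen because it reproduces the quoted total exactly:

```python
# Synthetic rows of the Nemotron Nano 2 column from the table above.
# Which rows count as "synthetic" is an assumption of this sketch; rows
# that are 0 in that column (Synthetic Wiki, Tool-Calling SFT) are omitted.
synthetic_nano2 = {
    "English Synthetic CC": 1_949_464_641_123,
    "Synthetic Math": 73_007_767_155,
    "Synthetic Code": 175_067_553_293,
    "Code SFT": 60_025_726_817,
    "STEM SFT": 272_680_426_295,
    "General SFT": 6_057_478_645,
    "Synthetic multilingual": 997_710_364_950,
}
total = sum(synthetic_nano2.values())
assert total == 3_534_013_958_278
print(f"{total:,}")  # 3,534,013,958,278
```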

We extracted data for fifteen languages from the following three Common Crawl snapshots: CC-MAIN-2024-51, CC-MAIN-2025-08, and CC-MAIN-2025-18. The fifteen languages were Arabic, Chinese, Danish, Dutch, French, German, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, Swedish, and Thai. As we did not have reliable multilingual model-based quality classifiers available, we applied only heuristic filtering, similar to what we did for lower-quality English data in the Nemotron-CC pipeline, selectively removing filters that did not work well for particular languages. Deduplication was done in the same way as for Nemotron-CC. Additionally, we used data from Wikipedia and FineWeb-2 (Penedo et al., 2025\) for these fifteen languages as well as four additional languages: Czech, Finnish, Hebrew, and Hindi.
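
As an illustration of what heuristic document filtering of this kind can look like, the sketch below applies a few simple per-document rules (minimum length, symbol density, fraction of very short lines). The rules and thresholds are assumptions for the sake of the example, not the tuned per-language Nemotron-CC filter set.

```python
# Illustrative heuristic web-document filters; the rules and thresholds are
# example assumptions, not the actual Nemotron-CC settings.
def passes_heuristic_filters(text: str) -> bool:
    words = text.split()
    if len(words) < 50:                           # drop very short documents
        return False
    symbol_count = sum(text.count(c) for c in "#{}<>|")
    if symbol_count / len(words) > 0.1:           # likely markup or boilerplate
        return False
    lines = [line for line in text.splitlines() if line.strip()]
    short_lines = sum(1 for line in lines if len(line.split()) <= 3)
    if lines and short_lines / len(lines) > 0.5:  # mostly menus or listings
        return False
    return True

# docs would come from a language-identified Common Crawl snapshot
docs = ["Example document text ..."]
kept = [d for d in docs if passes_heuristic_filters(d)]
```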

| Language | Total Tokens |
| :---- | :---- |
| Arabic | 118,056,362,726 |
| Danish | 117,747,321,618 |
| German | 146,613,691,781 |
| Spanish | 469,156,575,409 |
| French | 139,982,002,289 |
| Italian | 298,858,370,174 |
| Japanese | 682,755,693,336 |
| Korean | 127,099,747,538 |
| Dutch | 89,041,592,681 |
| Polish | 105,356,493,147 |
| Portuguese | 243,249,275,089 |
| Russian | 185,314,014,057 |
| Swedish | 74,954,953,299 |
| Thai | 160,778,944,467 |
| Chinese | 211,007,236,689 |

We collect a total of 922,476,782,017 tokens of code in 43 different languages.

| Language | Tokens |
| :---- | :---- |
| Assembly | 750,628,764 |
| C | 42,657,300,868 |
| C\# | 56,153,329,307 |
| C++ | 67,773,701,658 |
| CommonLisp | 263,234,672 |
| CSS | 38,848,760,035 |
| Cuda | 400,222,993 |
| Dart | 3,816,960,470 |
| Dockerfile | 474,958,084 |
| Fortran | 1,105,049,387 |
| Go | 8,332,419,480 |
| Haskell | 1,294,613,669 |
| HTML | 69,082,117,487 |
| Java | 131,440,465,822 |
| JavaScript | 75,573,420,861 |
| JSON | 15,366,881,241 |
| Julia | 621,046,949 |
| JupyterNotebook | 2,241,893,197 |
| Lua | 4,146,420,802 |
| Makefile | 12,640,010,879 |
| Markdown | 64,796,743,311 |
| Mathematica | 320,504,225 |
| OmniversePython | 26,946,093 |
| Pascal | 1,625,013,876 |
| Perl | 1,575,314,434 |
| PHP | 61,575,339,005 |
| Python | 126,916,727,384 |
| R | 19,811,381,935 |
| reStructuredText | 1,779,876,391 |
| Ruby | 6,446,962,615 |
| Rust | 4,438,640,533 |
| Scala | 3,343,959,154 |
| Shell | 18,758,779,250 |
| SQL | 23,205,633,085 |
| Swift | 5,976,714,881 |
| SystemVerilog | 233,056,185 |
| TeX | 7,347,157,527 |
| TypeScript | 15,657,838,582 |
| Verilog | 811,884,369 |
| VHDL | 648,401,444 |
| VisualBasic.NET | 1,005,680,881 |
| XML | 12,616,779,741 |
| YAML | 10,574,010,491 |

## Language Distribution in Post-Training

For our post-training recipe, we focused on five main languages in addition to English: Spanish, French, Japanese, Italian, and German.
These languages were represented in the form of multilingual reasoning and translation tasks.

The following table shows our sample distribution across the six languages and five translation pairs.

| Language | Samples |
| :---- | :---- |
| English | 16.2M |
| Italian | 0.252M |
| German | 0.252M |
| Spanish | 0.252M |
| French | 0.252M |
| Japanese | 0.252M |
| English \<-\> Italian | 108k |
| English \<-\> German | 108k |
| English \<-\> Spanish | 108k |
| English \<-\> French | 108k |
| English \<-\> Japanese | 108k |

## Evaluation Dataset

* Data Collection Method by dataset: Hybrid: Human, Synthetic
* Labeling Method by dataset: Hybrid: Automated, Human, Synthetic

## Inference

- Engines: HF, vLLM, TRT-LLM, SGLang, Llama.cpp
- Test Hardware: NVIDIA A100 80GB, H100 80GB, B200 192GB, RTX PRO 6000 96GB, Jetson Thor, DGX Spark
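
As a minimal sketch of running the BF16 checkpoint with the Hugging Face Transformers engine listed above (the prompt and generation settings are illustrative only, and `trust_remote_code` may not be needed on recent `transformers` releases):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # BF16 checkpoint
    device_map="auto",
    trust_remote_code=True,       # assumption: may be unnecessary on recent releases
)

messages = [{"role": "user", "content": "Briefly explain mixture-of-experts models."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```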


## Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our [Trustworthy AI terms of service](https://www.nvidia.com/en-us/agreements/trustworthy-ai/terms/), developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

We advise against circumventing any safety guardrails provided with the Model without putting in place a substantially similar guardrail appropriate for your use case. For more details, see the [Safety](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16/blob/main/safety.md) and [Explainability](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16/blob/main/explainability.md) Subcards.

For more detailed information on ethical considerations for this model, please see the Model Card++ [Bias](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16/blob/main/bias.md) and [Privacy](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16/blob/main/privacy.md) Subcards.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).

## Citation

```
@misc{nvidia_nemotron_nano_v3_2025,
  title  = {{Nemotron 3 Nano}: Open, Efficient Mixture-of-Experts Hybrid {Mamba}-{Transformer} Model for {Agentic} Reasoning},
  author = {{NVIDIA}},
  year   = {2025},
  url    = {https://arxiv.org/abs/2512.20848},
  note   = {Technical report}
}
```