---
license: apache-2.0
library_name: transformers
---
# Qwen3-4B-Base

## Qwen3 Highlights

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models.
Building upon extensive advancements in training data, model architecture, and optimization techniques, Qwen3 delivers the following key improvements over the previously released Qwen2.5:

- **Expanded Higher-Quality Pre-training Corpus:** Qwen3 is pre-trained on 36 trillion tokens across 119 languages — tripling the language coverage of Qwen2.5 — with a much richer mix of high-quality data, including coding, STEM, reasoning, book, multilingual, and synthetic data.
- **Training Techniques and Model Architecture:** Qwen3 incorporates a series of training techniques and architectural refinements, including global-batch load balancing loss for MoE models and qk layernorm for all models, leading to improved stability and overall performance (a brief sketch of qk layernorm follows this list).
- **Three-stage Pre-training:** Stage 1 focuses on broad language modeling and general knowledge acquisition, Stage 2 strengthens reasoning skills in areas such as STEM, coding, and logical reasoning, and Stage 3 enhances long-context comprehension by extending training sequence lengths up to 32k tokens.
- **Scaling Law Guided Hyperparameter Tuning:** Through comprehensive scaling law studies across the three-stage pre-training pipeline, Qwen3 systematically tunes critical hyperparameters — such as learning rate scheduler and batch size — separately for dense and MoE models, resulting in better training dynamics and final performance across different model scales.

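The qk layernorm mentioned above normalizes the query and key activations of each attention head before attention scores are computed, which keeps the attention logits well-scaled during training. The following PyTorch snippet is only a minimal sketch of the idea under assumed shapes and an assumed RMS-style normalization, not the actual Qwen3 implementation:

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """RMS normalization over the last (per-head) dimension."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        variance = x.pow(2).mean(-1, keepdim=True)
        return self.weight * x * torch.rsqrt(variance + self.eps)

# Toy tensors shaped (batch, heads, seq_len, head_dim); the head counts mirror
# the GQA layout listed in the Model Overview below (32 Q heads, 8 KV heads),
# while head_dim here is illustrative.
head_dim = 128
q = torch.randn(1, 32, 16, head_dim)
k = torch.randn(1, 8, 16, head_dim)

# qk layernorm: normalize queries and keys per head before the attention
# scores q @ k^T are formed.
q_norm, k_norm = RMSNorm(head_dim), RMSNorm(head_dim)
q, k = q_norm(q), k_norm(k)
```
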
## Model Overview

**Qwen3-4B-Base** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining
- Number of Parameters: 4.0B
- Number of Parameters (Non-Embedding): 3.6B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
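
The architecture figures listed above can be read directly from the model configuration once `transformers` is installed (see Requirements below). A minimal sketch, assuming the checkpoint is fetched from the Hugging Face Hub id `Qwen/Qwen3-4B-Base`:

```python
from transformers import AutoConfig

# Download and parse the model configuration (Hub id assumed above).
config = AutoConfig.from_pretrained("Qwen/Qwen3-4B-Base")

print(config.num_hidden_layers)        # number of layers (36)
print(config.num_attention_heads)      # query heads (32)
print(config.num_key_value_heads)      # key/value heads for GQA (8)
print(config.max_position_embeddings)  # native context length (32768)
```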

## Requirements

The code for Qwen3 is available in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.

With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
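
With a recent `transformers` in place, the base model can be used for plain text completion (it has not been post-trained for chat). The snippet below is a minimal sketch; `device_map="auto"` additionally assumes `accelerate` is installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-4B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # place weights on available devices (needs accelerate)
)

# Base-model usage: prompt continuation rather than chat templates.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```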

## Evaluation & Performance

Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen3/).

### Citation

If you find our work helpful, feel free to cite us.

```
@misc{qwen3technicalreport,
      title={Qwen3 Technical Report},
      author={Qwen Team},
      year={2025},
      eprint={2505.09388},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.09388},
}
```