---
license: apache-2.0
language:
- en
base_model:
- yl4579/StyleTTS2-LJSpeech
pipeline_tag: text-to-speech
---
**Kokoro** is an open-weight TTS model with 82 million parameters. Despite its lightweight architecture, it delivers comparable quality to larger models while being significantly faster and more cost-efficient. With Apache-licensed weights, Kokoro can be deployed anywhere from production environments to personal projects.

<audio controls><source src="https://huggingface.co/hexgrad/Kokoro-82M/resolve/main/samples/HEARME.wav" type="audio/wav"></audio>

🐈 **GitHub**: https://github.com/hexgrad/kokoro

🚀 **Demo**: https://hf.co/spaces/hexgrad/Kokoro-TTS

> [!NOTE]
> As of April 2025, the market rate of Kokoro served over API is **under $1 per million characters of text input**, or under $0.06 per hour of audio output. (On average, 1000 characters of input is about 1 minute of output.) Sources: [ArtificialAnalysis/Replicate at 65 cents per M chars](https://artificialanalysis.ai/text-to-speech/model-family/kokoro#price) and [DeepInfra at 80 cents per M chars](https://deepinfra.com/hexgrad/Kokoro-82M).
>
> This is an Apache-licensed model, and Kokoro has been deployed in numerous projects and commercial APIs. We welcome the deployment of the model in real use cases.

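To make the pricing note concrete, here is a back-of-the-envelope sketch. The per-character rates and the 1000-characters-per-minute ratio come from the note above; the function name and parameters are illustrative, not part of any API:

```python
# Rough serving-cost estimate from the figures quoted above:
# ~$0.65-0.80 per million input characters, ~1000 characters per minute of audio.
def cost_per_audio_hour(price_per_m_chars: float, chars_per_minute: int = 1000) -> float:
    chars_per_hour = chars_per_minute * 60           # ~60k characters per hour of audio
    return price_per_m_chars * chars_per_hour / 1e6  # dollars per hour of output

print(round(cost_per_audio_hour(0.80), 3))  # 0.048 -> consistent with "under $0.06/hour"
```

Even at the higher DeepInfra rate of 80 cents per million characters, an hour of output costs about $0.048.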
> [!CAUTION]
> Fake websites like kokorottsai_com (snapshot: https://archive.ph/nRRnk) and kokorotts_net (snapshot: https://archive.ph/60opa) are likely scams masquerading under the banner of a popular model.
>
> Any website containing "kokoro" in its root domain (e.g. kokorottsai_com, kokorotts_net) is **NOT owned by and NOT affiliated with this model page or its author**, and attempts to imply otherwise are red flags.

- [Releases](#releases)
- [Usage](#usage)
- [EVAL.md](https://huggingface.co/hexgrad/Kokoro-82M/blob/main/EVAL.md) ↗️
- [SAMPLES.md](https://huggingface.co/hexgrad/Kokoro-82M/blob/main/SAMPLES.md) ↗️
- [VOICES.md](https://huggingface.co/hexgrad/Kokoro-82M/blob/main/VOICES.md) ↗️
- [Model Facts](#model-facts)
- [Training Details](#training-details)
- [Creative Commons Attribution](#creative-commons-attribution)
- [Acknowledgements](#acknowledgements)

### Releases

| Model | Published | Training Data | Langs & Voices | SHA256 |
| ----- | --------- | ------------- | -------------- | ------ |
| **v1.0** | **2025 Jan 27** | **Few hundred hrs** | [**8 & 54**](https://huggingface.co/hexgrad/Kokoro-82M/blob/main/VOICES.md) | `496dba11` |
| [v0.19](https://huggingface.co/hexgrad/kLegacy/tree/main/v0.19) | 2024 Dec 25 | <100 hrs | 1 & 10 | `3b0c392f` |

| Training Costs | v0.19 | v1.0 | **Total** |
| -------------- | ----- | ---- | --------- |
| in A100 80GB GPU hours | 500 | 500 | **1000** |
| average hourly rate | $0.80/h | $1.20/h | **$1/h** |
| in USD | $400 | $600 | **$1000** |

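The cost table above is straightforward arithmetic; a quick sanity check, with the hours and rates taken directly from the table:

```python
# Verify the training-cost table: A100 GPU hours x hourly rate per release.
runs = {'v0.19': (500, 0.80), 'v1.0': (500, 1.20)}  # (GPU hours, $/hour)

total_hours = sum(hours for hours, _ in runs.values())
total_usd = sum(hours * rate for hours, rate in runs.values())
avg_rate = total_usd / total_hours

print(total_hours, total_usd, avg_rate)  # 1000 GPU hours, $1000 total, $1.00/h average
```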
### Usage
You can run this basic cell on [Google Colab](https://colab.research.google.com/). [Listen to samples](https://huggingface.co/hexgrad/Kokoro-82M/blob/main/SAMPLES.md). For more languages and details, see [Advanced Usage](https://github.com/hexgrad/kokoro?tab=readme-ov-file#advanced-usage).
```py
# Quote the requirement so the shell does not treat '>' as a redirect
!pip install -q "kokoro>=0.9.2" soundfile
!apt-get -qq -y install espeak-ng > /dev/null 2>&1

from kokoro import KPipeline
from IPython.display import display, Audio
import soundfile as sf

pipeline = KPipeline(lang_code='a')  # 'a' => American English
text = '''
[Kokoro](/kˈOkəɹO/) is an open-weight TTS model with 82 million parameters. Despite its lightweight architecture, it delivers comparable quality to larger models while being significantly faster and more cost-efficient. With Apache-licensed weights, [Kokoro](/kˈOkəɹO/) can be deployed anywhere from production environments to personal projects.
'''
generator = pipeline(text, voice='af_heart')
for i, (gs, ps, audio) in enumerate(generator):
    print(i, gs, ps)  # segment index, graphemes, phonemes
    display(Audio(data=audio, rate=24000, autoplay=i==0))
    sf.write(f'{i}.wav', audio, 24000)  # Kokoro outputs 24 kHz audio
```
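The pipeline yields audio one segment at a time, so the cell above writes `0.wav`, `1.wav`, and so on. If you want a single file instead, one option is to collect the chunks and concatenate them. This is a minimal sketch with NumPy; the `concat_chunks` helper and the silence gap are illustrative choices, not part of the `kokoro` API:

```python
import numpy as np

def concat_chunks(chunks, sample_rate=24000, gap_seconds=0.2):
    """Join per-segment waveforms into one array, with a short silence between segments."""
    gap = np.zeros(int(sample_rate * gap_seconds), dtype=np.float32)
    parts = []
    for i, chunk in enumerate(chunks):
        if i > 0:
            parts.append(gap)  # insert silence between consecutive segments
        parts.append(np.asarray(chunk, dtype=np.float32))
    return np.concatenate(parts)

# With the generator from the cell above, you could then write one file:
# audio = concat_chunks([audio for _, _, audio in pipeline(text, voice='af_heart')])
# sf.write('full.wav', audio, 24000)
```
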
Under the hood, `kokoro` uses [`misaki`](https://pypi.org/project/misaki/), a G2P library at https://github.com/hexgrad/misaki

### Model Facts

**Architecture:**
- StyleTTS 2: https://arxiv.org/abs/2306.07691
- ISTFTNet: https://arxiv.org/abs/2203.02395
- Decoder only: no diffusion, no encoder release

**Architected by:** Li et al. @ https://github.com/yl4579/StyleTTS2

**Trained by:** `@rzvzn` on Discord

**Languages:** Multiple

**Model SHA256 Hash:** `496dba118d1a58f5f3db2efc88dbdc216e0483fc89fe6e47ee1f2c53f18ad1e4`

### Training Details

**Data:** Kokoro was trained exclusively on **permissive/non-copyrighted audio data** and IPA phoneme labels. Examples of permissive/non-copyrighted audio include:
- Public domain audio
- Audio licensed under Apache, MIT, etc.
- Synthetic audio<sup>[1]</sup> generated by closed<sup>[2]</sup> TTS models from large providers<br/>
[1] https://copyright.gov/ai/ai_policy_guidance.pdf<br/>
[2] No synthetic audio from open TTS models or "custom voice clones"

**Total Dataset Size:** A few hundred hours of audio

**Total Training Cost:** About $1000 for 1000 A100 80GB GPU hours

### Creative Commons Attribution

The following CC BY audio was part of the dataset used to train Kokoro v1.0.

| Audio Data | Duration Used | License | Added to Training Set After |
| ---------- | ------------- | ------- | --------------------------- |
| [Koniwa](https://github.com/koniwa/koniwa) `tnc` | <1h | [CC BY 3.0](https://creativecommons.org/licenses/by/3.0/deed.ja) | v0.19 / 22 Nov 2024 |
| [SIWIS](https://datashare.ed.ac.uk/handle/10283/2353) | <11h | [CC BY 4.0](https://datashare.ed.ac.uk/bitstream/handle/10283/2353/license_text) | v0.19 / 22 Nov 2024 |

### Acknowledgements

- 🛠️ [@yl4579](https://huggingface.co/yl4579) for architecting StyleTTS 2.
- 🏆 [@Pendrokar](https://huggingface.co/Pendrokar) for adding Kokoro as a contender in the TTS Spaces Arena.
- 📊 Thank you to everyone who contributed synthetic training data.
- ❤️ Special thanks to all compute sponsors.
- 👾 Discord server: https://discord.gg/QuGxSWBfQy
- 🪽 Kokoro is a Japanese word that translates to "heart" or "spirit". It is also the name of an [AI in the Terminator franchise](https://terminator.fandom.com/wiki/Kokoro).

<img src="https://static0.gamerantimages.com/wordpress/wp-content/uploads/2024/08/terminator-zero-41-1.jpg" width="400" alt="kokoro" />