---
license: apache-2.0
base_model: facebook/hubert-base-ls960
tags:
- audio-classification
- deepfake
- audio-spoof
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: hubert-base-960h-itw-deepfake
  results: []
language:
- en
---
# hubert-base-960h-itw-deepfake

This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) for audio deepfake (spoof) detection; the training dataset is not documented in this card.
It achieves the following results on the evaluation set:
- Loss: 0.0756
- Accuracy: 0.9873
- FAR (False Acceptance Rate): 0.0083
- FRR (False Rejection Rate): 0.0203
- EER (Equal Error Rate): 0.0143
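Here, FAR is the rate at which spoofed audio is accepted as genuine, FRR the rate at which genuine audio is rejected as spoofed, and EER the operating point where the two error rates are equal. A minimal sketch of computing EER from model scores (hypothetical `y_true`/`y_score` arrays, not part of this repository):

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical labels (1 = spoof, 0 = bona fide) and model spoof scores
y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.8, 0.9, 0.2, 0.6])

# With spoof as the positive class: fpr = bona fide flagged as spoof (FRR),
# fnr = spoof accepted as bona fide (FAR)
fpr, tpr, _ = roc_curve(y_true, y_score)
fnr = 1 - tpr

# EER is the point where the two error rates are equal
idx = np.nanargmin(np.abs(fnr - fpr))
eer = (fpr[idx] + fnr[idx]) / 2
print(f"EER: {eer:.4f}")
```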
## Model description

A HuBERT-base model fine-tuned to classify speech audio as genuine or spoofed (deepfake).

### Quick Use

```python
import torch
from transformers import AutoConfig, Wav2Vec2FeatureExtractor, HubertForSequenceClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the checkpoint's configuration and feature extractor
config = AutoConfig.from_pretrained("abhishtagatya/hubert-base-960h-itw-deepfake")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("abhishtagatya/hubert-base-960h-itw-deepfake")

# Load the fine-tuned sequence-classification model
model = HubertForSequenceClassification.from_pretrained("abhishtagatya/hubert-base-960h-itw-deepfake", config=config).to(device)

# Your Logic Here
```
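Continuing from the snippet above, a minimal inference sketch (assuming a 16 kHz mono clip at a hypothetical path `sample.wav`; label names come from the checkpoint's `id2label` config):

```python
import librosa

# Load a mono waveform at 16 kHz, the rate HuBERT expects
waveform, _ = librosa.load("sample.wav", sr=16000)

inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt").to(device)

with torch.no_grad():
    logits = model(**inputs).logits

pred_id = int(logits.argmax(dim=-1))
print(model.config.id2label[pred_id])
```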
## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent `TrainingArguments` follows the list):
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
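As a sketch under the assumption that the standard `Trainer` API was used (the actual training script is not part of this card; `output_dir` is illustrative):

```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above
training_args = TrainingArguments(
    output_dir="hubert-base-960h-itw-deepfake",  # hypothetical
    learning_rate=1e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=2,  # total train batch size: 4
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```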
### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | FAR    | FRR    | EER    |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|:------:|
| 0.4081        | 0.39  | 2500  | 0.1152          | 0.9722   | 0.0285 | 0.0267 | 0.0276 |
| 0.1168        | 0.79  | 5000  | 0.0822          | 0.9844   | 0.0120 | 0.0216 | 0.0168 |
| 0.0979        | 1.18  | 7500  | 0.0896          | 0.9846   | 0.0130 | 0.0195 | 0.0162 |
| 0.0983        | 1.57  | 10000 | 0.1007          | 0.9833   | 0.0155 | 0.0186 | 0.0171 |
| 0.0901        | 1.97  | 12500 | 0.0756          | 0.9873   | 0.0083 | 0.0203 | 0.0143 |
### Framework versions

- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.1