---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: rorshark-vit-base
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9922928709055877
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# rorshark-vit-base

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0393
- Accuracy: 0.9923

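As a quick way to try the checkpoint, the sketch below runs inference with the 🤗 Transformers image-classification pipeline. The repo id `your-username/rorshark-vit-base` and the image path are placeholders, not values confirmed by this card.

```python
from transformers import pipeline

# Hypothetical Hub path -- substitute the actual repo id of this checkpoint.
classifier = pipeline("image-classification", model="your-username/rorshark-vit-base")

# Returns the top predicted labels with their scores.
predictions = classifier("path/to/image.jpg")
print(predictions)
```
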
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

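The card only records that the data were loaded with the 🤗 Datasets `imagefolder` builder. As a hedged illustration, a dataset in that format is typically loaded as below; the directory path and its class-per-subfolder layout are assumptions, not details from this card.

```python
from datasets import load_dataset

# Assumed layout: data_dir/<class_name>/<image>.jpg, from which the
# "imagefolder" builder infers one label per subdirectory.
dataset = load_dataset("imagefolder", data_dir="path/to/images")

# Inspect the inferred class names.
print(dataset["train"].features["label"].names)
```
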
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0

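For readers reproducing the run, this is a minimal sketch of how the values above map onto 🤗 Transformers `TrainingArguments`; the output directory is a placeholder, and anything not listed above (logging, evaluation strategy, and so on) is left at its default.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="rorshark-vit-base",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=1337,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
)
```
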
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0597        | 1.0   | 368  | 0.0546          | 0.9865   |
| 0.2009        | 2.0   | 736  | 0.0531          | 0.9865   |
| 0.0114        | 3.0   | 1104 | 0.0418          | 0.9904   |
| 0.0998        | 4.0   | 1472 | 0.0425          | 0.9904   |
| 0.1244        | 5.0   | 1840 | 0.0393          | 0.9923   |

### Framework versions

- Transformers 4.36.0.dev0
- PyTorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0