---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
- NER
- crypto
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-ner-crypto
  results: []
widget:
- text: "Didn't I tell you that that was a decent entry point on $PROPHET? If you are in - congrats, Prophet is up 90% in the last 2 weeks and 50% up in the last week alone"
pipeline_tag: token-classification
---

# cryptoNER

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base), trained on the custom cryptocurrency dataset described below.
It achieves the following results on the evaluation set:
- Loss: 0.0058
- F1: 0.9970

## Model description

This model is a fine-tuned version of xlm-roberta-base, specialized in Named Entity Recognition (NER) for the cryptocurrency domain. It is optimized to recognize and classify entities such as cryptocurrency ticker symbols (TICKER SYMBOL), token names (NAME), and blockscanner addresses (ADDRESS) within text.

## Intended uses

Designed primarily for NER tasks in the cryptocurrency sector, this model excels at identifying and categorizing ticker symbols, token names, and blockscanner addresses in textual content.
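
A minimal inference sketch using the `transformers` pipeline. The model id below is a placeholder for this repository's Hub path, and the entity labels in the output depend on the label set used at fine-tuning time:

```python
from transformers import pipeline

# Placeholder model id; replace with this repository's Hub path.
ner = pipeline(
    "token-classification",
    model="xlm-roberta-base-finetuned-ner-crypto",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

text = "Didn't I tell you that that was a decent entry point on $PROPHET?"
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```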

## Limitations

Performance may degrade on entities that fall outside the training data or occur infrequently in the cryptocurrency domain. The model may also be sensitive to variations in how entities are written and to their surrounding context.

## Training and evaluation data

The model was trained on a mix of artificially generated tweets and ERC20 token metadata fetched through the [Covalent API](https://www.covalenthq.com/docs/unified-api/). GPT was used to generate 500 synthetic tweets tailored to the cryptocurrency domain, and the Covalent API supplied more than 20K unique ERC20 token metadata entries, strengthening the model's recognition of cryptocurrency entities.
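
For illustration, here is one training example in standard token-classification (BIO) form; the label names are assumptions based on the entity types described above, not a verified label set:

```python
# Hypothetical BIO-tagged example; the labels (TICKER, NAME, ADDRESS) are
# illustrative assumptions matching the entity types this model targets.
example = {
    "tokens": ["Prophet", "is", "up", "90", "%", "-", "buy", "$PROPHET"],
    "ner_tags": ["B-NAME", "O", "O", "O", "O", "O", "O", "B-TICKER"],
}
```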

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (sketched as `TrainingArguments` below the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
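
A hedged sketch of these settings as Hugging Face `TrainingArguments`; `output_dir` and the per-epoch evaluation strategy are assumptions, not taken from the card:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-ner-crypto",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=6,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumption: the card reports per-epoch eval
)
```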

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0269        | 1.0   | 750  | 0.0080          | 0.9957 |
| 0.0049        | 2.0   | 1500 | 0.0074          | 0.9960 |
| 0.0042        | 3.0   | 2250 | 0.0074          | 0.9965 |
| 0.0034        | 4.0   | 3000 | 0.0058          | 0.9971 |
| 0.0028        | 5.0   | 3750 | 0.0059          | 0.9971 |
| 0.0024        | 6.0   | 4500 | 0.0058          | 0.9970 |
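
The F1 above is presumably entity-level F1; below is a typical `compute_metrics` sketch using the `seqeval` metric from the `evaluate` library. This is an assumption — the card does not state how the metric was computed, and the label list is hypothetical:

```python
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")  # requires `pip install seqeval`
# Hypothetical label list matching the entity types described above.
label_list = ["O", "B-TICKER", "I-TICKER", "B-NAME", "I-NAME", "B-ADDRESS", "I-ADDRESS"]

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Drop special/sub-word positions, which the Trainer labels -100.
    true_preds = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_preds, references=true_labels)
    return {"f1": results["overall_f1"]}
```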

### Framework versions

- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1