---
language:
- en
thumbnail: "https://www.onebraveidea.org/wp-content/uploads/2019/07/OBI-Logo-Website.png"
tags:
- deidentification
- medical notes
- ehr
- phi
datasets:
- I2B2
metrics:
- F1
- Recall
- Precision
widget:
- text: "Physician Discharge Summary Admit date: 10/12/1982 Discharge date: 10/22/1982 Patient Information Jack Reacher, 54 y.o. male (DOB = 1/21/1928)."
- text: "Home Address: 123 Park Drive, San Diego, CA, 03245. Home Phone: 202-555-0199 (home)."
- text: "Hospital Care Team Service: Orthopedics Inpatient Attending: Roger C Kelly, MD Attending phys phone: (634)743-5135 Discharge Unit: HCS843 Primary Care Physician: Hassan V Kim, MD 512-832-5025."
license: mit
---

# Model Description

* A RoBERTa [[Liu et al., 2019]](https://arxiv.org/pdf/1907.11692.pdf) model fine-tuned for de-identification of medical notes.
* Sequence labeling (token classification): the model was trained to predict protected health information (PHI/PII) entities (spans). The list of protected health information categories is given by [HIPAA](https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/index.html).
* A token is classified either as non-PHI or as one of the 11 PHI types. Token predictions are aggregated into spans using BILOU tagging; a toy example follows this list.
* The PHI labels that were used for training, and other details, can be found here: [Annotation Guidelines](https://github.com/obi-ml-public/ehr_deidentification/blob/master/AnnotationGuidelines.md).
* More details on how to use this model, the expected data format, and other useful information are available in the GitHub repo: [Robust DeID](https://github.com/obi-ml-public/ehr_deidentification).
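
As a toy illustration of BILOU aggregation (this is not the repo's implementation, and the tokens and tags below are made up): B, I, and L mark the beginning, inside, and last token of a multi-token span, U marks a single-token span, and O marks non-PHI.

```python
# Toy BILOU-to-span aggregation; token and tag values are illustrative only.
tokens = ["Roger", "C", "Kelly", "examined", "Jack"]
tags = ["B-STAFF", "I-STAFF", "L-STAFF", "O", "U-PATIENT"]

spans, start = [], None
for i, tag in enumerate(tags):
    if tag.startswith("B-"):
        start = i  # open a multi-token span
    elif tag.startswith("L-") and start is not None:
        spans.append((start, i, tag[2:]))  # close the span (inclusive indices)
        start = None
    elif tag.startswith("U-"):
        spans.append((i, i, tag[2:]))  # single-token span

print(spans)  # [(0, 2, 'STAFF'), (4, 4, 'PATIENT')]
```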


# How to use

* A demo of how the model works (using model predictions to de-identify a medical note) is available in this space: [Medical-Note-Deidentification](https://huggingface.co/spaces/obi/Medical-Note-Deidentification).
* Steps on how this model can be used to run a forward pass can be found here: [Forward Pass](https://github.com/obi-ml-public/ehr_deidentification/tree/master/steps/forward_pass).
* In brief, the steps are:
  * Sentencize (the model aggregates the sentences back to the note level) and tokenize the dataset.
  * Use the predict function of this model to gather the predictions (i.e., predictions for each token).
  * Additionally, the model predictions can be used to remove PHI from the original note/text, as in the sketch below.
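
A minimal sketch of a forward pass and PHI removal using the generic Hugging Face `pipeline` API. The model id below is an assumption based on this card's organization; the repo's forward-pass scripts use their own BILOU-aware span aggregation, which the pipeline's built-in IOB-oriented strategies only approximate.

```python
from transformers import pipeline

# Assumed model id -- substitute the actual path of this model card.
deid = pipeline(
    "token-classification",
    model="obi/deid_roberta_i2b2",
    aggregation_strategy="simple",  # approximate span merging; see note above
)

note = "Jack Reacher was seen by Roger C Kelly, MD on 10/12/1982."
entities = deid(note)

# Redact the note by splicing a type placeholder over each predicted span,
# working right to left so earlier character offsets stay valid.
for ent in sorted(entities, key=lambda e: e["start"], reverse=True):
    note = note[: ent["start"]] + f"<<{ent['entity_group']}>>" + note[ent["end"] :]
print(note)
```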


# Dataset

* The I2B2 2014 [[Stubbs and Uzuner, 2015]](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4978170/) dataset was used to train this model.

PHI label distribution in the I2B2 train set (790 notes) and test set (514 notes):

| PHI LABEL | TRAIN COUNT | TRAIN PERCENTAGE | TEST COUNT | TEST PERCENTAGE |
| --------- | ----------- | ---------------- | ---------- | --------------- |
| DATE      | 7502        | 43.69            | 4980       | 44.14           |
| STAFF     | 3149        | 18.34            | 2004       | 17.76           |
| HOSP      | 1437        | 8.37             | 875        | 7.76            |
| AGE       | 1233        | 7.18             | 764        | 6.77            |
| LOC       | 1206        | 7.02             | 856        | 7.59            |
| PATIENT   | 1316        | 7.66             | 879        | 7.79            |
| PHONE     | 317         | 1.85             | 217        | 1.92            |
| ID        | 881         | 5.13             | 625        | 5.54            |
| PATORG    | 124         | 0.72             | 82         | 0.73            |
| EMAIL     | 4           | 0.02             | 1          | 0.01            |
| OTHERPHI  | 2           | 0.01             | 0          | 0               |
| TOTAL     | 17171       | 100              | 11283      | 100             |


# Training procedure

* Steps on how this model was trained can be found here: [Training](https://github.com/obi-ml-public/ehr_deidentification/tree/master/steps/train). The `model_name_or_path` argument was set to `roberta-large`.
* The dataset was sentencized with spaCy's en_core_sci_sm sentencizer.
* The dataset was then tokenized with a custom tokenizer built on top of spaCy's en_core_sci_sm tokenizer.
* For each sentence, we added 32 tokens on the left (from previous sentences) and 32 tokens on the right (from the following sentences); a sketch follows this list.
* The added tokens are not used for learning, i.e., the loss is not computed on them; they serve purely as additional context.
* Each sequence contained a maximum of 128 tokens (including the added context tokens). Longer sequences were split.
* The sentencized and tokenized dataset, with token-level labels in BILOU notation, was used to train the model.
* The model is fine-tuned from a pre-trained RoBERTa model.
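
A minimal sketch of this context-window construction, assuming each note is already split into tokenized sentences with aligned label lists (the function and names are illustrative, not the repo's code):

```python
IGNORE_INDEX = -100  # label value ignored by PyTorch's cross-entropy loss

def build_example(sentences, labels, idx, context=32):
    """Wrap sentences[idx] with up to `context` tokens on each side."""
    left = [t for s in sentences[:idx] for t in s][-context:]      # previous tokens
    right = [t for s in sentences[idx + 1:] for t in s][:context]  # following tokens
    tokens = left + sentences[idx] + right
    # Context tokens get IGNORE_INDEX so no loss is computed on them.
    tags = [IGNORE_INDEX] * len(left) + labels[idx] + [IGNORE_INDEX] * len(right)
    return tokens, tags
```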

* Training details:
  * Input sequence length: 128
  * Batch size: 32 (16 with 2 gradient accumulation steps)
  * Optimizer: AdamW
  * Learning rate: 5e-5
  * Dropout: 0.1
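
For reference, these hyperparameters map onto Hugging Face `TrainingArguments` roughly as below (a sketch, not the repo's exact configuration; `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deid_roberta_i2b2",  # placeholder path
    per_device_train_batch_size=16,
    gradient_accumulation_steps=2,   # effective batch size of 32
    learning_rate=5e-5,
    # AdamW is the Trainer's default optimizer; the 0.1 dropout comes from
    # the pre-trained RoBERTa config rather than from a training argument.
)
```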


# Results

# Questions?

Post a GitHub issue on the repo: [Robust DeID](https://github.com/obi-ml-public/ehr_deidentification).