---
license: apache-2.0
datasets:
- sentence-transformers/msmarco
language:
- en
base_model:
- cross-encoder/ms-marco-MiniLM-L12-v2
pipeline_tag: text-ranking
library_name: sentence-transformers
tags:
- transformers
---
# Cross-Encoder for MS Marco

This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.

The model can be used for Information Retrieval: given a query, score the query against all candidate passages (e.g. retrieved with Elasticsearch), then sort the passages in decreasing order of score; a short re-ranking sketch follows the first usage example below. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/cross_encoder/training/ms_marco)


## Usage with SentenceTransformers

Using the model is easy once [SentenceTransformers](https://www.sbert.net/) is installed:
```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('cross-encoder/ms-marco-MiniLM-L6-v2')

# Score (query, passage) pairs; higher scores indicate higher relevance
scores = model.predict([
    ("How many people live in Berlin?", "Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers."),
    ("How many people live in Berlin?", "Berlin is well known for its museums."),
])
print(scores)
# [ 8.607138 -4.320078]
```
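
To re-rank retrieved passages, sort the candidates by these scores. Recent versions of sentence-transformers also ship a `CrossEncoder.rank()` helper that scores and sorts in one call; here is a minimal sketch, assuming the candidate passages have already been retrieved by a first-stage system:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('cross-encoder/ms-marco-MiniLM-L6-v2')

query = "How many people live in Berlin?"
# In practice these would come from a first-stage retriever such as Elasticsearch
passages = [
    "Berlin is well known for its museums.",
    "Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.",
]

# rank() scores every (query, passage) pair and returns the passages
# sorted by decreasing relevance score
for hit in model.rank(query, passages, return_documents=True):
    print(f"{hit['score']:.2f}\t{hit['text']}")
```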


## Usage with Transformers

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/ms-marco-MiniLM-L6-v2')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/ms-marco-MiniLM-L6-v2')

# Tokenize query-passage pairs together; each pair yields one relevance logit
features = tokenizer(
    ['How many people live in Berlin?', 'How many people live in Berlin?'],
    ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.',
     'New York City is famous for the Metropolitan Museum of Art.'],
    padding=True, truncation=True, return_tensors="pt",
)

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    print(scores)
```
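
The outputs are raw logits, one per query-passage pair. Continuing from the snippet above, if you prefer scores in the range (0, 1) you can pass the logits through a sigmoid; this is optional post-processing and does not change the ranking order:

```python
# 'scores' holds the logits computed in the previous snippet
probabilities = torch.sigmoid(scores)
print(probabilities)
```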


## Performance

In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) datasets.

| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- | :------------- | ----- | --- |
| **Version 2 models** | | | |
| cross-encoder/ms-marco-TinyBERT-L2-v2 | 69.84 | 32.56 | 9000 |
| cross-encoder/ms-marco-MiniLM-L2-v2 | 71.01 | 34.85 | 4100 |
| cross-encoder/ms-marco-MiniLM-L4-v2 | 73.04 | 37.70 | 2500 |
| cross-encoder/ms-marco-MiniLM-L6-v2 | 74.30 | 39.01 | 1800 |
| cross-encoder/ms-marco-MiniLM-L12-v2 | 74.31 | 39.02 | 960 |
| **Version 1 models** | | | |
| cross-encoder/ms-marco-TinyBERT-L2 | 67.43 | 30.15 | 9000 |
| cross-encoder/ms-marco-TinyBERT-L4 | 68.09 | 34.50 | 2900 |
| cross-encoder/ms-marco-TinyBERT-L6 | 69.57 | 36.13 | 680 |
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 |
| **Other models** | | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 |
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 |
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 |
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 |
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 |
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720 |

Note: Runtime was computed on a V100 GPU.
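
Throughput depends heavily on hardware and batch size, so the Docs / Sec column is indicative rather than exact. As a rough, hypothetical illustration (not the benchmark script used for the table above), per-model throughput can be estimated along these lines:

```python
import time

from sentence_transformers import CrossEncoder

model = CrossEncoder('cross-encoder/ms-marco-MiniLM-L6-v2')

# Hypothetical benchmark: 1,000 copies of a single (query, passage) pair;
# a realistic measurement would use varied queries and passage lengths
pairs = [("How many people live in Berlin?",
          "Berlin is well known for its museums.")] * 1000

start = time.perf_counter()
model.predict(pairs, batch_size=32)
elapsed = time.perf_counter() - start
print(f"{len(pairs) / elapsed:.0f} docs/sec")
```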