---
license: apache-2.0
tags:
- object-detection
- vision
datasets:
- coco
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg
  example_title: Savanna
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
  example_title: Football Match
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
  example_title: Airport
---

# DETR (End-to-End Object Detection) model with ResNet-50 backbone

DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr).

Disclaimer: The team releasing DETR did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and an MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.
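
To make this concrete, here is a minimal sketch (assuming the `facebook/detr-resnet-50` checkpoint used in the example below) that inspects the raw decoder outputs: one class logit vector and one bounding box per object query.

```python
import torch
from transformers import DetrForObjectDetection

model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50", revision="no_timm")

# run a dummy 3-channel image through the model
with torch.no_grad():
    outputs = model(pixel_values=torch.randn(1, 3, 800, 800))

# one row per object query: 100 queries, 91 COCO labels + 1 "no object" class
print(outputs.logits.shape)      # torch.Size([1, 100, 92])
# one normalized (center_x, center_y, width, height) box per query
print(outputs.pred_boxes.shape)  # torch.Size([1, 100, 4])
```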

The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, the remaining 96 annotations simply have "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU losses (for the bounding boxes) are used to optimize the parameters of the model.
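
As a toy illustration of the matching step (using `scipy`, not the actual DETR training code): `scipy.optimize.linear_sum_assignment` implements the Hungarian algorithm, and given a queries-by-ground-truth cost matrix it returns the one-to-one assignment with minimal total cost.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# toy cost matrix: 5 object queries (rows) x 3 ground-truth objects (columns);
# in DETR the cost combines class probability, L1 box distance and generalized IoU
cost = np.array([
    [0.9, 0.2, 0.8],
    [0.1, 0.7, 0.9],
    [0.8, 0.9, 0.3],
    [0.6, 0.5, 0.7],
    [0.7, 0.8, 0.6],
])

query_idx, target_idx = linear_sum_assignment(cost)
# each ground-truth object is matched to exactly one query; the remaining
# queries are supervised with the "no object" class
print([(int(q), int(t)) for q, t in zip(query_idx, target_idx)])  # [(0, 1), (1, 0), (2, 2)]
```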

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/detr_architecture.png)

## Intended uses & limitations

You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models.
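
If you just want quick predictions without writing the pre- and post-processing yourself, the high-level `pipeline` API also works with this checkpoint (a minimal sketch; the image URL is the COCO example used below):

```python
from transformers import pipeline

detector = pipeline("object-detection", model="facebook/detr-resnet-50")
# returns a list of dicts with "label", "score" and a pixel-coordinate "box"
print(detector("http://images.cocodataset.org/val2017/000000039769.jpg"))
```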

### How to use

Here is how to use this model:

```python
from transformers import DetrImageProcessor, DetrForObjectDetection
import torch
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# you can specify the revision tag if you don't want the timm dependency
processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50", revision="no_timm")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50", revision="no_timm")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# convert outputs (bounding boxes and class logits) to COCO API format
# let's only keep detections with score > 0.9
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.9)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} with confidence "
        f"{round(score.item(), 3)} at location {box}"
    )
```

This should output:
```
Detected remote with confidence 0.998 at location [40.16, 70.81, 175.55, 117.98]
Detected remote with confidence 0.996 at location [333.24, 72.55, 368.33, 187.66]
Detected couch with confidence 0.995 at location [-0.02, 1.15, 639.73, 473.76]
Detected cat with confidence 0.999 at location [13.24, 52.05, 314.02, 470.93]
Detected cat with confidence 0.999 at location [345.4, 23.85, 640.37, 368.72]
```
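
Under the hood, DETR predicts each box as a normalized `(center_x, center_y, width, height)` tuple; `post_process_object_detection` rescales these to absolute `(x_min, y_min, x_max, y_max)` pixel coordinates. A sketch of the equivalent conversion for a single box (for illustration only):

```python
def cxcywh_to_xyxy(box, img_width, img_height):
    """Convert a normalized (cx, cy, w, h) box to absolute (x_min, y_min, x_max, y_max)."""
    cx, cy, w, h = box
    return (
        (cx - w / 2) * img_width,
        (cy - h / 2) * img_height,
        (cx + w / 2) * img_width,
        (cy + h / 2) * img_height,
    )

# e.g. a centered box spanning half the width and height of a 640x480 image
print(cxcywh_to_xyxy((0.5, 0.5, 0.5, 0.5), 640, 480))  # (160.0, 120.0, 480.0, 360.0)
```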

Currently, both the image processor and model support PyTorch.

## Training data

The DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.

## Training procedure

### Preprocessing

The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/detr/blob/master/datasets/coco.py).

Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).
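
As an illustrative sketch of that resizing rule (not the library's own implementation): scale so the shorter side reaches 800 pixels, then shrink further if the longer side would exceed 1333 pixels.

```python
def detr_resize_hw(height, width, shortest=800, longest=1333):
    """Target (height, width): shortest side ~800, capped so the longest side stays <= 1333."""
    scale = shortest / min(height, width)
    if max(height, width) * scale > longest:
        scale = longest / max(height, width)
    return round(height * scale), round(width * scale)

# a wide 480x1280 image is capped by the longest-side constraint
print(detr_resize_hw(480, 1280))  # (500, 1333)
```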

### Training

The model was trained for 300 epochs on 16 V100 GPUs, which took 3 days, with 4 images per GPU (hence a total batch size of 64).

## Evaluation results

This model achieves an AP (average precision) of **42.0** on COCO 2017 validation. For more details regarding evaluation results, we refer to Table 1 of the original paper.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2005-12872,
  author    = {Nicolas Carion and
               Francisco Massa and
               Gabriel Synnaeve and
               Nicolas Usunier and
               Alexander Kirillov and
               Sergey Zagoruyko},
  title     = {End-to-End Object Detection with Transformers},
  journal   = {CoRR},
  volume    = {abs/2005.12872},
  year      = {2020},
  url       = {https://arxiv.org/abs/2005.12872},
  archivePrefix = {arXiv},
  eprint    = {2005.12872},
  timestamp = {Thu, 28 May 2020 17:38:09 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```