---
license: apache-2.0
tags:
- vision
- depth-estimation
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
model-index:
- name: dpt-hybrid-midas
  results:
  - task:
      type: monocular-depth-estimation
      name: Monocular Depth Estimation
    dataset:
      type: MIX-6
      name: MIX-6
    metrics:
    - type: Zero-shot transfer
      value: 11.06
      name: Zero-shot transfer
      config: Zero-shot transfer
      verified: false
---

## Model Details: DPT-Hybrid (also known as MiDaS 3.0)

The Dense Prediction Transformer (DPT) model was trained on 1.4 million images for monocular depth estimation.
It was introduced in the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by Ranftl et al. (2021) and first released in [this repository](https://github.com/isl-org/DPT).
DPT uses the Vision Transformer (ViT) as its backbone and adds a neck and head on top for monocular depth estimation.

This repository hosts the "hybrid" version of the model, as described in the paper. DPT-Hybrid differs from DPT by using [ViT-hybrid](https://huggingface.co/google/vit-hybrid-base-bit-384) as its backbone and by taking some activations from intermediate layers of that backbone.

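To see this structure in code, the sketch below lists the model's top-level submodules. The exact module names depend on your `transformers` version, so treat the printed names as illustrative rather than a stable API.

```python
from transformers import DPTForDepthEstimation

model = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas")

# print the top-level submodules; in recent transformers releases these
# correspond to the ViT-hybrid backbone, the fusion neck, and the depth head
for name, child in model.named_children():
    print(name, "->", type(child).__name__)
```
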
This model card was written jointly by the Hugging Face team and Intel.

| Model Detail | Description |
| ----------- | ----------- |
| Model Authors - Company | Intel |
| Date | December 22, 2022 |
| Version | 1 |
| Type | Computer Vision - Monocular Depth Estimation |
| Paper or Other Resources | [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) and [GitHub Repo](https://github.com/isl-org/DPT) |
| License | Apache 2.0 |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/dpt-hybrid-midas/discussions) and [Intel Developers Discord](https://discord.gg/rv2Gp55UJQ) |

| Intended Use | Description |
| ----------- | ----------- |
| Primary intended uses | You can use the raw model for zero-shot monocular depth estimation. See the [model hub](https://huggingface.co/models?search=dpt) to look for fine-tuned versions on a task that interests you. |
| Primary intended users | Anyone doing monocular depth estimation |
| Out-of-scope uses | In most cases, this model will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people. |

### How to use

Here is how to use this model for zero-shot depth estimation on an image:

```python
from PIL import Image
import numpy as np
import requests
import torch

from transformers import DPTImageProcessor, DPTForDepthEstimation

image_processor = DPTImageProcessor.from_pretrained("Intel/dpt-hybrid-midas")
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas", low_cpu_mem_usage=True)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
    predicted_depth = outputs.predicted_depth

# interpolate to original size
prediction = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1),
    size=image.size[::-1],
    mode="bicubic",
    align_corners=False,
)

# visualize the prediction
output = prediction.squeeze().cpu().numpy()
formatted = (output * 255 / np.max(output)).astype("uint8")
depth = Image.fromarray(formatted)
depth.show()
```
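
Alternatively, a higher-level route is the `depth-estimation` pipeline, which wraps the pre- and post-processing shown above. This is a minimal sketch assuming a recent `transformers` release, in which the pipeline returns a dict with a `"depth"` PIL image and the raw `"predicted_depth"` tensor:

```python
from transformers import pipeline

# the pipeline handles resizing, inference, and interpolation internally
pipe = pipeline("depth-estimation", model="Intel/dpt-hybrid-midas")

result = pipe("http://images.cocodataset.org/val2017/000000039769.jpg")
result["depth"].show()                   # depth map as a PIL image
print(result["predicted_depth"].shape)   # raw depth tensor
```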

For more code examples, see the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/dpt).

| Factors | Description |
| ----------- | ----------- |
| Groups | Multiple datasets compiled together |
| Instrumentation | - |
| Environment | Inference was completed on an Intel Xeon Platinum 8280 CPU @ 2.70 GHz with 8 physical cores and an NVIDIA RTX 2080 GPU. |
| Card Prompts | Deploying the model on alternate hardware and software will change model performance. |

| Metrics | Description |
| ----------- | ----------- |
| Model performance measures | Zero-shot transfer |
| Decision thresholds | - |
| Approaches to uncertainty and variability | - |

| Training and Evaluation Data | Description |
| ----------- | ----------- |
| Datasets | The dataset is called MIX 6 and contains around 1.4M images. The model was initialized with ImageNet-pretrained weights. |
| Motivation | To build a robust monocular depth prediction network |
| Preprocessing | "We resize the image such that the longer side is 384 pixels and train on random square crops of size 384. ... We perform random horizontal flips for data augmentation." See [Ranftl et al. (2021)](https://arxiv.org/abs/2103.13413) for more details. |

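As a concrete illustration of the quoted augmentation recipe, here is a minimal torchvision sketch. The helper names are ours, not from the DPT codebase, and padding the shorter side up to 384 before cropping is our assumption about how full square crops are obtained.

```python
import torchvision.transforms as T

def make_train_transform(size=384):
    """Resize so the longer side is `size`, then random square crop + flip."""
    def resize_longer_side(img):
        w, h = img.size
        scale = size / max(w, h)
        return img.resize((round(w * scale), round(h * scale)))

    return T.Compose([
        T.Lambda(resize_longer_side),
        # pad_if_needed pads the shorter side so a full 384x384 crop exists
        T.RandomCrop(size, pad_if_needed=True),
        T.RandomHorizontalFlip(p=0.5),
        T.ToTensor(),
    ])
```
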
## Quantitative Analyses

| Model | Training set | DIW WHDR | ETH3D AbsRel | Sintel AbsRel | KITTI δ>1.25 | NYU δ>1.25 | TUM δ>1.25 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| DPT - Large | MIX 6 | 10.82 (-13.2%) | 0.089 (-31.2%) | 0.270 (-17.5%) | 8.46 (-64.6%) | 8.32 (-12.9%) | 9.97 (-30.3%) |
| DPT - Hybrid | MIX 6 | 11.06 (-11.2%) | 0.093 (-27.6%) | 0.274 (-16.2%) | 11.56 (-51.6%) | 8.69 (-9.0%) | 10.89 (-23.2%) |
| MiDaS | MIX 6 | 12.95 (+3.9%) | 0.116 (-10.5%) | 0.329 (+0.5%) | 16.08 (-32.7%) | 8.71 (-8.8%) | 12.51 (-12.5%) |
| MiDaS [30] | MIX 5 | 12.46 | 0.129 | 0.327 | 23.90 | 9.55 | 14.29 |
| Li [22] | MD [22] | 23.15 | 0.181 | 0.385 | 36.29 | 27.52 | 29.54 |
| Li [21] | MC [21] | 26.52 | 0.183 | 0.405 | 47.94 | 18.57 | 17.71 |
| Wang [40] | WS [40] | 19.09 | 0.205 | 0.390 | 31.92 | 29.57 | 20.18 |
| Xian [45] | RW [45] | 14.59 | 0.186 | 0.422 | 34.08 | 27.00 | 25.02 |
| Casser [5] | CS [8] | 32.80 | 0.235 | 0.422 | 21.15 | 39.58 | 37.18 |

Table 1. Comparison to the state of the art on monocular depth estimation. Zero-shot cross-dataset transfer is evaluated according to the protocol defined in [30]. Relative performance is computed with respect to the original MiDaS model [30]. Lower is better for all metrics. ([Ranftl et al., 2021](https://arxiv.org/abs/2103.13413))
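
As a reading aid for the column headers, here is a minimal sketch (our own, not code from the DPT repository) of two of the metric families in the table: absolute relative error (AbsRel) and the percentage of pixels whose predicted-to-ground-truth depth ratio exceeds a threshold (the δ>1.25 columns, where lower is better because it counts bad pixels):

```python
import numpy as np

def abs_rel(pred, gt):
    """Mean absolute relative error: mean(|pred - gt| / gt)."""
    return np.mean(np.abs(pred - gt) / gt)

def bad_pixel_percentage(pred, gt, threshold=1.25):
    """Percentage of pixels where max(pred/gt, gt/pred) > threshold."""
    ratio = np.maximum(pred / gt, gt / pred)
    return 100.0 * np.mean(ratio > threshold)
```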

| Ethical Considerations | Description |
| ----------- | ----------- |
| Data | The training data come from multiple image datasets compiled together. |
| Human life | The model is not intended to inform decisions central to human life or flourishing. It was trained on an aggregated set of monocular depth image datasets. |
| Mitigations | No additional risk mitigation strategies were considered during model development. |
| Risks and harms | The extent of the risks involved in using the model remains unknown. |
| Use cases | - |

| Caveats and Recommendations |
| ----------- |
| Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. There are no additional caveats or recommendations for this model. |

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2103-13413,
  author     = {Ren{\'{e}} Ranftl and
                Alexey Bochkovskiy and
                Vladlen Koltun},
  title      = {Vision Transformers for Dense Prediction},
  journal    = {CoRR},
  volume     = {abs/2103.13413},
  year       = {2021},
  url        = {https://arxiv.org/abs/2103.13413},
  eprinttype = {arXiv},
  eprint     = {2103.13413},
  timestamp  = {Wed, 07 Apr 2021 15:31:46 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2103-13413.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```