---
library_name: transformers
license: cc-by-nc-4.0
tags:
- depth
- relative depth
pipeline_tag: depth-estimation
widget:
- inference: false
---

# Depth Anything V2 Base – Transformers Version

Depth Anything V2 is trained on 595K synthetic labeled images and 62M+ real unlabeled images, providing the most capable monocular depth estimation (MDE) model with the following features:
- more fine-grained details than Depth Anything V1
- more robust than Depth Anything V1 and SD-based models (e.g., Marigold, Geowizard)
- more efficient (10x faster) and more lightweight than SD-based models
- impressive fine-tuned performance with our pre-trained models

This model checkpoint is compatible with the Hugging Face `transformers` library.

Depth Anything V2 was introduced in [the paper of the same name](https://arxiv.org/abs/2406.09414) by Lihe Yang et al. It uses the same architecture as the original Depth Anything release, but uses synthetic data and a larger-capacity teacher model to achieve much finer and more robust depth predictions. The original Depth Anything model was introduced in the paper [Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data](https://arxiv.org/abs/2401.10891) by Lihe Yang et al., and was first released in [this repository](https://github.com/LiheYoung/Depth-Anything).

[Online demo](https://huggingface.co/spaces/depth-anything/Depth-Anything-V2).

## Model description

Depth Anything V2 leverages the [DPT](https://huggingface.co/docs/transformers/model_doc/dpt) architecture with a [DINOv2](https://huggingface.co/docs/transformers/model_doc/dinov2) backbone.

The model is trained on ~600K synthetic labeled images and ~62 million real unlabeled images, obtaining state-of-the-art results for both relative and absolute depth estimation.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/depth_anything_overview.jpg"
alt="drawing" width="600"/>

<small> Depth Anything overview. Taken from the <a href="https://arxiv.org/abs/2401.10891">original paper</a>.</small>

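To see how these pieces fit together, you can inspect the model's configuration, which nests a DINOv2 backbone config inside the Depth Anything (DPT-style) config. A minimal sketch, assuming the `depth-anything/Depth-Anything-V2-Base-hf` checkpoint described by this card:

```python
from transformers import AutoConfig

# Load only the configuration (no weights) to inspect the architecture
config = AutoConfig.from_pretrained("depth-anything/Depth-Anything-V2-Base-hf")

print(type(config).__name__)                  # Depth Anything model config
print(type(config.backbone_config).__name__)  # nested DINOv2 backbone config
print(config.backbone_config.hidden_size)     # backbone width of this variant
```
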
## Intended uses & limitations

You can use the raw model for tasks like zero-shot depth estimation. See the [model hub](https://huggingface.co/models?search=depth-anything) to look for other versions on a task that interests you.

### How to use

Here is how to use this model to perform zero-shot depth estimation:

```python
from transformers import pipeline
from PIL import Image
import requests

# load the depth estimation pipeline
pipe = pipeline(task="depth-estimation", model="depth-anything/Depth-Anything-V2-Base-hf")

# load image
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

# inference
depth = pipe(image)["depth"]
```
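
The pipeline output is a dictionary: the `"depth"` entry (used above) is a rendered PIL image, and `"predicted_depth"` holds the raw tensor. A minimal follow-up, with an arbitrary output filename:

```python
result = pipe(image)
result["depth"].save("depth_map.png")   # rendered depth map as a PIL image
print(result["predicted_depth"].shape)  # raw torch.Tensor of predictions
```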

Alternatively, you can use the model and processor classes:

```python
from transformers import AutoImageProcessor, AutoModelForDepthEstimation
import torch
import numpy as np
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("depth-anything/Depth-Anything-V2-Base-hf")
model = AutoModelForDepthEstimation.from_pretrained("depth-anything/Depth-Anything-V2-Base-hf")

# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
    predicted_depth = outputs.predicted_depth

# interpolate to original size
prediction = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1),
    size=image.size[::-1],
    mode="bicubic",
    align_corners=False,
)
```
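
At this point `prediction` is a float tensor of relative depth at the original image resolution. A common way to visualize it (not part of the original snippet; the filename is arbitrary) is to rescale it to 8-bit grayscale:

```python
# normalize the relative depth to [0, 255] and save as a grayscale image
output = prediction.squeeze().cpu().numpy()
formatted = (output * 255 / np.max(output)).astype("uint8")
Image.fromarray(formatted).save("depth_map.png")
```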

For more code examples, please refer to the [documentation](https://huggingface.co/docs/transformers/main/model_doc/depth_anything).

### Citation

```bibtex
@misc{yang2024depth,
      title={Depth Anything V2},
      author={Lihe Yang and Bingyi Kang and Zilong Huang and Zhen Zhao and Xiaogang Xu and Jiashi Feng and Hengshuang Zhao},
      year={2024},
      eprint={2406.09414},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```