---
license: "cc-by-nc-4.0"
tags:
- vision
- video-classification
---

# VideoMAE (large-sized model, pre-trained only)

VideoMAE model pre-trained on Kinetics-400 for 1600 epochs in a self-supervised way. It was introduced in the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Tong et al. and first released in [this repository](https://github.com/MCG-NJU/VideoMAE).

Disclaimer: The team releasing VideoMAE did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

VideoMAE is an extension of [Masked Autoencoders (MAE)](https://arxiv.org/abs/2111.06377) to video. The architecture of the model is very similar to that of a standard Vision Transformer (ViT), with a decoder on top for predicting pixel values for masked patches.

Videos are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks, as well as fixed sine/cosine position embeddings, before feeding the sequence to the layers of the Transformer encoder.
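
For intuition about how many patch tokens this yields, here is a minimal sketch of the arithmetic, assuming the default configuration values also used by the code example further below (224x224 frames, 16x16 patches, 16 input frames, tubelet size 2); these defaults are an assumption, not something stated in this card:

```python
# Patch-token arithmetic, assuming default VideoMAE config values
# (image_size=224, patch_size=16, num_frames=16, tubelet_size=2).
image_size, patch_size = 224, 16
num_frames, tubelet_size = 16, 2

patches_per_frame = (image_size // patch_size) ** 2   # 14 * 14 = 196
num_tubelets = num_frames // tubelet_size             # 16 // 2 = 8
seq_length = num_tubelets * patches_per_frame         # 8 * 196 = 1568 patch tokens
print(seq_length)                                     # excludes any [CLS] token
```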

By pre-training the model, it learns an inner representation of videos that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled videos for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire video.
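
As a minimal sketch of that setup (the `num_labels=10` value and the random video below are placeholders, not part of this checkpoint), one could load the pre-trained weights into the classification architecture and fine-tune the newly initialized head on labeled videos:

```python
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification
import numpy as np
import torch

# placeholder video: 16 frames of random pixels (frames, channels, height, width)
video = list(np.random.randn(16, 3, 224, 224))

processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-large")
# num_labels=10 is a placeholder; the classification head is randomly initialized,
# since this checkpoint only contains the pre-trained encoder weights
model = VideoMAEForVideoClassification.from_pretrained("MCG-NJU/videomae-large", num_labels=10)

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits  # shape (1, num_labels); to be trained on labeled videos
```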

## Intended uses & limitations

You can use the raw model for predicting pixel values for masked patches of a video, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=videomae) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to predict pixel values for randomly masked patches:

```python
from transformers import VideoMAEImageProcessor, VideoMAEForPreTraining
import numpy as np
import torch

# 16 frames of random pixels stand in for a real video (frames, channels, height, width)
num_frames = 16
video = list(np.random.randn(num_frames, 3, 224, 224))

processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-large")
model = VideoMAEForPreTraining.from_pretrained("MCG-NJU/videomae-large")

pixel_values = processor(video, return_tensors="pt").pixel_values

# one token per (tubelet_size x patch_size x patch_size) tubelet of pixels
num_patches_per_frame = (model.config.image_size // model.config.patch_size) ** 2
seq_length = (num_frames // model.config.tubelet_size) * num_patches_per_frame

# boolean mask indicating which patches are masked out (chosen at random here)
bool_masked_pos = torch.randint(0, 2, (1, seq_length)).bool()

outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss = outputs.loss  # reconstruction loss on the masked patches
```
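
Note that the mask above is drawn uniformly at random purely for illustration. The VideoMAE paper pre-trains with tube masking at a very high masking ratio (around 90-95% of patches masked), and the reconstruction loss is computed only on the masked patches.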

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/model_doc/videomae).

## Training data

(to do, feel free to open a PR)

## Training procedure

### Preprocessing

(to do, feel free to open a PR)

### Pretraining

(to do, feel free to open a PR)

## Evaluation results

(to do, feel free to open a PR)

### BibTeX entry and citation info

```bibtex
@misc{https://doi.org/10.48550/arxiv.2203.12602,
  doi = {10.48550/ARXIV.2203.12602},
  url = {https://arxiv.org/abs/2203.12602},
  author = {Tong, Zhan and Song, Yibing and Wang, Jue and Wang, Limin},
  keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
  title = {VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```