---
license: "cc-by-nc-4.0"
tags:
- vision
- video-classification
---

# VideoMAE (base-sized model, fine-tuned on Kinetics-400)

VideoMAE model pre-trained for 1600 epochs in a self-supervised way and fine-tuned in a supervised way on Kinetics-400. It was introduced in the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Tong et al. and first released in [this repository](https://github.com/MCG-NJU/VideoMAE).

Disclaimer: The team releasing VideoMAE did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

VideoMAE is an extension of [Masked Autoencoders (MAE)](https://arxiv.org/abs/2111.06377) to video. The architecture of the model is very similar to that of a standard Vision Transformer (ViT), with a decoder on top for predicting pixel values for masked patches.

Videos are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. A [CLS] token is added to the beginning of the sequence for use in classification tasks, and fixed sine/cosine position embeddings are added before feeding the sequence to the layers of the Transformer encoder.

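If you want to check the patch and clip settings this checkpoint uses, you can inspect its configuration. A minimal sketch, assuming the attribute names of the `transformers` `VideoMAEConfig` class:

```python
from transformers import VideoMAEConfig

# Inspect the frame/patch settings of this checkpoint
config = VideoMAEConfig.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics")
print(config.image_size, config.patch_size)    # frame resolution and patch resolution
print(config.num_frames, config.tubelet_size)  # frames per clip and frames per tubelet
```
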
Through pre-training, the model learns an inner representation of videos that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled videos, for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places the linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire video.

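As an illustration of this setup, the sketch below attaches a freshly initialized classification head to a pre-trained encoder. The checkpoint name `MCG-NJU/videomae-base` (the self-supervised, not-yet-fine-tuned model) and the three-class label set are assumptions made for the example, not part of this card:

```python
from transformers import VideoMAEForVideoClassification

# Hypothetical downstream label set
labels = ["juggling", "knitting", "surfing"]

# Load the pre-trained encoder and attach a classification head sized for the new labels;
# the head is randomly initialized and would be trained on your labeled videos
model = VideoMAEForVideoClassification.from_pretrained(
    "MCG-NJU/videomae-base",  # assumed self-supervised checkpoint
    num_labels=len(labels),
    id2label={i: label for i, label in enumerate(labels)},
    label2id={label: i for i, label in enumerate(labels)},
)
```
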
## Intended uses & limitations

You can use the model as-is to classify a video into one of the 400 possible Kinetics-400 labels.

### How to use

Here is how to use this model to classify a video:

```python
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification
import numpy as np
import torch

# Dummy clip: 16 frames of 3x224x224 random values (stand-in for a real video)
video = list(np.random.randn(16, 3, 224, 224))

processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics")
model = VideoMAEForVideoClassification.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics")

inputs = processor(video, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits

# The model predicts one of the 400 Kinetics-400 classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

For more code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/videomae.html#).

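The example above uses random frames as a stand-in for a real video. Below is a minimal sketch of preparing actual frames; the `decord` package and the local file `video.mp4` are assumptions for illustration, not part of this model card:

```python
import numpy as np
from decord import VideoReader
from transformers import VideoMAEImageProcessor

# Read the video and sample 16 evenly spaced frames, the clip length the model expects
vr = VideoReader("video.mp4")
indices = np.linspace(0, len(vr) - 1, num=16).astype(np.int64)
frames = list(vr.get_batch(indices).asnumpy())  # list of 16 (H, W, 3) uint8 frames

# The processor takes care of resizing, cropping and normalization
processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics")
inputs = processor(frames, return_tensors="pt")
```
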
## Training data

(to do, feel free to open a PR)

## Training procedure

### Preprocessing

(to do, feel free to open a PR)

### Pretraining

(to do, feel free to open a PR)

## Evaluation results

This model obtains a top-1 accuracy of 80.9% and a top-5 accuracy of 94.7% on the test set of Kinetics-400.

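Top-5 accuracy counts a prediction as correct if the true label is among the five highest-scoring classes. Continuing from the `logits` and `model` of the how-to-use snippet above, a minimal sketch of reading off the top-5 predictions (an illustration, not the original evaluation code):

```python
import torch

# logits has shape (batch_size, 400); take the five highest-scoring classes
top5 = torch.topk(logits, k=5, dim=-1)
for score, idx in zip(top5.values[0], top5.indices[0]):
    print(f"{model.config.id2label[idx.item()]}: {score.item():.3f}")
```
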
### BibTeX entry and citation info

```bibtex
@misc{https://doi.org/10.48550/arxiv.2203.12602,
  doi = {10.48550/ARXIV.2203.12602},
  url = {https://arxiv.org/abs/2203.12602},
  author = {Tong, Zhan and Song, Yibing and Wang, Jue and Wang, Limin},
  keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
  title = {VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```