Tags: Video Classification · Transformers · PyTorch · English · xclip · feature-extraction · vision · Eval Results (legacy)
Instructions for using microsoft/xclip-large-patch14-16-frames with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use microsoft/xclip-large-patch14-16-frames with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("video-classification", model="microsoft/xclip-large-patch14-16-frames")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModel

processor = AutoProcessor.from_pretrained("microsoft/xclip-large-patch14-16-frames")
model = AutoModel.from_pretrained("microsoft/xclip-large-patch14-16-frames")
```

- Notebooks
- Google Colab
- Kaggle
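The "16-frames" suffix in the checkpoint name indicates the model is trained on 16 frames sampled per clip, so frames fed to the processor are usually picked evenly across the video. A minimal sketch of uniform frame-index sampling, assuming the helper name is illustrative and not part of the Transformers API:

```python
def sample_frame_indices(num_video_frames: int, clip_len: int = 16) -> list[int]:
    """Return `clip_len` evenly spaced frame indices covering the whole video."""
    if num_video_frames <= clip_len:
        # Short video: take every frame, repeating the last index as padding.
        return [min(i, num_video_frames - 1) for i in range(clip_len)]
    step = (num_video_frames - 1) / (clip_len - 1)
    return [round(i * step) for i in range(clip_len)]

# e.g. a 120-frame video -> 16 indices spanning frames 0 through 119
indices = sample_frame_indices(120)
print(indices[0], indices[-1], len(indices))  # 0 119 16
```

The selected frames can then be passed to the processor's `videos` argument as a list of images.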
- Xet hash: 41f4308f8b4faecc9e26e75433aa61ea55583dc09edc3b909d82d6be6a49b812
- Size of remote file: 2.3 GB
- SHA256: dd3f78a70f684b572ed01102bcc6abca1dce342802e7067869d32931deca7abd
Xet efficiently stores large files inside Git by intelligently splitting them into unique chunks, accelerating uploads and downloads.