Instructions for using hf-tiny-model-private/tiny-random-MarkupLMForQuestionAnswering with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use hf-tiny-model-private/tiny-random-MarkupLMForQuestionAnswering with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="hf-tiny-model-private/tiny-random-MarkupLMForQuestionAnswering")
```

```python
# Load the model directly
from transformers import AutoProcessor, AutoModelForQuestionAnswering

processor = AutoProcessor.from_pretrained("hf-tiny-model-private/tiny-random-MarkupLMForQuestionAnswering")
model = AutoModelForQuestionAnswering.from_pretrained("hf-tiny-model-private/tiny-random-MarkupLMForQuestionAnswering")
```

- Notebooks
- Google Colab
- Kaggle
- Xet hash: 12cb9e5cd0a69caf2a74d5745e1608ed1007aade7dc860a0605bfa53adec8766
- Size of remote file: 6.97 MB
- SHA256: 30565c6f8bd23f21a5f43f2556307bb6ea475b5569748b05af95f1f2cd4008c3
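After downloading the file, the SHA256 listed above can be verified locally. A minimal sketch (the file path is a placeholder, not a name given on this page):

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Compute a file's SHA-256, streaming in 1 MiB blocks so large files need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()

# Compare the result against the SHA256 listed above, e.g.:
# sha256_of_file("downloaded_file.bin")  # path is illustrative
```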
Xet efficiently stores large files inside Git by intelligently splitting them into unique chunks, which deduplicates storage and accelerates uploads and downloads. More info.
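The chunking idea can be illustrated with a toy content-defined chunker and dedup store. This is a simplified sketch of the general technique, not Xet's actual algorithm; the window size, boundary mask, and hash choice below are arbitrary assumptions:

```python
# Toy content-defined chunking + dedup store. WINDOW, MASK, and the use of
# blake2b/sha256 are illustrative assumptions, not Xet's real parameters.
import hashlib

WINDOW = 16           # bytes hashed at each position (assumed)
MASK = (1 << 8) - 1   # ~256-byte average chunks; deliberately small for illustration

def chunk(data: bytes) -> list[bytes]:
    """Split data wherever the hash of the trailing WINDOW bytes matches the mask."""
    chunks, start = [], 0
    for i in range(WINDOW, len(data)):
        window_hash = hashlib.blake2b(data[i - WINDOW:i], digest_size=8).digest()
        if int.from_bytes(window_hash, "big") & MASK == 0:
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])  # final chunk runs to end of file
    return chunks

def dedup_store(files: list[bytes]):
    """Store each unique chunk once (keyed by SHA-256); return the store and per-file manifests."""
    store, manifests = {}, []
    for data in files:
        keys = []
        for c in chunk(data):
            k = hashlib.sha256(c).hexdigest()
            store.setdefault(k, c)  # duplicate chunks are stored only once
            keys.append(k)
        manifests.append(keys)
    return store, manifests
```

Because chunk boundaries depend only on local content, two files that share long identical regions reuse the same chunks, so an edited file transfers only the chunks that actually changed.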