Image-Text-to-Text
Transformers
Safetensors
English
idefics2
multimodal
vision
text-generation-inference
Instructions to use HuggingFaceM4/idefics2-8b-base with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use HuggingFaceM4/idefics2-8b-base with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="HuggingFaceM4/idefics2-8b-base")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b-base")
model = AutoModelForImageTextToText.from_pretrained("HuggingFaceM4/idefics2-8b-base")
```

- Notebooks
- Google Colab
- Kaggle
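The Transformers snippet above only loads the model; a minimal inference sketch follows. This is a hedged example, not the model card's official recipe: the image URL and prompt are placeholder assumptions, and since idefics2-8b-base is a base (non-chat) model it is prompted as plain text completion with an inline `<image>` token.

```python
# Hedged sketch (heavy model download, so wrapped in a function and not
# invoked here). The image URL and prompt defaults are assumptions.
def complete_over_image(image_url, prompt="<image>In this image, we see"):
    import requests
    from PIL import Image
    from transformers import AutoProcessor, AutoModelForImageTextToText

    model_id = "HuggingFaceM4/idefics2-8b-base"
    processor = AutoProcessor.from_pretrained(model_id)
    model = AutoModelForImageTextToText.from_pretrained(model_id)

    # The base model is not chat-tuned, so prompt it as raw completion
    # with an inline <image> placeholder for the pixel input.
    image = Image.open(requests.get(image_url, stream=True).raw)
    inputs = processor(text=prompt, images=[image], return_tensors="pt")
    out_ids = model.generate(**inputs, max_new_tokens=50)
    return processor.batch_decode(out_ids, skip_special_tokens=True)[0]
```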
- Local Apps
- vLLM
How to use HuggingFaceM4/idefics2-8b-base with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "HuggingFaceM4/idefics2-8b-base"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "HuggingFaceM4/idefics2-8b-base",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker

```shell
docker model run hf.co/HuggingFaceM4/idefics2-8b-base
```
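The curl call above can equally be issued from Python. The sketch below only assembles and prints the same JSON payload; the actual request is commented out so the snippet can be run before any server is up (host and port match the `vllm serve` example).

```python
import json

# Same request body as the curl example against the OpenAI-compatible
# /v1/completions endpoint exposed by `vllm serve`.
payload = {
    "model": "HuggingFaceM4/idefics2-8b-base",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}
body = json.dumps(payload)
print(body)

# With a server running on the default port:
# import requests
# resp = requests.post(
#     "http://localhost:8000/v1/completions",
#     headers={"Content-Type": "application/json"},
#     data=body,
# )
# print(resp.json()["choices"][0]["text"])
```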
- SGLang
How to use HuggingFaceM4/idefics2-8b-base with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "HuggingFaceM4/idefics2-8b-base" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "HuggingFaceM4/idefics2-8b-base",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "HuggingFaceM4/idefics2-8b-base" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "HuggingFaceM4/idefics2-8b-base",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use HuggingFaceM4/idefics2-8b-base with Docker Model Runner:
```shell
docker model run hf.co/HuggingFaceM4/idefics2-8b-base
```
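Both vLLM and SGLang expose the same OpenAI-compatible completions API, so a successful response has the same shape either way. The sketch below parses a hand-written sample response; the field values are illustrative, not real model output.

```python
import json

# Illustrative response body: the values are made up, but the structure
# follows the OpenAI-compatible "text_completion" schema both servers use.
raw = """{
  "id": "cmpl-1",
  "object": "text_completion",
  "model": "HuggingFaceM4/idefics2-8b-base",
  "choices": [
    {"index": 0, "text": " there was a quiet village.", "finish_reason": "length"}
  ]
}"""

resp = json.loads(raw)
completion = resp["choices"][0]["text"]
print(completion)
```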
Commit 6e9baf5 (parent: 1cb5729): tiny fix with connector update

Changed files:
- special_tokens_map.json (+0 -7)
- tokenizer.json (+0 -9)
- tokenizer_config.json (+1 -10)
special_tokens_map.json
CHANGED

```diff
@@ -13,13 +13,6 @@
       "normalized": false,
       "rstrip": false,
       "single_word": false
-    },
-    {
-      "content": "<end_of_utterance>",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false
     }
   ],
   "bos_token": {
```
tokenizer.json
CHANGED

```diff
@@ -47,15 +47,6 @@
       "rstrip": false,
       "normalized": false,
       "special": true
-    },
-    {
-      "id": 32002,
-      "content": "<end_of_utterance>",
-      "single_word": false,
-      "lstrip": false,
-      "rstrip": false,
-      "normalized": false,
-      "special": true
     }
   ],
   "normalizer": {
```
tokenizer_config.json
CHANGED

```diff
@@ -41,20 +41,11 @@
       "rstrip": false,
       "single_word": false,
       "special": true
-    },
-    "32002": {
-      "content": "<end_of_utterance>",
-      "lstrip": false,
-      "normalized": false,
-      "rstrip": false,
-      "single_word": false,
-      "special": true
     }
   },
   "additional_special_tokens": [
     "<fake_token_around_image>",
-    "<image>",
-    "<end_of_utterance>"
+    "<image>"
   ],
   "bos_token": "<s>",
   "clean_up_tokenization_spaces": false,
```
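The net effect of the commit is that `<end_of_utterance>` (id 32002) is no longer registered as a special token in the base model's tokenizer files. A small sanity check of that effect, run here against an inline fragment mirroring the post-commit `tokenizer_config.json` rather than the real file:

```python
import json

# Fragment mirroring the post-commit "additional_special_tokens" entry;
# this is a stand-in for the actual tokenizer_config.json on the Hub.
fragment = json.loads("""{
  "additional_special_tokens": [
    "<fake_token_around_image>",
    "<image>"
  ]
}""")

tokens = fragment["additional_special_tokens"]
assert "<end_of_utterance>" not in tokens
print(tokens)
```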