Instructions for using unsloth/grok-2 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use unsloth/grok-2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="unsloth/grok-2")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("unsloth/grok-2")
model = AutoModelForCausalLM.from_pretrained("unsloth/grok-2")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use unsloth/grok-2 with vLLM:
Install from pip and serve the model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "unsloth/grok-2"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "unsloth/grok-2",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
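Since the server exposes an OpenAI-compatible API, it can also be queried from Python. A minimal sketch, assuming the `openai` client is installed (`pip install openai`) and the server above is running on localhost:8000:

```python
from openai import OpenAI

# vLLM ignores the API key by default, so any placeholder works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="unsloth/grok-2",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```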
Use Docker
```sh
docker model run hf.co/unsloth/grok-2
```
- SGLang
How to use unsloth/grok-2 with SGLang:
Install from pip and serve the model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "unsloth/grok-2" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "unsloth/grok-2",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker images
```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "unsloth/grok-2" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "unsloth/grok-2",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
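As with vLLM, the SGLang server can be queried from Python through the same OpenAI-compatible API. A minimal sketch, here with streaming enabled, assuming the `openai` client and the server above running on localhost:30000:

```python
from openai import OpenAI

# SGLang does not require a real API key, so a placeholder is fine.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

# Stream the completion token by token instead of waiting for the full reply.
stream = client.chat.completions.create(
    model="unsloth/grok-2",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta
    if delta.content:
        print(delta.content, end="", flush=True)
```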
- Docker Model Runner
How to use unsloth/grok-2 with Docker Model Runner:
```sh
docker model run hf.co/unsloth/grok-2
```
Grok-2 Tokenizer
A 🤗-compatible version of the Grok-2 tokenizer (adapted from xai-org/grok-2).
This means it can be used with Hugging Face libraries including Transformers, Tokenizers, and Transformers.js.
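For instance, a minimal sketch loading it with the standalone `tokenizers` library, assuming the repository ships the usual `tokenizer.json` export:

```python
from tokenizers import Tokenizer

# Loads tokenizer.json straight from the Hub repository.
tokenizer = Tokenizer.from_pretrained("alvarobartt/grok-2-tokenizer")

encoding = tokenizer.encode("Hello, world!")
print(encoding.ids)     # token ids
print(encoding.tokens)  # string tokens
```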
Motivation
Grok 2.5 (a.k.a. xai-org/grok-2) was recently released on the 🤗 Hub with native SGLang support, but the checkpoints on the Hub don't ship with a Hugging Face-compatible tokenizer; instead, they include a tiktoken-based JSON export that SGLang reads and patches internally.
This repository contains the Hugging Face-compatible export, so users can easily interact and experiment with the Grok-2 tokenizer. It also lets SGLang use the tokenizer without manually pulling the repository from the Hub and mounting it to point at the tokenizer path directly, so Grok-2 can be deployed as:
```sh
python3 -m sglang.launch_server --model-path xai-org/grok-2 --tokenizer-path alvarobartt/grok-2-tokenizer --tp-size 8 --quantization fp8 --attention-backend triton
```
Rather than the previous two-step process:
```sh
hf download xai-org/grok-2 --local-dir /local/grok-2
python3 -m sglang.launch_server --model-path /local/grok-2 --tokenizer-path /local/grok-2/tokenizer.tok.json --tp-size 8 --quantization fp8 --attention-backend triton
```
Example
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("alvarobartt/grok-2-tokenizer")

assert tokenizer.encode("Human: What is Deep Learning?<|separator|>\n\n") == [
    35406,
    186,
    2171,
    458,
    17454,
    14803,
    191,
    1,
    417,
]
assert (
    tokenizer.apply_chat_template(
        [{"role": "user", "content": "What is the capital of France?"}], tokenize=False
    )
    == "Human: What is the capital of France?<|separator|>\n\n"
)
```
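Decoding works in the other direction. A small sketch, reusing the token ids from the assertion above; the expected output assumes a lossless round-trip, which that assertion suggests:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("alvarobartt/grok-2-tokenizer")

# Ids taken from the encode assertion above.
ids = [35406, 186, 2171, 458, 17454, 14803, 191, 1, 417]

# skip_special_tokens=False keeps <|separator|> visible in the output.
print(tokenizer.decode(ids, skip_special_tokens=False))
# Expected: "Human: What is Deep Learning?<|separator|>\n\n"
```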
This repository was inspired by earlier, similar work by Xenova in Xenova/grok-1-tokenizer.