Tiny dummy models
Collection
Randomly initialized tiny models for debugging/testing purposes • 176 items
How to use yujiepan/grok-1-tiny-random with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="yujiepan/grok-1-tiny-random", trust_remote_code=True)

# Load the model directly
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("yujiepan/grok-1-tiny-random", trust_remote_code=True, dtype="auto")

How to use yujiepan/grok-1-tiny-random with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "yujiepan/grok-1-tiny-random"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "yujiepan/grok-1-tiny-random",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
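The same request can be made from Python with only the standard library. Below is a minimal sketch that mirrors the curl call above; the actual send is left commented out so the snippet runs without a live server, and the localhost URL assumes the vLLM server started above:

```python
import json
from urllib.request import Request, urlopen

# Build the same completions payload as the curl example above.
payload = {
    "model": "yujiepan/grok-1-tiny-random",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}
body = json.dumps(payload).encode("utf-8")
req = Request(
    "http://localhost:8000/v1/completions",  # assumes the server from above
    data=body,
    headers={"Content-Type": "application/json"},
)
print(body.decode("utf-8"))
# Uncomment once the server is running:
# with urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```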
How to use yujiepan/grok-1-tiny-random with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "yujiepan/grok-1-tiny-random" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "yujiepan/grok-1-tiny-random",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

# Alternatively, run the SGLang server in Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "yujiepan/grok-1-tiny-random" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "yujiepan/grok-1-tiny-random",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

How to use yujiepan/grok-1-tiny-random with Docker Model Runner:
docker model run hf.co/yujiepan/grok-1-tiny-random
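Whichever server you use, the OpenAI-compatible /v1/completions endpoint returns JSON of the same shape, so the generated text is extracted the same way. A sketch of parsing such a response; the response body here is illustrative, not real model output:

```python
import json

# An illustrative (not real) response body in the OpenAI completions shape.
raw = '''
{
  "id": "cmpl-123",
  "object": "text_completion",
  "model": "yujiepan/grok-1-tiny-random",
  "choices": [
    {"index": 0, "text": " there was a tiny model.", "finish_reason": "length"}
  ]
}
'''
response = json.loads(raw)
# The generated continuation lives in choices[0].text
text = response["choices"][0]["text"]
print(text)  # -> " there was a tiny model."
```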
This model is randomly initialized, using the config from hpcai-tech/grok-1 but with a much smaller size. Note that the model weights are in float16.
Code:
import os

import torch
import transformers
from huggingface_hub import create_repo, upload_folder

source_model_id = 'hpcai-tech/grok-1'
tiny_random_name = 'grok-1-tiny-random'
save_path = f'/tmp/yujiepan/{tiny_random_name}'
repo_id = f'yujiepan/{tiny_random_name}'

# Shrink the original config down to a tiny footprint
config = transformers.AutoConfig.from_pretrained(
    source_model_id, trust_remote_code=True)
config.hidden_size = 4
config.intermediate_size = 8
config.num_attention_heads = 2
config.num_key_value_heads = 1
config.num_hidden_layers = 2
config.torch_dtype = torch.float16

# Randomly initialize a model from the shrunken config (no pretrained weights)
model = transformers.AutoModelForCausalLM.from_config(
    config, trust_remote_code=True, torch_dtype=torch.float16)
model = model.half()

# Reuse the original tokenizer unchanged
tokenizer = transformers.AutoTokenizer.from_pretrained(
    source_model_id, trust_remote_code=True)

# Smoke-test generation before uploading
result = transformers.pipelines.pipeline(
    'text-generation',
    model=model, tokenizer=tokenizer,
    device=0,
    max_new_tokens=16,
)('Hello')
print(result)
# model = model.cuda()
# response, history = model.chat(tokenizer, "Hi", history=[], max_length=32)
# print(response)

model.save_pretrained(save_path)
tokenizer.save_pretrained(save_path)
os.system(f'ls -alh {save_path}')

create_repo(repo_id, exist_ok=True)
upload_folder(repo_id=repo_id, folder_path=save_path)
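The shrunken config has to stay internally consistent: the hidden size must split evenly across attention heads, and with grouped-query attention the query heads must map evenly onto the key/value heads. A plain-Python sanity check of the values used in the script above, with no transformers dependency:

```python
# The tiny-config values set in the script above.
hidden_size = 4
intermediate_size = 8
num_attention_heads = 2
num_key_value_heads = 1
num_hidden_layers = 2

# Each attention head must get an integer slice of the hidden size.
assert hidden_size % num_attention_heads == 0
head_dim = hidden_size // num_attention_heads
print("head_dim:", head_dim)  # -> head_dim: 2

# Grouped-query attention: query heads must divide evenly among KV heads.
assert num_attention_heads % num_key_value_heads == 0
print("queries per KV head:", num_attention_heads // num_key_value_heads)
```

The same divisibility rules apply when shrinking any decoder-only config, which is why these five fields are usually the only ones that need touching.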