Dataset: mvasiliniuc/iva-swift-codeint-clean-train
How to use mvasiliniuc/iva-codeint-swift-small with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="mvasiliniuc/iva-codeint-swift-small")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("mvasiliniuc/iva-codeint-swift-small")
model = AutoModelForCausalLM.from_pretrained("mvasiliniuc/iva-codeint-swift-small")

How to use mvasiliniuc/iva-codeint-swift-small with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "mvasiliniuc/iva-codeint-swift-small"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "mvasiliniuc/iva-codeint-swift-small",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
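The same request can be made from Python. This is a minimal sketch, not part of the original card: it builds the JSON body for the OpenAI-compatible `/v1/completions` endpoint (the `completion_payload` helper name and the Swift prompt are assumptions) and shows, commented out, how to POST it to a running vLLM server.

```python
import json

def completion_payload(model: str, prompt: str,
                       max_tokens: int = 512, temperature: float = 0.5) -> str:
    """Serialize a /v1/completions request body matching the curl call above."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    })

payload = completion_payload("mvasiliniuc/iva-codeint-swift-small",
                             "func triggerNSNotification")

# To send it, a vLLM server must already be running on localhost:8000:
# import requests
# r = requests.post("http://localhost:8000/v1/completions",
#                   headers={"Content-Type": "application/json"}, data=payload)
# print(r.json()["choices"][0]["text"])
print(payload)
```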
How to use mvasiliniuc/iva-codeint-swift-small with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "mvasiliniuc/iva-codeint-swift-small" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "mvasiliniuc/iva-codeint-swift-small",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

Alternatively, start the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "mvasiliniuc/iva-codeint-swift-small" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "mvasiliniuc/iva-codeint-swift-small",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

How to use mvasiliniuc/iva-codeint-swift-small with Docker Model Runner:
docker model run hf.co/mvasiliniuc/iva-codeint-swift-small
iva-codeint-swift-small is a GPT-2 model (small version, 239.4M parameters) trained from scratch on the text-to-code task, tailored to the Swift language as used in native mobile (iOS) development.
from transformers import pipeline

pipe = pipeline("text-generation", model="mvasiliniuc/iva-codeint-swift-small")
outputs = pipe("func triggerNSNotification")
print(outputs[0]["generated_text"])
import requests
import pprint

API_URL = "https://api-inference.huggingface.co/models/mvasiliniuc/iva-codeint-swift-small"
headers = {"Authorization": "Bearer <key>"}
def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()
output = query({
    "inputs": """
/*
A function that gets the current device operating system.
*/
"""
})
pprint.pprint(output, compact=True)
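On success the Inference API returns a list of generations of the form `[{"generated_text": ...}]`; while the model is loading it instead returns a dict with an `"error"` key. A small helper (the `extract_generated_text` name and the sample response are illustrative, not from the card) to unwrap that shape:

```python
def extract_generated_text(output):
    """Return the generated code string, or raise if the API reported an error."""
    if isinstance(output, dict) and "error" in output:
        raise RuntimeError(output["error"])
    return output[0]["generated_text"]

# Example with a canned response of the documented shape:
sample = [{"generated_text": "func getOSVersion() -> String { ... }"}]
print(extract_generated_text(sample))
```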
| Config | Value |
|---|---|
| seq length | 1024 |
| weight decay | 0.1 |
| learning rate | 0.0005 |
| max eval steps | -1 |
| shuffle buffer | 10000 |
| max train steps | 150000 |
| mixed precision | fp16 |
| num warmup steps | 2000 |
| train batch size | 5 |
| valid batch size | 5 |
| lr scheduler type | cosine |
| save checkpoint steps | 15000 |
| gradient checkpointing | false |
| gradient accumulation steps | 1 |
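A quick sanity check of the table above (a sketch, not from the card): with these settings the effective batch is train batch size x gradient accumulation steps sequences, so each optimizer step consumes that many sequences of seq length tokens, and max train steps fixes the total token budget.

```python
# Values copied from the training-configuration table above.
config = {
    "seq_length": 1024,
    "train_batch_size": 5,
    "gradient_accumulation_steps": 1,
    "max_train_steps": 150_000,
}

# Tokens consumed per optimizer step: batch * accumulation * sequence length.
tokens_per_step = (config["train_batch_size"]
                   * config["gradient_accumulation_steps"]
                   * config["seq_length"])

# Total tokens seen over the whole run.
total_tokens = tokens_per_step * config["max_train_steps"]

print(tokens_per_step)  # 5 * 1 * 1024 = 5120
print(total_tokens)     # 768,000,000 tokens over 150k steps
```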
Resources used for research: