# LFM2.5-1.2B-JP-8bit

MLX export of LFM2.5-1.2B-JP for Apple Silicon inference.

LFM2.5-JP is a Japanese language model based on the LFM2.5 hybrid architecture, optimized for Japanese text generation and completion tasks.

## Model Details

| Property | Value |
|---|---|
| Parameters | 1.2B |
| Precision | 8-bit |
| Group Size | 64 |
| Context Length | 128K |

## Recommended Sampling Parameters

| Parameter | Value |
|---|---|
| `temperature` | 0.3 |
| `min_p` | 0.15 |
| `repetition_penalty` | 1.05 |
| `max_tokens` | 512 |
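For intuition, `min_p` sampling keeps only tokens whose probability is at least `min_p` times the top token's probability, then renormalizes. A minimal NumPy sketch of the idea (not the mlx-lm implementation, which operates on logits inside the sampler):

```python
import numpy as np

def min_p_filter(logits: np.ndarray, min_p: float = 0.15) -> np.ndarray:
    """Illustrative min-p filtering: drop tokens whose probability is
    below min_p * (probability of the most likely token)."""
    probs = np.exp(logits - logits.max())  # softmax, numerically stable
    probs /= probs.sum()
    threshold = min_p * probs.max()
    filtered = np.where(probs >= threshold, probs, 0.0)
    return filtered / filtered.sum()  # renormalize survivors

print(min_p_filter(np.array([4.0, 3.0, 1.0, -2.0])))
```

With the recommended `min_p=0.15`, low-probability tail tokens are zeroed out while the top few candidates keep their relative weights.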

## Use with mlx

```shell
pip install mlx-lm
```

```python
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler, make_logits_processors

model, tokenizer = load("LiquidAI/LFM2.5-1.2B-JP-8bit")

prompt = "東京は日本の"  # "Tokyo is Japan's ..."

sampler = make_sampler(temp=0.3, min_p=0.15)
logits_processors = make_logits_processors(repetition_penalty=1.05)

response = generate(
    model,
    tokenizer,
    prompt=prompt,
    max_tokens=512,
    sampler=sampler,
    logits_processors=logits_processors,
    verbose=True,
)
```
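The `repetition_penalty=1.05` passed above discourages the model from re-emitting tokens it has already generated. A common convention (the CTRL-style penalty; assumed here for illustration, not taken from the mlx-lm source) divides positive logits of seen tokens by the penalty and multiplies negative ones:

```python
import numpy as np

def apply_repetition_penalty(logits: np.ndarray, generated_ids, penalty: float = 1.05) -> np.ndarray:
    """Illustrative CTRL-style repetition penalty: make every token that
    has already been generated slightly less likely to be picked again."""
    out = logits.copy()
    for tok in set(generated_ids):
        # Shrink positive logits toward zero; push negative logits lower.
        out[tok] = out[tok] / penalty if out[tok] > 0 else out[tok] * penalty
    return out

logits = np.array([2.0, -1.0, 0.5])
print(apply_repetition_penalty(logits, generated_ids=[0, 1]))
```

A mild value like 1.05 nudges the distribution away from loops without noticeably distorting fluent Japanese output.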

## License

This model is released under the LFM 1.0 License.
