LFM2.5-1.2B-JP

LFM2.5-1.2B-JP is a chat model specifically optimized for Japanese. While LFM2 already supported Japanese as one of eight languages, LFM2.5-JP pushes the state of the art in Japanese knowledge and instruction following at its scale. This model is ideal for developers building Japanese-language applications where cultural and linguistic nuance matters.

Find more information about LFM2.5 in our blog post.

🏃 Inference

LFM2.5 is supported by many inference frameworks. See the Inference documentation for the full list.

| Name | Description | Docs | Notebook |
|---|---|---|---|
| Transformers | Simple inference with direct access to model internals. | Link | Colab link |
| vLLM | High-throughput production deployments on GPUs. | Link | Colab link |
| llama.cpp | Cross-platform inference with CPU offloading. | Link | Colab link |

Here's a quick start example with transformers:

from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Load the model and tokenizer
model_id = "LiquidAI/LFM2.5-1.2B-JP"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    dtype="bfloat16",
    # attn_implementation="flash_attention_2",  # uncomment on a compatible GPU
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "What is C. elegans?"

# Apply the chat template and tokenize the prompt
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
    tokenize=True,
).to(model.device)

# Generate a streamed response
output = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.3,
    min_p=0.15,
    repetition_penalty=1.05,
    max_new_tokens=512,
    streamer=streamer,
)
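
For server-style or batch workloads, vLLM (listed in the table above) can also serve the model. Below is a minimal offline-inference sketch that reuses the same sampling settings; the exact API surface may vary with your vLLM version, and the Japanese prompt is just an illustrative input.

from vllm import LLM, SamplingParams

# Assumes your installed vLLM version supports the LFM2.5 architecture.
llm = LLM(model="LiquidAI/LFM2.5-1.2B-JP")
params = SamplingParams(
    temperature=0.3,
    min_p=0.15,
    repetition_penalty=1.05,
    max_tokens=512,
)

# llm.chat applies the model's chat template automatically
messages = [{"role": "user", "content": "線虫(C. elegans)とは何ですか?"}]
outputs = llm.chat(messages, params)
print(outputs[0].outputs[0].text)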

🔧 Fine-Tuning

We recommend fine-tuning LFM2.5 for your specific use case to achieve the best results.

| Name | Description | Docs | Notebook |
|---|---|---|---|
| SFT (Unsloth) | Supervised Fine-Tuning with LoRA using Unsloth. | Link | Colab link |
| SFT (TRL) | Supervised Fine-Tuning with LoRA using TRL. | Link | Colab link |
| DPO (TRL) | Direct Preference Optimization with LoRA using TRL. | Link | Colab link |
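
To give a feel for the workflow, here is a minimal LoRA SFT sketch using TRL. The dataset name is a placeholder, and the hyperparameters are only starting points; see the linked notebooks for complete recipes.

from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# "my-org/ja-chat-data" is a placeholder; substitute your own dataset of
# chat-formatted Japanese examples.
dataset = load_dataset("my-org/ja-chat-data", split="train")

trainer = SFTTrainer(
    model="LiquidAI/LFM2.5-1.2B-JP",
    train_dataset=dataset,
    args=SFTConfig(output_dir="lfm2.5-1.2b-jp-sft"),
    # LoRA keeps the number of trainable parameters small
    peft_config=LoraConfig(r=16, lora_alpha=32, target_modules="all-linear"),
)
trainer.train()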

📊 Performance

| Model | JMMLU | M-IFEval (ja) | GSM8K (ja) |
|---|---|---|---|
| LFM2.5-1.2B-JP | 50.7 | 58.1 | 56.0 |
| LFM2.5-1.2B-Instruct | 47.7 | 41.8 | 46.8 |
| Qwen3-1.7B (Instruct mode) | 47.7 | 40.3 | 46.0 |
| Llama 3.2 1B Instruct | 34.0 | 24.1 | 25.2 |
| TinySwallow-1.5B-Instruct | 48.0 | 36.5 | 47.2 |
| Gemma-2-Llama-Swallow-2b-it-v0.1 | 48.1 | 33.4 | 34.4 |
| Gemma-3-1b-it | 34.5 | 26.3 | 33.6 |
| Granite-4.0-h-1b | 42.2 | 39.3 | 42.8 |
| Sarashina2.2-1b-instruct-v0.1 | 40.2 | 21.9 | 44.4 |

Evaluation Notes

  • All results are zero-shot evaluations using greedy decoding.
  • M-IFEval (ja) scores correspond to the loose evaluation setting.
  • JMMLU was evaluated using a prompt format similar in style to the ArtificialAnalysis methodology, with corresponding parsing logic. The Japanese prompt template is shown below:
# The Japanese instruction translates roughly to: "Answer the given
# multiple-choice question. On the last line of your answer, output
# 「答え:{valid_options}」 (e.g. 「答え:X」)."
PROMPT_TEMPLATE = """与えられた選択問題に答えてください。回答の最後の行に「答え:{valid_options}」のように出力してください(例:「答え:X」)。

{question}

{options}"""

Contact

For enterprise solutions and edge deployment, contact [email protected].

Citation

@article{liquidai2025lfm2,
  title={LFM2 Technical Report},
  author={Liquid AI},
  journal={arXiv preprint arXiv:2511.23404},
  year={2025}
}