CAT-Translate-0.8b MLX q4
This repository provides 4-bit (q4) MLX-quantized weights converted from the original model.
Original model: cyberagent/CAT-Translate-0.8b
Quantization: MLX q4 (4-bit)
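A minimal usage sketch with the mlx-lm library (requires Apple silicon and `pip install mlx-lm`). The prompt below and the generation parameters are illustrative assumptions, not the model's documented format; check the original model card for the expected translation prompt or chat template.

```python
from mlx_lm import load, generate

# Load the 4-bit quantized weights from this repository.
model, tokenizer = load("hotchpotch/CAT-Translate-0.8b-mlx-q4")

# Illustrative translation prompt -- the exact prompt format expected by
# CAT-Translate is an assumption here; consult the original model card.
prompt = "Translate the following Japanese text into English: こんにちは、世界。"
if tokenizer.chat_template is not None:
    # Use the model's own chat template when one is bundled with the tokenizer.
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
    )

text = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(text)
```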
Model size: 0.1B params
Tensor types: BF16, U32
Model tree for hotchpotch/CAT-Translate-0.8b-mlx-q4
Base model: sbintuitions/sarashina2.2-0.5b
Finetuned: cyberagent/CAT-Translate-0.8b