Part of the Quantized Olmo 3 collection: verified models, all compatible with vLLM for fast inference. Prefer the 3.1 models, as they are more recent.
This is allenai/Olmo-3.1-32B-Think quantized with LLM Compressor, using the recipe in the repository's "recipe.yaml" file. The model is compatible with vLLM (tested with v0.12.0 on an RTX Pro 6000).
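For loading the checkpoint with vLLM's offline Python API, something like the sketch below should work. This is a minimal example, not the card's official instructions; the model id is a placeholder to replace with this repository's id, and the sampling settings are illustrative.

```python
# Minimal sketch: run this quantized checkpoint with vLLM (tested version: v0.12.0).
# MODEL_ID below is a placeholder, not confirmed by the card; use this repo's id.
from vllm import LLM, SamplingParams

MODEL_ID = "path/to/this-quantized-Olmo-3.1-32B-Think"  # hypothetical placeholder

# vLLM detects the LLM Compressor (compressed-tensors) quantization from the
# model config, so no extra quantization flag is needed here.
llm = LLM(model=MODEL_ID)
params = SamplingParams(temperature=0.6, max_tokens=512)

outputs = llm.generate(["Explain quantization in one paragraph."], params)
print(outputs[0].outputs[0].text)
```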
For how the models perform (token efficiency, accuracy per domain, ...) and how to use them, see: Quantizing Olmo 3: Most Efficient and Accurate Formats.
Subscribe to The Kaitchup, or "buy me a Ko-fi".
Base model: allenai/Olmo-3-1125-32B