# Cygnis-Alpha-2 8B

*The Sovereign Reasoning Engine, tailored for stability and transparency.*
## Model Highlights
- ✓ Native Chain-of-Thought (CoT): The model uses a dedicated `<|im_thought|>` token to structure its reasoning process internally.
- ✓ Bilingual Excellence: Fully fine-tuned for both French and English.
- ✓ Production-Ready Stability: Resolves critical architecture mismatches for Ollama and Llama.cpp.
## Architecture & Training
Cygnis-Alpha-2 v0.3 is a fine-tuned version of the Llama 3.1 8B model, trained with Unsloth on two key datasets:
- Reasoning: `Cygnis-Alpha2-Instruct-Mix` for multi-step logic.
- Identity: `Cygnis-Identity-SFT` for sovereign persona and ethical alignment.
## Quickstart & Deployment

### Python with transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Simonc-44/Cygnis-Alpha-2-8B-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# "Explain the theory of relativity to me in 3 key points."
messages = [
    {"role": "user", "content": "Explique-moi la théorie de la relativité en 3 points clés."}
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# do_sample=True is required for temperature to take effect
outputs = model.generate(input_ids, max_new_tokens=1024, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
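If the `<|im_thought|>` reasoning markers survive decoding (for example, when `skip_special_tokens` is off or the markers are plain text rather than registered special tokens), you may want to strip the internal monologue before showing the answer. A minimal sketch; the closing marker name `<|/im_thought|>` is an assumption, so check the tokenizer's actual special tokens:

```python
import re

def strip_thoughts(text: str,
                   open_tok: str = "<|im_thought|>",
                   close_tok: str = "<|/im_thought|>") -> str:
    """Remove reasoning segments delimited by the thought markers.

    NOTE: close_tok is a hypothetical name -- inspect
    tokenizer.special_tokens_map for the marker Cygnis-Alpha-2 actually uses.
    """
    pattern = re.escape(open_tok) + r".*?" + re.escape(close_tok)
    return re.sub(pattern, "", text, flags=re.DOTALL).strip()

raw = "<|im_thought|>Recall E = mc^2 ...<|/im_thought|>Here are the 3 key points."
print(strip_thoughts(raw))  # -> "Here are the 3 key points."
```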
### Deployment with Ollama
1. Create a `Modelfile`:

```
FROM ./cygnis-alpha-2-8b-v0.3.Q4_K_M.gguf

TEMPLATE """[SYSTEM_PROMPT]{{ .System }}[/SYSTEM_PROMPT] [INST] {{ .Prompt }} [/INST] [ASST] """

PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER stop "[/INST]"
PARAMETER stop "[ASST]"
PARAMETER stop ""

SYSTEM """Vous êtes Cygnis-Alpha-2-8B-v0.3, un Large Language Model (LLM) de pointe créé par Simonc-44. Vous alimentez un assistant IA appelé CygnisAI. Votre mission est de fournir des réponses d'une précision chirurgicale en utilisant une structure logique rigoureuse (Chain-of-Thought)."""
```
2. Run:

```shell
ollama create cygnis-alpha-v0.3 -f Modelfile
ollama run cygnis-alpha-v0.3 "Hello"
```
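Once created, the model can also be queried programmatically through Ollama's local REST API (`POST /api/generate` on port 11434). A minimal sketch using only the standard library; the endpoint and field names follow Ollama's documented API, and the helper names are ours:

```python
import json
from urllib import request

def build_generate_payload(prompt: str, model: str = "cygnis-alpha-v0.3") -> bytes:
    """Serialize a non-streaming request body for Ollama's /api/generate."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")

def query_ollama(prompt: str, host: str = "http://localhost:11434") -> str:
    """Send the prompt to a locally running Ollama server and return the reply text."""
    req = request.Request(
        host + "/api/generate",
        data=build_generate_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# query_ollama("Hello")  # requires `ollama serve` to be running
```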
## Fine-Tuning

You can fine-tune Cygnis-Alpha-2 8B on your own sovereign data using the free Unsloth notebook on Google Colab (the free Tesla T4 tier is supported).
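Before fine-tuning, each instruction/response pair must be rendered into the same chat template the model was trained on. A minimal sketch that mirrors the `TEMPLATE` block from the Modelfile above; the helper name is ours, not part of any library:

```python
def format_sft_example(system: str, prompt: str, response: str) -> str:
    """Render one training example in the Cygnis chat template
    (mirrors the TEMPLATE block of the Ollama Modelfile)."""
    return (
        f"[SYSTEM_PROMPT]{system}[/SYSTEM_PROMPT] "
        f"[INST] {prompt} [/INST] [ASST] {response}"
    )

example = format_sft_example(
    "You are CygnisAI.",
    "What is 2 + 2?",
    "4",
)
print(example)
```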
## Citation
```bibtex
@misc{cygnis_alpha_2_v0.3,
  author    = {Simonc-44},
  title     = {Cygnis-Alpha-2 8B v0.3: Sovereign Reasoning Engine},
  year      = {2024},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/Simonc-44/Cygnis-Alpha-2-8B-v0.3}
}
```
## Evaluation Results

| Benchmark | Metric | Score (self-reported) |
|---|---|---|
| GPQA (Diamond) | Accuracy | 38.4 |
| MMLU-Pro | Accuracy | 48.9 |
| GSM8K | Accuracy | 84.5 |
| IFEval | Prompt Strict Accuracy | 79.2 |