Cygnis-Alpha-2 8B

v0.3 Stable

The Sovereign Reasoning Engine tailored for stability and transparency.

Model Highlights

  • ✓ Native Chain-of-Thought (CoT): The model uses a dedicated <|im_thought|> token to structure its reasoning process internally.
  • ✓ Bilingual Excellence: Fully fine-tuned for both French and English.
  • ✓ Production-Ready Stability: Resolves critical architecture mismatches for Ollama and llama.cpp.
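
Because the reasoning is wrapped in a special token, client code usually wants to separate the internal thought from the user-facing answer. A minimal sketch in Python — note that this card only documents the `<|im_thought|>` opening token; the closing token used below is a hypothetical placeholder, so check the model's tokenizer config for the real delimiter pair:

```python
def split_thought(text: str,
                  open_tok: str = "<|im_thought|>",
                  close_tok: str = "<|im_end|>") -> tuple[str, str]:
    """Return (thought, answer); thought is empty if no marker is present.

    close_tok is an assumption -- the card does not document how the
    reasoning segment is terminated.
    """
    if open_tok not in text:
        return "", text.strip()
    before, rest = text.split(open_tok, 1)
    thought, _, after = rest.partition(close_tok)
    return thought.strip(), (before + after).strip()

# Example on a synthetic completion:
thought, answer = split_thought(
    "<|im_thought|>2 + 2 = 4, trivially.<|im_end|>The answer is 4."
)
```

This keeps the chain-of-thought available for logging or debugging while showing only the final answer to end users.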

Architecture & Training

Cygnis-Alpha-2 v0.3 is a fine-tuned version of the Llama 3.1 8B model, trained with Unsloth on two key datasets:

  • Reasoning: Cygnis-Alpha2-Instruct-Mix for multi-step logic.
  • Identity: Cygnis-Identity-SFT for sovereign persona and ethical alignment.

Quickstart & Deployment

Python with transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Simonc-44/Cygnis-Alpha-2-8B-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explique-moi la théorie de la relativité en 3 points clés."}
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# do_sample=True is required for temperature to take effect
outputs = model.generate(input_ids, max_new_tokens=1024, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Deployment with Ollama

1. Create a Modelfile:

FROM ./cygnis-alpha-2-8b-v0.3.Q4_K_M.gguf

TEMPLATE """[SYSTEM_PROMPT]{{ .System }}[/SYSTEM_PROMPT] [INST] {{ .Prompt }} [/INST] [ASST] """

PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER stop "[/INST]"
PARAMETER stop "[ASST]"
PARAMETER stop ""

SYSTEM """Vous êtes Cygnis-Alpha-2-8B-v0.3, un Large Language Model (LLM) de pointe créé par Simonc-44. Vous alimentez un assistant IA appelé CygnisAI. Votre mission est de fournir des réponses d'une précision chirurgicale en utilisant une structure logique rigoureuse (Chain-of-Thought)."""

2. Run:

ollama create cygnis-alpha-v0.3 -f Modelfile
ollama run cygnis-alpha-v0.3 "Hello"
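
Once created, the model can also be queried programmatically through Ollama's local REST API. A minimal standard-library sketch (assumes an Ollama server listening on the default localhost:11434, and the model name from the `ollama create` command above):

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str,
                           host: str = "http://localhost:11434") -> urllib.request.Request:
    """Build a non-streaming request for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("cygnis-alpha-v0.3", "Hello")
# With a running Ollama server, send it and read the completion:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

Uncomment the last block once the server is running; the JSON response carries the completion in its "response" field.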

Fine-Tuning

You can fine-tune Cygnis-Alpha-2 8B on your own sovereign data using the free Unsloth notebook on Google Colab (free-tier Tesla T4 GPUs are supported).

Official Notebook: Cygnis-Alpha-2 8B Finetuning

Citation

@misc{cygnis_alpha_2_v0.3,
  author = {Simonc-44},
  title = {Cygnis-Alpha-2 8B v0.3: Sovereign Reasoning Engine},
  year = {2024},
  publisher = {Hugging Face},
  url = {https://huggingface.co/Simonc-44/Cygnis-Alpha-2-8B-v0.3}
}
Developed by Simonc-44