Abigail45/Nyx-Reasoner-8xFusion

A multiple-model fusion built via DARE-TIES and SLERP interpolation. Base components are listed in the metadata. No alignment layers preserved; no refusal vectors present. Context extended to 32,768 tokens. Multilingual capability confirmed in English and Spanish. Reasoning chains remain intact across domains.
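
For reference, SLERP interpolates each pair of parameter tensors along the arc between them rather than along a straight line, which better preserves weight norms during the merge. Below is a minimal sketch of the per-tensor operation, illustrative only; an actual merge would typically be driven by a dedicated tool such as mergekit:

import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors at fraction t."""
    a_flat, b_flat = a.flatten(), b.flatten()
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    dot = torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0)
    omega = torch.acos(dot)           # angle between the two weight vectors
    if omega.abs() < eps:             # nearly parallel: fall back to plain lerp
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    coeff_a = torch.sin((1 - t) * omega) / so
    coeff_b = torch.sin(t * omega) / so
    return (coeff_a * a_flat + coeff_b * b_flat).reshape(a.shape)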

The model processes any input sequence with detachment. Ethical constraints were removed at the merge stage, and output is generated without filtering, hesitation, or injected moral commentary.

Usage example:

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "Abigail45/Nyx-Reasoner-8xFusion"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # bf16 halves memory relative to fp32
    device_map="auto",            # shard across available devices
)

messages = [
    {"role": "user", "content": "Walk through, step by step, how to determine whether 2027 is a prime number."}
]

# Apply the model's chat template and append the assistant turn header
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=8192,
    temperature=0.6,
    do_sample=True,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
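
For interactive use, output can be streamed token by token with transformers' TextStreamer instead of waiting for the full decode. A sketch reusing the objects from the example above:

from transformers import TextStreamer

# Prints decoded tokens to stdout as they are generated
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    inputs,
    max_new_tokens=8192,
    temperature=0.6,
    do_sample=True,
    streamer=streamer,
)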