```yaml
architecture: MistralForCausalLM
merge_method: arcee_multifusion
base_model: B:\24B\models--TheDrummer--Precog-24B-v1
models:
  - model: B:\24B\BeaverAI_Fallen-Mistral-Small-3.1-24B-v1e_textonly
  - model: B:\24B\models--Naphula--Slimaki-24B-v1
  - model: B:\24B\models--Casual-Autopsy--Maginum-Cydoms-24B
  - model: B:\24B\models--sophosympatheia--Magistry-24B-v1.0
parameters:
  # tukey_fence: 1.5 is the textbook value (~12.5% of parameters flagged
  # as salient). 0.75 widens the fence to raise the "knowledge injection"
  # from the donors to ~25% (sketched below the config).
  tukey_fence: 0.75
  
  # SalienceMode options:
  #   combined - add up the salience from all donors
  #   divided  - divide the total salience by the number of donors
  #   averaged - average the importance scores before thresholding
  # "averaged" gives more share of voice to models with larger task vectors
  # (see the salience sketch below)
  salience_mode: "averaged"
  
  # normalize: true keeps the merged weights from exploding ("magnitude
  # inflation") when several donors have salient changes at the same spot;
  # false works best with "combined" mode (see the merge sketch below)
  normalize: true

tokenizer:
  source: B:\24B\models--TheDrummer--Precog-24B-v1
# chat_template: auto (Removed to use Precog's native template)
dtype: float32
out_dtype: bfloat16
```
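
The card ships only the config, so as a rough illustration of the salience machinery the comments describe, here is a minimal NumPy sketch: per-parameter salience taken as the task-vector magnitude |donor − base|, aggregated across donors by `salience_mode`, then gated with an upper Tukey fence. The function names and the exact salience definition are assumptions for illustration, not the `arcee_multifusion` implementation.

```python
import numpy as np

def donor_salience(base: np.ndarray, donor: np.ndarray) -> np.ndarray:
    # Assumed salience signal: per-parameter magnitude of the task vector.
    return np.abs(donor - base)

def aggregate(scores: list[np.ndarray], mode: str) -> np.ndarray:
    stacked = np.stack(scores)            # (num_donors, num_params)
    if mode == "combined":                # add up salience from all donors
        return stacked.sum(axis=0)
    if mode == "divided":                 # total salience / number of donors
        return stacked.sum(axis=0) / len(scores)
    if mode == "averaged":                # average importance before thresholding
        return stacked.mean(axis=0)
    raise ValueError(f"unknown salience_mode: {mode}")

def tukey_mask(salience: np.ndarray, fence: float) -> np.ndarray:
    # Upper Tukey fence: Q3 + fence * IQR. Lowering the fence from the
    # textbook 1.5 to 0.75 admits more parameters as salient (the card
    # quotes ~12.5% -> ~25% for these task vectors; the exact fraction
    # depends on the salience distribution).
    q1, q3 = np.percentile(salience, [25, 75])
    return salience > q3 + fence * (q3 - q1)

rng = np.random.default_rng(0)
base = rng.standard_normal(100_000).astype(np.float32)
donors = [base + 0.1 * rng.standard_normal(100_000).astype(np.float32)
          for _ in range(4)]

scores = [donor_salience(base, d) for d in donors]
mask = tukey_mask(aggregate(scores, "averaged"), fence=0.75)
print(f"salient fraction: {mask.mean():.1%}")
```

Note that in this simplified form "divided" and "averaged" collapse to the same numbers; whatever distinguishes them in the real method (e.g. where the importance scores are computed relative to thresholding) isn't visible from the card alone.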
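
A companion sketch of what `normalize: true` guards against: when several donors pass the salience gate at the same position, summing their deltas inflates the merged weight. Dividing overlapping positions by the contributor count, as below, is an assumption about the mechanism, not the actual code.

```python
import numpy as np

def masked_merge(base, donor_deltas, donor_masks, normalize=True):
    """Fold salient donor deltas into the base weights.

    donor_deltas / donor_masks are lists of arrays shaped like `base`.
    With normalize=True, positions claimed by several donors are divided
    by the contributor count so magnitudes don't inflate; normalize=False
    lets overlaps add up, which the card pairs with "combined" mode.
    """
    deltas = np.stack(donor_deltas)
    masks = np.stack(donor_masks).astype(deltas.dtype)
    total = (deltas * masks).sum(axis=0)
    if normalize:
        contributors = np.maximum(masks.sum(axis=0), 1.0)
        total = total / contributors
    return base + total

base = np.zeros(4)
deltas = [np.array([1.0, 1.0, 0.0, 0.5]), np.array([1.0, 0.0, 2.0, 0.5])]
masks = [np.array([1, 1, 0, 1]), np.array([1, 0, 1, 1])]
print(masked_merge(base, deltas, masks, normalize=True))   # [1.  1.  2.  0.5]
print(masked_merge(base, deltas, masks, normalize=False))  # [2.  1.  2.  1. ]
```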
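
The config follows mergekit's schema (merge_method, base_model, models, tokenizer.source, dtype/out_dtype), so running it should look like the standard mergekit Python entry point, assuming your mergekit build registers the `arcee_multifusion` method; the config filename and output path below are placeholders.

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# "multifusion.yaml" is a placeholder name for the config above.
with open("multifusion.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./merged-24b",                      # output directory of your choice
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # GPU accelerates the per-tensor math
        copy_tokenizer=True,             # honors the tokenizer.source field
        lazy_unpickle=True,              # lower peak RAM while loading shards
    ),
)
```

The `mergekit-yaml` CLI (`mergekit-yaml multifusion.yaml ./merged-24b --cuda`) does the same thing. Note that dtype: float32 keeps the merge arithmetic in full precision while out_dtype: bfloat16 writes the final weights at half the size.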