⚠️ Warning: This model can produce narratives and RP that contain violent and graphic erotic content. Adjust your system prompt accordingly, and use the Mistral Tekken chat template.

🐌 Ślimaki-24B-v1.2

This merge has zero refusals (confirmed); no ablation was needed.

This is a merge of pre-trained language models created using mergekit.

Ślimaki v1.2 should be similar to v1 but more creative, thanks to an additional "spice injection".

Merge Details

Merge Method

This model was merged using the DELLA (della) merge method, with anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only as the base model.

Note: This merge was heavily inspired by Maginum Cydoms

Configuration

architecture: MistralForCausalLM
models:
  - model: B:\24B\!models--anthracite-core--Mistral-Small-3.2-24B-Instruct-2506-Text-Only
  - model: B:\24B\!models--TheDrummer--Cydonia-24B-v4.3
    parameters:
      density: 0.75
      weight: 0.5
      epsilon: 0.25
  - model: B:\24B\!models--ReadyArt--4.2.0-Broken-Tutu-24b
    parameters:
      density: 0.75
      weight: 0.25
      epsilon: 0.25
  - model: B:\24B\PrivateMerge29 # This merge is no longer available on HF
    parameters:
      density: 0.75
      weight: 0.25
      epsilon: 0.25
  - model: B:\24B\!models--zerofata--MS3.2-PaintedFantasy-v2-24B
    parameters:
      density: 0.75
      weight: 0.5
      epsilon: 0.25   
  - model: B:\24B\!models--TheDrummer--Magidonia-24B-v4.3
    parameters:
      density: 0.75
      weight: 0.5
      epsilon: 0.25
  - model: B:\24B\!models--TheDrummer--Precog-24B-v1
    parameters:
      density: 0.75
      weight: 0.5
      epsilon: 0.25
  - model: B:\24B\!models--zerofata--MS3.2-PaintedFantasy-v3-24B
    parameters:
      density: 0.75
      weight: 0.5
      epsilon: 0.25
## Merge Settings
## --copy-tokenizer --allow-crimes --out-shard-size 5B --trust-remote-code --lazy-unpickle --random-seed 420 --cuda
merge_method: della
base_model: B:\24B\!models--anthracite-core--Mistral-Small-3.2-24B-Instruct-2506-Text-Only
parameters:
  lambda: 1.0
  normalize: false
  int8_mask: false
  rescale: true
dtype: float32
out_dtype: bfloat16
tokenizer:
  source: union
chat_template: auto
name: 🐌 Ślimaki-24B-v1.2
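
Assuming the configuration above is saved to a file (e.g. slimaki-v1.2.yaml, a placeholder name) and mergekit is installed, it can be run with mergekit's mergekit-yaml entry point together with the flags listed in the Merge Settings comment; the output directory is likewise a placeholder:

mergekit-yaml slimaki-v1.2.yaml ./Slimaki-24B-v1.2 --copy-tokenizer --allow-crimes --out-shard-size 5B --trust-remote-code --lazy-unpickle --random-seed 420 --cuda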

Note: The only custom change needed to merge Ślimaki is a modification to mergekit's sparsify.py so that it auto-shrinks epsilon. With the configuration above (density 0.75, epsilon 0.25 for every model), density + epsilon equals 1.0, which the stock della_magprune rejects with a ValueError; the patched version shrinks epsilon to just below the allowed maximum instead of failing.

Before

def della_magprune(
    tensor: torch.Tensor,
    density: float,
    epsilon: float,
    rescale_norm: Optional[RescaleNorm] = None,
) -> torch.Tensor:
    if density >= 1:
        return tensor
    if density <= 0:
        return torch.zeros_like(tensor)
    orig_shape = tensor.shape

    if density + epsilon >= 1 or density - epsilon <= 0:
        raise ValueError(
            "Epsilon must be chosen such that density +/- epsilon is in (0, 1)"
        )

    work_dtype = (
        tensor.dtype
        if tensor.device.type != "cpu" or tensor.dtype == torch.bfloat16
        else torch.float32
    )

    if len(tensor.shape) < 2:
        tensor = tensor.unsqueeze(0)
    magnitudes = tensor.abs()

    sorted_indices = torch.argsort(magnitudes, dim=1, descending=False)
    ranks = sorted_indices.argsort(dim=1).to(work_dtype) + 1

    min_ranks = ranks.min(dim=1, keepdim=True).values
    max_ranks = ranks.max(dim=1, keepdim=True).values
    rank_norm = ((ranks - min_ranks) / (max_ranks - min_ranks)).clamp(0, 1)
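    # Keep probability rises linearly with magnitude rank: from density - epsilon
    # for the smallest-magnitude entries up to density + epsilon for the largest.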
    probs = (density - epsilon) + rank_norm * 2 * epsilon
    mask = torch.bernoulli(probs).to(work_dtype)

    res = rescaled_masked_tensor(tensor.to(work_dtype), mask, rescale_norm)
    return res.to(tensor.dtype).reshape(orig_shape)

After

def della_magprune(
    tensor: torch.Tensor,
    density: float,
    epsilon: float,
    rescale_norm: Optional[RescaleNorm] = None,
) -> torch.Tensor:
    if density >= 1:
        return tensor
    if density <= 0:
        return torch.zeros_like(tensor)
    
    # --- SAFETY GUARD START ---
    # Ensure density isn't exactly 0 or 1
    density = max(1e-4, min(1.0 - 1e-4, density))
    
    # Epsilon must be < density AND < (1 - density)
    # If the optimizer guessed a bad epsilon, we shrink it to the max allowed value
    max_epsilon = min(density, 1.0 - density) - 1e-4
    if abs(epsilon) > max_epsilon:
        epsilon = max_epsilon if epsilon > 0 else -max_epsilon
    # --- SAFETY GUARD END ---

    orig_shape = tensor.shape
    work_dtype = (
        tensor.dtype
        if tensor.device.type != "cpu" or tensor.dtype == torch.bfloat16
        else torch.float32
    )

    if len(tensor.shape) < 2:
        tensor = tensor.unsqueeze(0)
    magnitudes = tensor.abs()

    sorted_indices = torch.argsort(magnitudes, dim=1, descending=False)
    ranks = sorted_indices.argsort(dim=1).to(work_dtype) + 1

    min_ranks = ranks.min(dim=1, keepdim=True).values
    max_ranks = ranks.max(dim=1, keepdim=True).values
    rank_norm = ((ranks - min_ranks) / (max_ranks - min_ranks)).clamp(0, 1)
    
    # Now this line is guaranteed not to produce values < 0 or > 1
    probs = (density - epsilon) + rank_norm * 2 * epsilon
    mask = torch.bernoulli(probs.clamp(0, 1)).to(work_dtype)

    res = rescaled_masked_tensor(tensor.to(work_dtype), mask, rescale_norm)
    return res.to(tensor.dtype).reshape(orig_shape)
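
For illustration, here is a minimal standalone sketch (not part of mergekit; shrink_epsilon is a hypothetical helper) showing what the guard does with the values used in the configuration above:

def shrink_epsilon(density: float, epsilon: float) -> float:
    # Same logic as the safety guard above: clamp epsilon so that
    # density +/- epsilon stays strictly inside (0, 1).
    density = max(1e-4, min(1.0 - 1e-4, density))
    max_epsilon = min(density, 1.0 - density) - 1e-4
    if abs(epsilon) > max_epsilon:
        epsilon = max_epsilon if epsilon > 0 else -max_epsilon
    return epsilon

# With this card's config (density=0.75, epsilon=0.25), density + epsilon == 1.0
# would trip the original ValueError; the guard shrinks epsilon to ~0.2499 so
# the keep probabilities stay strictly inside (0, 1).
print(shrink_epsilon(0.75, 0.25))  # ~0.2499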