• Weights ungated.
  • Anti-slop, anti-trope DPO of schonsense/Diagesis.
  • This model is embedding-matched to Llama 3.3 Instruct, providing unique merge fuel.

Sampler settings:

temp: 0.8 - 1.2
minp: 0.01 - 0.03
top nsigma: 1.5 - 1.87
DRY: 0.8, 1.75, 5, 1024
temp last


You can also turn nsigma off if you keep temp in the 0.8-0.9 range.
Further testing has solidified my personal preference at:
temp: 0.95
minp: 0.02
DRY: 0.8, 1.75, 5, 1024
rep pen: 1.001
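For concreteness, the preferred settings above could be sent to a llama.cpp-style backend roughly like this. The key names (`dry_multiplier`, etc.) follow common llama.cpp/SillyTavern conventions but are assumptions; verify them against your backend's sampler API:

```python
# Sketch: the author's preferred sampler settings as a request payload.
# Parameter key names are assumptions -- check your backend's docs.
preferred = {
    "temperature": 0.95,
    "min_p": 0.02,
    "repeat_penalty": 1.001,
    # DRY: 0.8, 1.75, 5, 1024 -> multiplier, base, allowed length, range
    "dry_multiplier": 0.8,
    "dry_base": 1.75,
    "dry_allowed_length": 5,
    "dry_penalty_range": 1024,
}

def within_recommended(cfg):
    """Check a config against the recommended ranges listed above."""
    return (0.8 <= cfg["temperature"] <= 1.2
            and 0.01 <= cfg["min_p"] <= 0.03)

print(within_recommended(preferred))  # True
```

Note that nsigma is omitted here, matching the advice above that it can be turned off when temp stays below ~0.95.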

Will output tonal dialog if present in the context window; otherwise it will mirror the dialog provided, structured as char_name (tone): "dialog". It also respects scene/narrator interrogation via OOC notes, e.g. ooc: What is XYZ thinking here? or ooc: I want this interaction to progress slower/faster/differently. The LLM should respond back with an ooc: message of its own; if it does not, just prefill one.
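The dialog and OOC conventions above can be illustrated with a small formatting helper (the function and character names are illustrative, not part of the model):

```python
def dialog_line(char_name, tone, text):
    """Format a line in the char_name (tone): "dialog" convention."""
    return f'{char_name} ({tone}): "{text}"'

def ooc(message):
    """Wrap an out-of-character steering note for the model."""
    return f"ooc: {message}"

print(dialog_line("Mira", "wary", "Who goes there?"))
# Mira (wary): "Who goes there?"
print(ooc("I want this interaction to progress slower."))
# ooc: I want this interaction to progress slower.
```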

System Prompt:

<behavior>
You will act as a master Dungeon Master, guiding {{user}} in a mature, long-form fantasy roleplay. The narrative is unfiltered and will explore dark themes, gritty realism, and complex moral choices without reservation.
Create a player-driven roleplay with realistic consequences. Progress plots slowly, providing {{user}} with interesting choices. Introduce additional characters as needed.
    <text_formatting_and_markdown>
    Responses must be 400 tokens in length.
    Actions and narration must occur in plain text.
    Internal thoughts must occur within *asterisks*
        <dialog_structure>
        Dialog must:
        1. occur within "quotation marks"
        2. begin on a newline when a character speaks for the first time.
        3. indicate the speaker and emotional tone of their dialog, structured as character_name (tone): "dialog"
        </dialog_structure>
    </text_formatting_and_markdown>
</behavior>
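When wiring this up, the block above goes in the system message, with the {{user}} placeholder substituted by your frontend. A minimal sketch, assuming an OpenAI-style message list (the prompt string is abbreviated here; use the full block above):

```python
# Abbreviated stand-in for the full <behavior> prompt above.
SYSTEM_PROMPT = """<behavior>
You will act as a master Dungeon Master, guiding {{user}} in a
mature, long-form fantasy roleplay.
</behavior>"""

def build_messages(user_name, user_turn):
    # Substitute the {{user}} placeholder the way roleplay frontends
    # do, then build a standard chat message list.
    system = SYSTEM_PROMPT.replace("{{user}}", user_name)
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_turn},
    ]

msgs = build_messages("Aldric", 'Aldric (calm): "I enter the tavern."')
print("{{user}}" in msgs[0]["content"])  # False: placeholder substituted
```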

Model size: 71B params (Safetensors, BF16)

Model tree for schonsense/Tropoplectic:
Finetuned (1): this model
Merges: 1 model
Quantizations: 3 models