shadow-clown-7B-dare is a DARE merge of the following models using mergekit:

- yam-peleg/Experiment26-7B
- CorticalStack/pastiche-crown-clown-7b-dare-dpo
- CultriX/NeuralTrix-7B-dpo
- CorticalStack/neurotic-crown-clown-7b-ties

See the paper [Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch](https://arxiv.org/abs/2311.03099) for more on the method.
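In a DARE merge, each fine-tuned model contributes a delta (its parameter difference from the base model); a random fraction of each delta's entries is dropped and the survivors are rescaled so the delta's expected value is preserved, after which the sparsified deltas are combined (here with TIES-style sign election, via `dare_ties`). The following is a minimal PyTorch sketch of the drop-and-rescale step, not mergekit's actual implementation; `density` corresponds to the `density` parameter in the configuration below (the fraction of delta entries kept):

```python
import torch

def dare_sparsify(finetuned: torch.Tensor, base: torch.Tensor, density: float) -> torch.Tensor:
    """Drop-and-rescale (DARE): keep each delta entry with probability
    `density`, zero out the rest, and rescale survivors by 1/density so
    the delta's expected value is unchanged."""
    delta = finetuned - base
    keep = torch.bernoulli(torch.full_like(delta, density))
    return delta * keep / density

# With density=0.52 (as in the config below), ~52% of each delta survives.
# The merge then adds a weighted sum of sparsified deltas back onto the base:
#   merged = base + sum_i weight_i * dare_sparsify(model_i, base, density)
```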
The following mergekit configuration was used to produce shadow-clown-7B-dare:

```yaml
models:
  - model: yam-peleg/Experiment26-7B
  - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
    parameters:
      density: 0.52
      weight: 0.4
  - model: CultriX/NeuralTrix-7B-dpo
    parameters:
      density: 0.52
      weight: 0.2
  - model: CorticalStack/neurotic-crown-clown-7b-ties
    parameters:
      density: 0.52
      weight: 0.3
merge_method: dare_ties
base_model: yam-peleg/Experiment26-7B
parameters:
  int8_mask: true
dtype: bfloat16
```
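To reproduce the merge, save the YAML above as `config.yml` and run it through mergekit's `mergekit-yaml` entry point (e.g. `mergekit-yaml config.yml ./shadow-clown-7B-dare`). The merged checkpoint then loads like any other Transformers causal LM; a sketch, assuming the weights are published under the repo id `CorticalStack/shadow-clown-7B-dare`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, inferred from the model name; adjust if the weights live elsewhere.
model_id = "CorticalStack/shadow-clown-7B-dare"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# bfloat16 matches the dtype declared in the merge configuration above.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

prompt = "Explain what a DARE model merge does in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```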