SinclairWang committed d6ea141 (verified) · 1 Parent(s): 82029b8

Create README.md

Files changed (1): README.md added (+75, −0)
---
license: llama3.2
datasets:
- OctoThinker/MegaMath-Web-Pro-Max
- LLM360/MegaMath
language:
- en
base_model:
- meta-llama/Llama-3.2-1B
pipeline_tag: text-generation
---

# [OctoThinker: Mid-training Incentivizes Reinforcement Learning Scaling](https://arxiv.org/abs/2506.20512)

## OctoThinker-1B-Short-Zero

The OctoThinker family is built on carefully studied mid-training insights, starting from the Llama-3 family, to create reinforcement-learning-friendly base language models.

OctoThinker-1B-Short-Zero is trained with R1-Zero-style reinforcement learning, starting from OctoThinker-1B-Short-Base without any supervised fine-tuning (SFT).
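For quick experimentation, a minimal inference sketch with Hugging Face `transformers` is shown below. The repository id `OctoThinker/OctoThinker-1B-Short-Zero`, the dtype, and the prompt format are assumptions rather than settings confirmed by this card; adjust them to your setup.

```python
# Minimal inference sketch; the repo id, dtype, and prompt format below are
# assumptions, not confirmed by the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OctoThinker/OctoThinker-1B-Short-Zero"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; use float16/float32 if preferred
    device_map="auto",           # requires `accelerate`; drop it to load on CPU
)

# R1-Zero-style models are typically prompted to reason step by step before answering.
prompt = "Question: What is 12 * 15?\nLet's think step by step.\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Print only the newly generated continuation.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```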
### Training Recipe for OctoThinker-1B-Short-Base

<div style="display: flex; justify-content: left; gap: 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/62cbeb2d72dfd24b86bdf977/nQEwjHnW88XIThK8jHAA3.png" alt="Data Pipeline" style="width:90%;">
</div>
### Evaluation Results of OctoThinker-1B-Base Series

Note that we adopt few-shot prompting evaluation for these base language models (an illustrative prompt format is sketched after the figure below).

<div style="display: flex; justify-content: left; gap: 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/62cbeb2d72dfd24b86bdf977/3n1cnG81wLjjPwzyMyQ-o.png" alt="Evaluation results of the OctoThinker-1B-Base series" style="width:80%;">
</div>
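As a concrete illustration of what few-shot prompting for a base model looks like, a hypothetical prompt-construction sketch follows; the exemplars and answer format are placeholders and not the paper's actual evaluation harness.

```python
# Illustrative few-shot prompt construction for base-model evaluation.
# The exemplars and answer format are hypothetical placeholders, not the exact
# evaluation harness used in the paper.
few_shot_examples = [
    ("What is 7 + 5?", "7 + 5 = 12. The answer is 12."),
    ("What is 9 * 6?", "9 * 6 = 54. The answer is 54."),
]

def build_prompt(question: str) -> str:
    """Concatenate worked exemplars before the target question."""
    parts = [f"Question: {q}\nAnswer: {a}\n" for q, a in few_shot_examples]
    parts.append(f"Question: {question}\nAnswer:")
    return "\n".join(parts)

print(build_prompt("What is 15 * 12?"))
```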
### RL Training Dynamics of OctoThinker-1B-Zero Series

<div style="display: flex; justify-content: left; gap: 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/62cbeb2d72dfd24b86bdf977/xoiYwO-fsOCdPTKsASJNK.png" alt="RL training dynamics of the OctoThinker-1B-Zero series" style="width:80%;">
</div>
### More about OctoThinker

<div style="display: flex; justify-content: left; gap: 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/62cbeb2d72dfd24b86bdf977/bn85CEB_DW6azJ7KJp11Q.png" alt="Overview of the OctoThinker project" style="width:100%;">
</div>
## Citation

Check out our [paper](https://arxiv.org/abs/2506.20512) for more details. If you use our models or datasets, or find our work useful, please cite:

```bibtex
@article{wang2025octothinker,
  title={OctoThinker: Mid-training Incentivizes Reinforcement Learning Scaling},
  author={Wang, Zengzhi and Zhou, Fan and Li, Xuefeng and Liu, Pengfei},
  year={2025},
  journal={arXiv preprint arXiv:2506.20512},
  note={Preprint}
}
```