# AP-MAE-SC2-15B

This model is currently anonymized during the paper review process.

The AP-MAE transformer model design and configuration are available in the reproduction package attached to the submission.

This version of AP-MAE is trained on attention heads generated by StarCoder2-15B during inference. The inference task used to generate the attention outputs is fill-in-the-middle (FiM) token prediction for a randomly masked section of length 3-10 in Java code, with exactly 256 tokens of surrounding context.
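
As a rough illustration of how attention outputs of this kind can be collected, the sketch below runs a single FiM-style forward pass through StarCoder2-15B with Hugging Face `transformers` and keeps the per-layer, per-head attention maps. This is not the AP-MAE data pipeline (which is described in the reproduction package); the model ID, the FiM special tokens (`<fim_prefix>`, `<fim_suffix>`, `<fim_middle>`), and the example prompt are assumptions for illustration only.

```python
# Illustrative sketch only: collect per-layer, per-head attention maps from
# StarCoder2-15B for one FiM prompt. Model ID, FiM tokens, and the prompt
# below are assumptions; the actual AP-MAE pipeline is in the reproduction package.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigcode/starcoder2-15b"  # assumed Hugging Face model ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="eager",  # eager attention so weights are materialized
)

# FiM prompt: prefix + suffix form the surrounding context, the middle is masked.
prefix = "public int add(int a, int b) {\n    "
suffix = "\n}"
prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# outputs.attentions is a tuple of length num_layers; each entry has shape
# (batch, num_heads, seq_len, seq_len). These per-head maps are the kind of
# attention outputs AP-MAE is trained on.
attn = torch.stack(outputs.attentions)  # (layers, batch, heads, seq, seq)
print(attn.shape)
```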