Use the code below as a reference for evaluating a basic RegressLM model (better, more models to come! :) )

**Note that the best practice is to fine-tune this base model on more NAS ONNX graph data**, and then few-shot transfer to the target search space (say, NASNet, etc.).
If we want to fine-tune on 16 examples from, say, ENAS, the optimal strategy we found was to construct a small NAS dataset from e.g. DARTS, NASNet, Amoeba, and ENAS, using roughly (1024, 1024, 1024, 16) samples respectively, and to up-sample (repeat) the 16 ENAS samples 8 times. Random-shuffle the dataset and fine-tune the RLM with a 1e-4 learning rate (cosine decay) to avoid catastrophic forgetting, as sketched below.
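A minimal sketch of this mixing recipe, assuming the HuggingFace `datasets` library; the `search_space` column used for filtering is an assumption for illustration, not the actual GraphArch-Regression schema, and the `Linear` layer stands in for the pretrained RLM:

```
import torch
from datasets import load_dataset, concatenate_datasets

ds = load_dataset("akhauriyash/GraphArch-Regression", split="train")

def take(space, n):
    # Assumed schema: a "search_space" column tagging each graph's NAS family.
    sub = ds.filter(lambda ex: ex["search_space"] == space)
    return sub.select(range(min(n, len(sub))))

# Roughly (1024, 1024, 1024, 16) samples from DARTS, NASNet, Amoeba, ENAS.
darts = take("DARTS", 1024)
nasnet = take("NASNet", 1024)
amoeba = take("Amoeba", 1024)
enas = take("ENAS", 16)

# Up-sample (repeat) the 16 target-space examples 8x, then random-shuffle.
mixed = concatenate_datasets([darts, nasnet, amoeba] + [enas] * 8).shuffle(seed=0)

# Fine-tune at 1e-4 with cosine decay; this Linear layer is a placeholder
# for the pretrained RLM, whose loading code is omitted here.
model = torch.nn.Linear(8, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=len(mixed))
```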
The code below is just illustrative, to demonstrate non-trivial NAS performance: the model's training corpus was only 1% NAS data, the rest was code.

```
import torch