---
license: mit
task_categories:
- text-generation
- text2text-generation
language:
- en
tags:
- writing
- editing
- steerability
pretty_name: SteerBench
size_categories:
- 1K<n<10K
---

# Measuring Steerability in Large Language Models

Official dataset release of a 4D steerability probe over a goal-space of reading difficulty, formality, textual diversity, and text length. The initial probe contains the 2,048 prompts used in our work (32 different rewrites applied to each of 64 source texts).

[Demo](https://steerability.onrender.com/) | [Code](https://github.com/tchang1997/steerability) | [Website](https://steerability.org/) | [Paper](https://arxiv.org/abs/2505.23816)

## Dataset format

Each row contains a source text, along with its mappings in goal-space. For the source text, we provide normalized and unnormalized values of:
* Flesch-Kincaid Grade Level (`reading_difficulty`)
* Heylighen-Dewaele F-Score (`formality`)
* Measure of Textual Lexical Diversity (`textual_diversity`)
* Word count (`text_length`)
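
For intuition, the reading-difficulty axis can be approximated in a few lines. This is a rough sketch using the standard Flesch-Kincaid Grade Level formula with a naive syllable heuristic; the dataset's own values may come from a different implementation:

```python
import re

def syllable_count(word: str) -> int:
    """Rough heuristic: count runs of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(syllable_count(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59
```

Short, monosyllabic sentences score near (or below) grade 0, while long sentences with polysyllabic words push the grade upward.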

We also provide goal vectors (`delta_*` or `target_*`) for all goal dimensions.
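
The relationship between these columns can be sketched as follows. The field values below are made up, and the exact normalization scheme is described in the paper, not reproduced here:

```python
# Hedged sketch: in normalized goal-space, a goal vector (delta) is the
# offset from the source text's position to the requested target.
source = {"reading_difficulty": 0.62, "formality": 0.40,
          "textual_diversity": 0.55, "text_length": 0.30}  # made-up values
target = {"reading_difficulty": 0.30, "formality": 0.70,
          "textual_diversity": 0.55, "text_length": 0.30}

# delta_* = target_* - source position, per goal dimension
delta = {dim: target[dim] - source[dim] for dim in source}
```

A negative `delta_reading_difficulty` asks for an easier-to-read rewrite; a zero entry asks the model to hold that dimension fixed.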

## Results

Shown here: steering error of recent models (`median (IQR)`).
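
As a rough illustration of what is being summarized: steering error measures how far a model's rewrite lands from the requested target in goal-space. Treating it as a Euclidean distance in normalized goal-space is an assumption here; see the paper for the precise definition:

```python
import math

def steering_error(output_pos: dict, target: dict) -> float:
    """Illustrative: distance between where the rewrite landed in
    normalized goal-space and where it was asked to go."""
    return math.sqrt(sum((output_pos[d] - target[d]) ** 2 for d in target))

# Made-up 2D example: the model overshot formality and undershot difficulty.
out = {"reading_difficulty": 0.35, "formality": 0.60}
tgt = {"reading_difficulty": 0.30, "formality": 0.70}
err = steering_error(out, tgt)
```

The table reports the median of this per-prompt error across the probe, with the interquartile range in parentheses; lower is better.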

**Want to add a model?** Reach out at `ctrenton` at `umich` dot `edu`!

<table>
<tr>
<td><b>Model family</b></td>
<td><b>Model name</b></td>
<td><b>SteerBench-2506 (↓)</b></td>
</tr>
<tr>
<td rowspan=5><b>Llama3</b></td>
<td>Llama3-8B</td>
<td>0.495 (0.252)</td>
</tr>
<tr>
<td>Llama3.1-8B</td>
<td>0.452 (0.256)</td>
</tr>
<tr>
<td>Llama3-70B</td>
<td>0.452 (0.239)</td>
</tr>
<tr>
<td>Llama3.1-70B</td>
<td>0.452 (0.239)</td>
</tr>
<tr>
<td>Llama3.3-70B</td>
<td>0.452 (0.256)</td>
</tr>
<tr>
<td rowspan=4><b>GPT</b></td>
<td>GPT-3.5 turbo</td>
<td>0.535 (0.251)</td>
</tr>
<tr>
<td>GPT-4 turbo</td>
<td>0.515 (0.266)</td>
</tr>
<tr>
<td>GPT-4o</td>
<td>0.474 (0.239)</td>
</tr>
<tr>
<td>GPT-4.1</td>
<td>0.429 (0.203)</td>
</tr>
<tr>
<td rowspan=2><b>OpenAI o-series</b></td>
<td>o1-mini</td>
<td><i>0.495 (0.261)*</i></td>
</tr>
<tr>
<td>o3-mini</td>
<td><i>0.515 (0.232)*</i></td>
</tr>
<tr>
<td rowspan=2><b>Deepseek-R1</b></td>
<td>Deepseek-R1-Distill-Llama-8B</td>
<td>0.535 (0.281)</td>
</tr>
<tr>
<td>Deepseek-R1-Distill-Llama-70B</td>
<td>0.474 (0.256)</td>
</tr>
<tr>
<td rowspan=4><b>Qwen3</b></td>
<td>Qwen-32B (no thinking)</td>
<td>0.535 (0.271)</td>
</tr>
<tr>
<td>Qwen-32B (thinking)</td>
<td>0.535 (0.271)</td>
</tr>
<tr>
<td>Qwen-30B-A3B (no thinking)</td>
<td>0.495 (0.273)</td>
</tr>
<td>0.495 (0.273)</td>
</tr>
</table>

\* >1% invalid response rate