Leaderboard update
README.md CHANGED

@@ -121,23 +121,27 @@ Top models evaluated on this validation dataset:
 | Model | Overall | img→text | img→markdown | Grounding | KIE (JSON) | VQA |
 |-------|---------|----------|--------------|-----------|------------|-----|
 | **Gemini-2.5-pro** | **0.682** | 0.836 | 0.745 | 0.084 | 0.891 | 0.853 |
+| **Alice AI VLM dev** | **0.646** | 0.885 | 0.776 | 0.060 | 0.729 | 0.781 |
 | **Gemini-2.5-flash** | **0.644** | 0.796 | 0.683 | 0.067 | 0.841 | 0.833 |
 | **gpt-4.1-mini** | **0.643** | 0.866 | 0.724 | 0.091 | 0.750 | 0.782 |
 | **Claude-4.5-Sonnet** | **0.639** | 0.723 | 0.676 | 0.377 | 0.728 | 0.692 |
-
+| Cotype VL (32B 8 bit) | 0.639 | 0.797 | 0.756 | 0.262 | 0.694 | 0.685 |
+| gpt-5-mini | 0.632 | 0.797 | 0.678 | 0.126 | 0.784 | 0.776 |
 | Qwen2.5-VL-72B | 0.631 | 0.848 | 0.712 | 0.220 | 0.644 | 0.732 |
 | gpt-5-mini (responses) | 0.594 | 0.743 | 0.567 | 0.118 | 0.811 | 0.731 |
 | Qwen3-VL-30B-A3B | 0.589 | 0.802 | 0.688 | 0.053 | 0.661 | 0.743 |
 | gpt-4.1 | 0.587 | 0.709 | 0.693 | 0.086 | 0.662 | 0.784 |
-| Qwen3-VL-32B | 0.585 | 0.732 | 0.646 | 0.054 | 0.724 | 0.770 |
+| Qwen3-VL-32B-Instruct | 0.585 | 0.732 | 0.646 | 0.054 | 0.724 | 0.770 |
 | Qwen3-VL-30B-A3B-FP8 | 0.583 | 0.798 | 0.683 | 0.056 | 0.638 | 0.740 |
 | Qwen2.5-VL-32B | 0.577 | 0.767 | 0.649 | 0.232 | 0.493 | 0.743 |
 | gpt-5 (responses) | 0.573 | 0.746 | 0.650 | 0.080 | 0.687 | 0.704 |
 | Qwen2.5-VL-7B | 0.549 | 0.779 | 0.704 | 0.185 | 0.426 | 0.651 |
+| Qwen3-VL-4B-Instruct | 0.535 | 0.707 | 0.685 | 0.063 | 0.546 | 0.675 |
 | gpt-4.1-nano | 0.503 | 0.676 | 0.672 | 0.028 | 0.567 | 0.573 |
 | gpt-5-nano | 0.503 | 0.487 | 0.583 | 0.091 | 0.661 | 0.693 |
-| Qwen3-VL-2B | 0.439 | 0.592 | 0.613 | 0.029 | 0.356 | 0.605 |
+| Qwen3-VL-2B-Instruct | 0.439 | 0.592 | 0.613 | 0.029 | 0.356 | 0.605 |
 | Qwen2.5-VL-3B | 0.402 | 0.613 | 0.654 | 0.045 | 0.203 | 0.494 |
+| Pixtral-12B-2409 | 0.342 | 0.327 | 0.555 | 0.026 | 0.325 | 0.475 |
 
 *Scale: 0.0 - 1.0 (higher is better)*
 
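For reference, the Overall column in the updated table is consistent with an unweighted mean of the five per-task scores (img→text, img→markdown, Grounding, KIE (JSON), VQA): for Gemini-2.5-pro, (0.836 + 0.745 + 0.084 + 0.891 + 0.853) / 5 ≈ 0.682. The sketch below recomputes it under that assumption of simple averaging; the task keys and helper name are illustrative, not part of the dataset, and the row values are copied from the table above.

```python
# Minimal sketch: recompute "Overall" as the unweighted mean of the five
# per-task scores (an assumption consistent with the leaderboard rows above).
TASKS = ["img2text", "img2markdown", "grounding", "kie_json", "vqa"]

def overall(scores: dict[str, float]) -> float:
    """Unweighted mean of the per-task scores, rounded to three decimals."""
    return round(sum(scores[t] for t in TASKS) / len(TASKS), 3)

# Example rows taken from the leaderboard above.
rows = {
    "Gemini-2.5-pro":   {"img2text": 0.836, "img2markdown": 0.745, "grounding": 0.084, "kie_json": 0.891, "vqa": 0.853},
    "Alice AI VLM dev": {"img2text": 0.885, "img2markdown": 0.776, "grounding": 0.060, "kie_json": 0.729, "vqa": 0.781},
}

for model, scores in rows.items():
    print(f"{model}: {overall(scores)}")  # prints 0.682 and 0.646, matching the table
```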