---
license: mit
task_categories:
- visual-question-answering
language:
- en
tags:
- video
- text
- embodied
- spatial
- multimodal
size_categories:
- n<1K
---

# BasicSpatialAbility.code

[![Published Paper](https://img.shields.io/badge/Published-ACL_Paper-red)](https://aclanthology.org/2025.acl-long.567/)
[![Arxiv](https://img.shields.io/badge/arXiv-2502.11859-darkred?logo=arxiv)](https://arxiv.org/abs/2502.11859)
[![Code](https://img.shields.io/badge/Github-Code-blue?logo=github)](https://github.com/EmbodiedCity/BasicSpatialAbility.code)
[![Dataset](https://img.shields.io/badge/Hugging_Face-Dataset-yellow?logo=huggingface)](https://huggingface.co/datasets/EmbodiedCity/BasicSpatialAbility)

The Theory of Multiple Intelligences underscores the hierarchical nature of cognitive capabilities. To advance Spatial Artificial Intelligence, we pioneer a psychometric framework defining five Basic Spatial Abilities (BSAs) in Visual Language Models (VLMs): Spatial Perception, Spatial Relation, Spatial Orientation, Mental Rotation, and Spatial Visualization. Benchmarking 13 mainstream VLMs through nine validated psychometric experiments reveals significant gaps versus humans, with three key findings: 1) VLMs mirror human hierarchies (strongest in 2D orientation, weakest in 3D rotation) with independent BSAs; 2) Many smaller models surpass larger counterparts, with Qwen leading and InternVL2 lagging; 3) Interventions like CoT and few-shot training show limits from architectural constraints, while ToT demonstrates the most effective enhancement. Identified barriers include weak geometry encoding and missing dynamic simulation. By linking Psychometrics to VLMs, we provide a comprehensive BSA evaluation benchmark, a methodological perspective for embodied AI development, and a cognitive science-informed roadmap for achieving human-like spatial intelligence.

| Type | Definition | Tests |
|:----------------------:|:---------------------------------------------------------------------------------------------------------------------:|:-----------------------:|
| Spatial Perception | The ability to perceive horizontal and vertical orientations without interference from miscellaneous information. | SVT |
| Spatial Relation | The ability to recognize relationships between the parts of an entity. | NCIT, DAT:SR, R-Cube-SR |
| Spatial Orientation | The ability to navigate or enter a given spatial state. | MRMT |
| Mental Rotation | The ability to mentally rotate 3D objects. | MRT, PSVT:R |
| Spatial Visualization | The ability to mentally manipulate and transform 2D and 3D objects. | SBST, R-Cube-Vis |

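The tests above are multiple-choice, so a model's performance on each BSA reduces to per-test accuracy. Below is a minimal scoring sketch; the record schema (`test`, `answer`, `prediction`) is illustrative only and may differ from the dataset's actual field names on the Hugging Face Hub.

```python
from collections import defaultdict

# Illustrative records: which psychometric test an item belongs to,
# its ground-truth answer, and a model's predicted answer.
# (Hypothetical schema -- check the dataset card for the real fields.)
records = [
    {"test": "SVT",    "answer": "B", "prediction": "B"},
    {"test": "MRT",    "answer": "C", "prediction": "A"},
    {"test": "MRT",    "answer": "D", "prediction": "D"},
    {"test": "PSVT:R", "answer": "A", "prediction": "A"},
]

def accuracy_by_test(records):
    """Group records by test name and return per-test accuracy."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["test"]] += 1
        correct[r["test"]] += int(r["prediction"] == r["answer"])
    return {t: correct[t] / total[t] for t in total}

scores = accuracy_by_test(records)
# e.g. scores["MRT"] == 0.5 for the sample records above
```

Aggregating the per-test scores within each BSA row of the table then yields a per-ability profile comparable across models.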
<p align="center">
  <img width="600" src="https://github.com/EmbodiedCity/BasicSpatialAbility.code/raw/main/framework.jpg">
</p>

<p align="center">
  The Framework of Basic Spatial Abilities (image sources are cited in the paper)
</p>

# Citation

If you use this project in your research, please cite the following paper:

```bibtex
@inproceedings{xu-etal-2025-defining,
    title = "Defining and Evaluating Visual Language Models' Basic Spatial Abilities: A Perspective from Psychometrics",
    author = "Xu, Wenrui and
      Lyu, Dalin and
      Wang, Weihang and
      Feng, Jie and
      Gao, Chen and
      Li, Yong",
    booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.acl-long.567/",
    doi = "10.18653/v1/2025.acl-long.567",
    pages = "11571--11590",
    ISBN = "979-8-89176-251-0"
}
```