Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error.
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 2 new columns ({'bbox', 'sent'}) and 2 missing columns ({'conversations', 'id'}).

This happened while the json dataset builder was generating data using

hf://datasets/JamesZGQ/EGM_Datasets/metadata/eval/refcoco+_testA.jsonl (at revision 3c4ef38d5ea7b81808a7738c141df7a917a02dbe)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
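When loading the data programmatically, one way around the mismatch is to point load_dataset at only the files that share a schema, instead of letting the default configuration merge everything. A minimal sketch: the eval path is taken from the error above, but the train glob is a guess about the repository layout and would need to be adjusted.

from datasets import load_dataset

# Referring-expression eval file with its own schema: image, sent, bbox, height, width.
eval_ds = load_dataset(
    "JamesZGQ/EGM_Datasets",
    data_files="metadata/eval/refcoco+_testA.jsonl",
    split="train",
)

# Conversation-style files with the id/image/height/width/conversations schema.
# The glob below is hypothetical; change it to match the actual file layout.
train_ds = load_dataset(
    "JamesZGQ/EGM_Datasets",
    data_files="metadata/train/*.jsonl",
    split="train",
)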
Traceback:    Traceback (most recent call last):
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1831, in _prepare_split_single
                  writer.write_table(table)
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 714, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
                  return cast_table_to_schema(table, schema)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              image: string
              sent: string
              bbox: list<item: double>
                child 0, item: double
              height: int64
              width: int64
              to
              {'id': Value('int64'), 'image': Value('string'), 'height': Value('int64'), 'width': Value('int64'), 'conversations': List({'from': Value('string'), 'value': Value('string')})}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1334, in compute_config_parquet_and_info_response
                  parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
                                                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 911, in stream_convert_to_parquet
                  builder._prepare_split(
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1702, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1833, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
              All the data files must have the same columns, but at some point there are 2 new columns ({'bbox', 'sent'}) and 2 missing columns ({'conversations', 'id'}).
              
              This happened while the json dataset builder was generating data using
              
              hf://datasets/JamesZGQ/EGM_Datasets/metadata/eval/refcoco+_testA.jsonl (at revision 3c4ef38d5ea7b81808a7738c141df7a917a02dbe)
              
              Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
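The CastError in the traceback can be reproduced in isolation: a table with the eval columns cannot be cast to the conversation-style schema because the column sets differ. A rough sketch, with values copied from the preview below; the exact exception raised by Dataset.cast may differ from the viewer's, but it fails for the same reason.

from datasets import Dataset, Features, Value

eval_like = Dataset.from_dict({
    "image": ["coco_flip/train2014/COCO_train2014_000000581739.jpg"],
    "sent": ["left vase"],
    "bbox": [[244.0, 307.0, 414.0, 925.0]],
    "height": [480],
    "width": [640],
})

train_schema = Features({
    "id": Value("int64"),
    "image": Value("string"),
    "height": Value("int64"),
    "width": Value("int64"),
    "conversations": [{"from": Value("string"), "value": Value("string")}],
})

eval_like.cast(train_schema)  # raises: the column names don't match the target schema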

Need help to make the dataset viewer work? Make sure to review how to configure the dataset viewer, and open a discussion for direct support.
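For the viewer itself, the fix suggested by the error is to declare one configuration per schema, for example via the configs section of the dataset's README YAML as described in the linked docs. Another option, sketched below, is to re-upload each schema under its own named configuration with push_to_hub. The config name is illustrative, and pushing rewrites data in the repo and requires write access, so treat this as a sketch rather than a drop-in command.

from datasets import load_dataset

# Load the eval file by itself, then publish it under a dedicated configuration name
# so it no longer shares a config with the conversation-style files.
eval_ds = load_dataset(
    "JamesZGQ/EGM_Datasets",
    data_files="metadata/eval/refcoco+_testA.jsonl",
    split="train",
)
eval_ds.push_to_hub("JamesZGQ/EGM_Datasets", config_name="refcoco_plus_testA")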

Columns: id (int64), image (string), height (int64), width (int64), conversations (list). Each preview row below lists these fields in this order; a sketch for parsing the conversations format follows the preview.
0
coco_flip/train2014/COCO_train2014_000000581857.jpg
640
427
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>woman in gray shirt facing camera on left</ref>" }, { "from": "gpt", "value": "<ref>woman in gray shirt facing camera on left</ref><box>[304, 408, 495, 739]</box>" } ]
1
coco_flip/train2014/COCO_train2014_000000581839.jpg
640
466
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>standing</ref>" }, { "from": "gpt", "value": "<ref>standing</ref><box>[453, 142, 633, 526]</box>" } ]
2
coco_flip/train2014/COCO_train2014_000000581739.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>left vase</ref>" }, { "from": "gpt", "value": "<ref>left vase</ref><box>[244, 307, 414, 925]</box>" } ]
3
coco_flip/train2014/COCO_train2014_000000581686.jpg
425
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>right bird</ref>" }, { "from": "gpt", "value": "<ref>right bird</ref><box>[358, 617, 573, 998]</box>" } ]
4
coco_flip/train2014/COCO_train2014_000000581657.jpg
640
480
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>second laptop from front left center</ref>" }, { "from": "gpt", "value": "<ref>second laptop from front left center</ref><box>[4, 323, 541, 612]</box>" } ]
5
coco_flip/train2014/COCO_train2014_000000581657.jpg
640
480
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>laptop second from front</ref>" }, { "from": "gpt", "value": "<ref>laptop second from front</ref><box>[4, 323, 541, 612]</box>" } ]
6
coco_flip/train2014/COCO_train2014_000000581657.jpg
640
480
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>laptop above cellphone left</ref>" }, { "from": "gpt", "value": "<ref>laptop above cellphone left</ref><box>[4, 323, 541, 612]</box>" } ]
7
coco_flip/train2014/COCO_train2014_000000581629.jpg
332
500
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>bald man</ref>" }, { "from": "gpt", "value": "<ref>bald man</ref><box>[4, 373, 305, 987]</box>" } ]
8
coco_flip/train2014/COCO_train2014_000000581481.jpg
514
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>what are these foods thing on the left</ref>" }, { "from": "gpt", "value": "<ref>what are these foods thing on the left</ref><box>[47, 49, 410, 988]</box>" } ]
9
coco_flip/train2014/COCO_train2014_000000581419.jpg
612
612
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>scale bottom left</ref>" }, { "from": "gpt", "value": "<ref>scale bottom left</ref><box>[19, 398, 365, 737]</box>" } ]
10
coco_flip/train2014/COCO_train2014_000000581199.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>plaid</ref>" }, { "from": "gpt", "value": "<ref>plaid</ref><box>[170, 208, 418, 678]</box>" } ]
11
coco_flip/train2014/COCO_train2014_000000581198.jpg
425
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>head of person woman looking at bottom</ref>" }, { "from": "gpt", "value": "<ref>head of person woman looking at bottom</ref><box>[408, 724, 664, 998]</box>" } ]
12
coco_flip/train2014/COCO_train2014_000000581198.jpg
425
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>blurred head right of the woman</ref>" }, { "from": "gpt", "value": "<ref>blurred head right of the woman</ref><box>[408, 724, 664, 998]</box>" } ]
13
coco_flip/train2014/COCO_train2014_000000581136.jpg
495
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>striped chair by the piano</ref>" }, { "from": "gpt", "value": "<ref>striped chair by the piano</ref><box>[0, 435, 205, 751]</box>" } ]
14
coco_flip/train2014/COCO_train2014_000000581136.jpg
495
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>left most chair kinda dark</ref>" }, { "from": "gpt", "value": "<ref>left most chair kinda dark</ref><box>[0, 435, 205, 751]</box>" } ]
15
coco_flip/train2014/COCO_train2014_000000581136.jpg
495
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>sofa on left cut off</ref>" }, { "from": "gpt", "value": "<ref>sofa on left cut off</ref><box>[3, 812, 255, 998]</box>" } ]
16
coco_flip/train2014/COCO_train2014_000000581136.jpg
495
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>near chair</ref>" }, { "from": "gpt", "value": "<ref>near chair</ref><box>[298, 538, 735, 987]</box>" } ]
17
coco_flip/train2014/COCO_train2014_000000581057.jpg
426
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>left bike in between white shirt and black shirt</ref>" }, { "from": "gpt", "value": "<ref>left bike in between white shirt and black shirt</ref><box>[202, 16, 426, 450]</box>" } ]
18
coco_flip/train2014/COCO_train2014_000000581057.jpg
426
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>left top white shirt</ref>" }, { "from": "gpt", "value": "<ref>left top white shirt</ref><box>[0, 0, 316, 818]</box>" } ]
19
coco_flip/train2014/COCO_train2014_000000580913.jpg
427
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>food towards left edge</ref>" }, { "from": "gpt", "value": "<ref>food towards left edge</ref><box>[0, 555, 274, 985]</box>" } ]
20
coco_flip/train2014/COCO_train2014_000000580851.jpg
640
480
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>brush second from right</ref>" }, { "from": "gpt", "value": "<ref>brush second from right</ref><box>[597, 93, 954, 663]</box>" } ]
21
coco_flip/train2014/COCO_train2014_000000580849.jpg
429
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>first boy from the right</ref>" }, { "from": "gpt", "value": "<ref>first boy from the right</ref><box>[705, 180, 878, 651]</box>" } ]
22
coco_flip/train2014/COCO_train2014_000000580849.jpg
429
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>right man</ref>" }, { "from": "gpt", "value": "<ref>right man</ref><box>[705, 180, 878, 651]</box>" } ]
23
coco_flip/train2014/COCO_train2014_000000580741.jpg
368
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>couch the two are sitting on</ref>" }, { "from": "gpt", "value": "<ref>couch the two are sitting on</ref><box>[74, 383, 416, 814]</box>" } ]
24
coco_flip/train2014/COCO_train2014_000000580718.jpg
428
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>ummthe food in the bottom left corner i wouldnt eat it</ref>" }, { "from": "gpt", "value": "<ref>ummthe food in the bottom left corner i wouldnt eat it</ref><box>[0, 549, 280, 993]</box>" } ]
25
coco_flip/train2014/COCO_train2014_000000580718.jpg
428
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>lower left hand corner of pic the piece of upside down broccoli</ref>" }, { "from": "gpt", "value": "<ref>lower left hand corner of pic the piece of upside down broccoli</ref><box>[0, 549, 280, 993]</box>" } ]
26
coco_flip/train2014/COCO_train2014_000000580609.jpg
640
427
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>right guy in white</ref>" }, { "from": "gpt", "value": "<ref>right guy in white</ref><box>[628, 2, 997, 241]</box>" } ]
27
coco_flip/train2014/COCO_train2014_000000580609.jpg
640
427
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>man above green shirt guy</ref>" }, { "from": "gpt", "value": "<ref>man above green shirt guy</ref><box>[628, 2, 997, 241]</box>" } ]
28
coco_flip/train2014/COCO_train2014_000000580600.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>donut on top right white</ref>" }, { "from": "gpt", "value": "<ref>donut on top right white</ref><box>[588, 8, 845, 249]</box>" } ]
29
coco_flip/train2014/COCO_train2014_000000580600.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>left donut sprinkles white icing</ref>" }, { "from": "gpt", "value": "<ref>left donut sprinkles white icing</ref><box>[217, 297, 611, 696]</box>" } ]
30
coco_flip/train2014/COCO_train2014_000000580579.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>center donut with nuts on top</ref>" }, { "from": "gpt", "value": "<ref>center donut with nuts on top</ref><box>[383, 389, 616, 621]</box>" } ]
31
coco_flip/train2014/COCO_train2014_000000580510.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>girrffe on left</ref>" }, { "from": "gpt", "value": "<ref>girrffe on left</ref><box>[18, 82, 378, 892]</box>" } ]
32
coco_flip/train2014/COCO_train2014_000000580505.jpg
469
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>woman on right in black</ref>" }, { "from": "gpt", "value": "<ref>woman on right in black</ref><box>[545, 240, 993, 997]</box>" } ]
33
coco_flip/train2014/COCO_train2014_000000580434.jpg
412
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>one in blue</ref>" }, { "from": "gpt", "value": "<ref>one in blue</ref><box>[581, 113, 816, 889]</box>" } ]
34
coco_flip/train2014/COCO_train2014_000000580434.jpg
412
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>person on right</ref>" }, { "from": "gpt", "value": "<ref>person on right</ref><box>[581, 113, 816, 889]</box>" } ]
35
coco_flip/train2014/COCO_train2014_000000580344.jpg
424
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>hand in top left corner</ref>" }, { "from": "gpt", "value": "<ref>hand in top left corner</ref><box>[0, 1, 230, 332]</box>" } ]
36
coco_flip/train2014/COCO_train2014_000000580296.jpg
502
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>bear on the left</ref>" }, { "from": "gpt", "value": "<ref>bear on the left</ref><box>[187, 187, 551, 888]</box>" } ]
37
coco_flip/train2014/COCO_train2014_000000580296.jpg
502
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>right animal</ref>" }, { "from": "gpt", "value": "<ref>right animal</ref><box>[398, 340, 763, 757]</box>" } ]
38
coco_flip/train2014/COCO_train2014_000000580277.jpg
427
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>left blackwoman</ref>" }, { "from": "gpt", "value": "<ref>left blackwoman</ref><box>[0, 127, 176, 989]</box>" } ]
39
coco_flip/train2014/COCO_train2014_000000580257.jpg
508
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>lady on right those pizza slices looked great</ref>" }, { "from": "gpt", "value": "<ref>lady on right those pizza slices looked great</ref><box>[374, 59, 760, 986]</box>" } ]
40
coco_flip/train2014/COCO_train2014_000000580120.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>kid white tee lower right</ref>" }, { "from": "gpt", "value": "<ref>kid white tee lower right</ref><box>[716, 605, 955, 985]</box>" } ]
41
coco_flip/train2014/COCO_train2014_000000580120.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>standing boy with glasses</ref>" }, { "from": "gpt", "value": "<ref>standing boy with glasses</ref><box>[624, 476, 774, 854]</box>" } ]
42
coco_flip/train2014/COCO_train2014_000000580120.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>buttonup shirt standing</ref>" }, { "from": "gpt", "value": "<ref>buttonup shirt standing</ref><box>[624, 476, 774, 854]</box>" } ]
43
coco_flip/train2014/COCO_train2014_000000580052.jpg
375
500
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>boy in blue</ref>" }, { "from": "gpt", "value": "<ref>boy in blue</ref><box>[417, 272, 914, 769]</box>" } ]
44
coco_flip/train2014/COCO_train2014_000000579997.jpg
426
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>blurry person to left of players elbow</ref>" }, { "from": "gpt", "value": "<ref>blurry person to left of players elbow</ref><box>[152, 512, 335, 913]</box>" } ]
45
coco_flip/train2014/COCO_train2014_000000579909.jpg
375
500
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>freezer on right with water dispenser</ref>" }, { "from": "gpt", "value": "<ref>freezer on right with water dispenser</ref><box>[512, 124, 901, 998]</box>" } ]
46
coco_flip/train2014/COCO_train2014_000000579787.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>farthest left cow 99</ref>" }, { "from": "gpt", "value": "<ref>farthest left cow 99</ref><box>[0, 210, 192, 801]</box>" } ]
47
coco_flip/train2014/COCO_train2014_000000579787.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>second cow from top middle</ref>" }, { "from": "gpt", "value": "<ref>second cow from top middle</ref><box>[192, 204, 767, 803]</box>" } ]
48
coco_flip/train2014/COCO_train2014_000000579787.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>bigger cows back</ref>" }, { "from": "gpt", "value": "<ref>bigger cows back</ref><box>[192, 204, 767, 803]</box>" } ]
49
coco_flip/train2014/COCO_train2014_000000579663.jpg
612
612
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>girl left in red</ref>" }, { "from": "gpt", "value": "<ref>girl left in red</ref><box>[6, 399, 246, 725]</box>" } ]
50
coco_flip/train2014/COCO_train2014_000000579663.jpg
612
612
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>boy on right in the middle with red shirt and blond hair</ref>" }, { "from": "gpt", "value": "<ref>boy on right in the middle with red shirt and blond hair</ref><box>[651, 251, 845, 566]</box>" } ]
51
coco_flip/train2014/COCO_train2014_000000579571.jpg
374
500
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>left person</ref>" }, { "from": "gpt", "value": "<ref>left person</ref><box>[109, 331, 370, 986]</box>" } ]
52
coco_flip/train2014/COCO_train2014_000000579382.jpg
640
480
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>black motorcycle</ref>" }, { "from": "gpt", "value": "<ref>black motorcycle</ref><box>[21, 110, 473, 629]</box>" } ]
53
coco_flip/train2014/COCO_train2014_000000579329.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>table cloth hanging over edge</ref>" }, { "from": "gpt", "value": "<ref>table cloth hanging over edge</ref><box>[4, 561, 772, 998]</box>" } ]
54
coco_flip/train2014/COCO_train2014_000000579329.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>left most table</ref>" }, { "from": "gpt", "value": "<ref>left most table</ref><box>[0, 481, 473, 986]</box>" } ]
55
coco_flip/train2014/COCO_train2014_000000579329.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>right table</ref>" }, { "from": "gpt", "value": "<ref>right table</ref><box>[210, 289, 999, 732]</box>" } ]
56
coco_flip/train2014/COCO_train2014_000000579215.jpg
640
480
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>front black seat</ref>" }, { "from": "gpt", "value": "<ref>front black seat</ref><box>[452, 703, 996, 987]</box>" } ]
57
coco_flip/train2014/COCO_train2014_000000579215.jpg
640
480
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>seat foreground of picture on right</ref>" }, { "from": "gpt", "value": "<ref>seat foreground of picture on right</ref><box>[452, 703, 996, 987]</box>" } ]
58
coco_flip/train2014/COCO_train2014_000000579215.jpg
640
480
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>empty seat left</ref>" }, { "from": "gpt", "value": "<ref>empty seat left</ref><box>[18, 400, 389, 983]</box>" } ]
59
coco_flip/train2014/COCO_train2014_000000579179.jpg
512
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>guy kicking ball</ref>" }, { "from": "gpt", "value": "<ref>guy kicking ball</ref><box>[254, 30, 514, 953]</box>" } ]
60
coco_flip/train2014/COCO_train2014_000000579156.jpg
375
500
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>left most cow</ref>" }, { "from": "gpt", "value": "<ref>left most cow</ref><box>[0, 373, 195, 912]</box>" } ]
61
coco_flip/train2014/COCO_train2014_000000579156.jpg
375
500
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>far left cow</ref>" }, { "from": "gpt", "value": "<ref>far left cow</ref><box>[0, 373, 195, 912]</box>" } ]
62
coco_flip/train2014/COCO_train2014_000000579136.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>red sleeved</ref>" }, { "from": "gpt", "value": "<ref>red sleeved</ref><box>[522, 209, 751, 862]</box>" } ]
63
coco_flip/train2014/COCO_train2014_000000578924.jpg
640
480
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>blue left of arm</ref>" }, { "from": "gpt", "value": "<ref>blue left of arm</ref><box>[0, 434, 257, 994]</box>" } ]
64
coco_flip/train2014/COCO_train2014_000000578924.jpg
640
480
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>blue blanket to the left of the kids arm</ref>" }, { "from": "gpt", "value": "<ref>blue blanket to the left of the kids arm</ref><box>[0, 434, 257, 994]</box>" } ]
65
coco_flip/train2014/COCO_train2014_000000578884.jpg
640
463
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>back right red bus</ref>" }, { "from": "gpt", "value": "<ref>back right red bus</ref><box>[469, 391, 751, 695]</box>" } ]
66
coco_flip/train2014/COCO_train2014_000000578841.jpg
427
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>girl with pink hat and white gloves</ref>" }, { "from": "gpt", "value": "<ref>girl with pink hat and white gloves</ref><box>[313, 400, 402, 979]</box>" } ]
67
coco_flip/train2014/COCO_train2014_000000578766.jpg
640
595
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>catcher in background left</ref>" }, { "from": "gpt", "value": "<ref>catcher in background left</ref><box>[89, 3, 264, 428]</box>" } ]
68
coco_flip/train2014/COCO_train2014_000000578766.jpg
640
595
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>guy top left of kid</ref>" }, { "from": "gpt", "value": "<ref>guy top left of kid</ref><box>[89, 3, 264, 428]</box>" } ]
69
coco_flip/train2014/COCO_train2014_000000578766.jpg
640
595
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>guy with glove top left behind kids left shoulder</ref>" }, { "from": "gpt", "value": "<ref>guy with glove top left behind kids left shoulder</ref><box>[89, 3, 264, 428]</box>" } ]
70
coco_flip/train2014/COCO_train2014_000000578766.jpg
640
595
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>umpire</ref>" }, { "from": "gpt", "value": "<ref>umpire</ref><box>[419, 0, 621, 495]</box>" } ]
71
coco_flip/train2014/COCO_train2014_000000578652.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>middle cluster of bananas at the top</ref>" }, { "from": "gpt", "value": "<ref>middle cluster of bananas at the top</ref><box>[354, 37, 695, 357]</box>" } ]
72
coco_flip/train2014/COCO_train2014_000000578652.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>top small bunch of bananas</ref>" }, { "from": "gpt", "value": "<ref>top small bunch of bananas</ref><box>[354, 37, 695, 357]</box>" } ]
73
coco_flip/train2014/COCO_train2014_000000578626.jpg
640
480
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>girls head sitting up</ref>" }, { "from": "gpt", "value": "<ref>girls head sitting up</ref><box>[4, 489, 273, 800]</box>" } ]
74
coco_flip/train2014/COCO_train2014_000000578521.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>man at center</ref>" }, { "from": "gpt", "value": "<ref>man at center</ref><box>[362, 108, 647, 801]</box>" } ]
75
coco_flip/train2014/COCO_train2014_000000578513.jpg
427
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>number 19</ref>" }, { "from": "gpt", "value": "<ref>number 19</ref><box>[487, 249, 786, 953]</box>" } ]
76
coco_flip/train2014/COCO_train2014_000000578459.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>light blue shirt under womans hand</ref>" }, { "from": "gpt", "value": "<ref>light blue shirt under womans hand</ref><box>[565, 500, 726, 980]</box>" } ]
77
coco_flip/train2014/COCO_train2014_000000578459.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>woman in back in black tank top</ref>" }, { "from": "gpt", "value": "<ref>woman in back in black tank top</ref><box>[547, 0, 718, 404]</box>" } ]
78
coco_flip/train2014/COCO_train2014_000000578331.jpg
557
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>left woman</ref>" }, { "from": "gpt", "value": "<ref>left woman</ref><box>[0, 351, 438, 982]</box>" } ]
79
coco_flip/train2014/COCO_train2014_000000578326.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>man next to oven</ref>" }, { "from": "gpt", "value": "<ref>man next to oven</ref><box>[339, 421, 755, 982]</box>" } ]
80
coco_flip/train2014/COCO_train2014_000000578326.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>girl</ref>" }, { "from": "gpt", "value": "<ref>girl</ref><box>[236, 9, 825, 986]</box>" } ]
81
coco_flip/train2014/COCO_train2014_000000578154.jpg
422
600
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>lamb right</ref>" }, { "from": "gpt", "value": "<ref>lamb right</ref><box>[530, 286, 876, 919]</box>" } ]
82
coco_flip/train2014/COCO_train2014_000000578108.jpg
426
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>gray cat</ref>" }, { "from": "gpt", "value": "<ref>gray cat</ref><box>[315, 271, 792, 516]</box>" } ]
83
coco_flip/train2014/COCO_train2014_000000578063.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>triangle food on the left</ref>" }, { "from": "gpt", "value": "<ref>triangle food on the left</ref><box>[250, 381, 466, 708]</box>" } ]
84
coco_flip/train2014/COCO_train2014_000000578046.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>left piece</ref>" }, { "from": "gpt", "value": "<ref>left piece</ref><box>[0, 256, 441, 720]</box>" } ]
85
coco_flip/train2014/COCO_train2014_000000578037.jpg
427
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>man in tank top skate boarding</ref>" }, { "from": "gpt", "value": "<ref>man in tank top skate boarding</ref><box>[462, 142, 618, 895]</box>" } ]
86
coco_flip/train2014/COCO_train2014_000000578009.jpg
448
500
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>suitcase on the bottom center with three items in it</ref>" }, { "from": "gpt", "value": "<ref>suitcase on the bottom center with three items in it</ref><box>[377, 577, 686, 998]</box>" } ]
87
coco_flip/train2014/COCO_train2014_000000578009.jpg
448
500
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>very</ref>" }, { "from": "gpt", "value": "<ref>very</ref><box>[377, 577, 686, 998]</box>" } ]
88
coco_flip/train2014/COCO_train2014_000000577948.jpg
429
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>broc laying sideways next to the chicken fun game have a good one</ref>" }, { "from": "gpt", "value": "<ref>broc laying sideways next to the chicken fun game have a good one</ref><box>[0, 477, 432, 693]</box>" } ]
89
coco_flip/train2014/COCO_train2014_000000577830.jpg
480
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>right woman</ref>" }, { "from": "gpt", "value": "<ref>right woman</ref><box>[565, 468, 919, 985]</box>" } ]
90
coco_flip/train2014/COCO_train2014_000000577808.jpg
612
612
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>top slicd</ref>" }, { "from": "gpt", "value": "<ref>top slicd</ref><box>[250, 78, 959, 527]</box>" } ]
91
coco_flip/train2014/COCO_train2014_000000577657.jpg
640
536
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>third glass from back</ref>" }, { "from": "gpt", "value": "<ref>third glass from back</ref><box>[129, 79, 483, 701]</box>" } ]
92
coco_flip/train2014/COCO_train2014_000000577657.jpg
640
536
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>class in middle top</ref>" }, { "from": "gpt", "value": "<ref>class in middle top</ref><box>[298, 95, 657, 765]</box>" } ]
93
coco_flip/train2014/COCO_train2014_000000577657.jpg
640
536
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>third in from right look at the stems of the cup</ref>" }, { "from": "gpt", "value": "<ref>third in from right look at the stems of the cup</ref><box>[459, 77, 818, 904]</box>" } ]
94
coco_flip/train2014/COCO_train2014_000000577657.jpg
640
536
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>second glass from front</ref>" }, { "from": "gpt", "value": "<ref>second glass from front</ref><box>[466, 128, 911, 986]</box>" } ]
95
coco_flip/train2014/COCO_train2014_000000577657.jpg
640
536
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>second from right glass</ref>" }, { "from": "gpt", "value": "<ref>second from right glass</ref><box>[466, 128, 911, 986]</box>" } ]
96
coco_flip/train2014/COCO_train2014_000000577455.jpg
429
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>far left bird</ref>" }, { "from": "gpt", "value": "<ref>far left bird</ref><box>[0, 125, 257, 683]</box>" } ]
97
coco_flip/train2014/COCO_train2014_000000577447.jpg
375
500
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>orange in bowl on right sticking over side</ref>" }, { "from": "gpt", "value": "<ref>orange in bowl on right sticking over side</ref><box>[760, 201, 999, 510]</box>" } ]
98
coco_flip/train2014/COCO_train2014_000000577362.jpg
428
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>horse on right</ref>" }, { "from": "gpt", "value": "<ref>horse on right</ref><box>[543, 359, 726, 972]</box>" } ]
99
coco_flip/train2014/COCO_train2014_000000577362.jpg
428
640
[ { "from": "human", "value": "<image>\nPlease provide the bounding box coordinate of the region this sentence describes: <ref>horse on left</ref>" }, { "from": "gpt", "value": "<ref>horse on left</ref><box>[192, 540, 369, 936]</box>" } ]
End of preview.
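The gpt turns in the conversations column embed the target region as <ref>...</ref><box>[x1, y1, x2, y2]</box>. The box coordinates appear to be normalized to a 0-1000 range rather than pixels (they reach 998-999 regardless of image size and exceed the stated width in several rows); that convention is an inference from the preview, not something the dataset documents. Assuming it holds, a minimal parsing sketch:

import re

# Assumes 0-1000 normalized boxes (inferred from the preview, not documented).
BOX_RE = re.compile(r"<ref>(.*?)</ref><box>\[(\d+), (\d+), (\d+), (\d+)\]</box>")

def parse_gpt_box(row):
    """Return the referred phrase and its box converted to pixel coordinates."""
    gpt_turn = next(t["value"] for t in row["conversations"] if t["from"] == "gpt")
    ref, x1, y1, x2, y2 = BOX_RE.match(gpt_turn).groups()
    w, h = row["width"], row["height"]
    return ref, [int(x1) * w / 1000, int(y1) * h / 1000,
                 int(x2) * w / 1000, int(y2) * h / 1000]

row = {  # values copied from the first preview row
    "height": 640,
    "width": 427,
    "conversations": [
        {"from": "human", "value": "<image>\nPlease provide the bounding box coordinate "
                                   "of the region this sentence describes: <ref>woman in "
                                   "gray shirt facing camera on left</ref>"},
        {"from": "gpt", "value": "<ref>woman in gray shirt facing camera on left</ref>"
                                 "<box>[304, 408, 495, 739]</box>"},
    ],
}
print(parse_gpt_box(row))  # phrase plus a pixel box of roughly [129.8, 261.1, 211.4, 473.0]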

No dataset card yet.

Downloads last month: 27