Dataset preview (qrels): each row pairs a query-id (string, 100 unique values) with a corpus-id (string) and a relevance score (float, 0 or 1).

query-id    corpus-id    score
q_15406     c_3171787    0
q_15406     c_3171786    0
q_15406     c_3171785    1
q_1239      c_257352     0

BIRCO-Relic

An MTEB dataset
Massive Text Embedding Benchmark

Retrieval task using the RELIC dataset from BIRCO. This dataset contains 100 queries, each an excerpt from a literary analysis with a missing quotation (indicated by [masked sentence(s)]). Each query has a candidate pool of 50 passages. The objective is to retrieve the passage that best completes the literary analysis.

Task category: t2t
Domains: Fiction
Reference: https://github.com/BIRCO-benchmark/BIRCO
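
For a quick look at the relevance judgments outside of mteb, the data can also be loaded with the Hugging Face datasets library. This is a minimal sketch; the repository id and split name below are assumptions based on this card, so substitute the values shown on this dataset page.

from datasets import load_dataset

# Assumed repository id and split; replace with the values shown on this dataset page.
qrels = load_dataset("mteb/BIRCO-Relic", split="test")

# Each row pairs a query id with a candidate passage id and a 0/1 relevance score.
print(qrels.column_names)  # expected: ['query-id', 'corpus-id', 'score']
print(qrels[0])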

How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

import mteb

# get_tasks returns a list of task objects, here containing only BIRCO-Relic
tasks = mteb.get_tasks(["BIRCO-Relic"])
evaluator = mteb.MTEB(tasks)

# YOUR_MODEL is a placeholder for a model name known to mteb,
# e.g. "sentence-transformers/all-MiniLM-L6-v2"
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)

To learn more about how to run models on MTEB tasks, check out the GitHub repository.
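
The run call also returns the computed results, so the scores can be inspected in code instead of read from the JSON files that mteb writes out. A minimal sketch, assuming the result objects expose task_name and scores as in recent mteb versions:

results = evaluator.run(model)
for res in results:
    # For retrieval tasks such as BIRCO-Relic, the main score is nDCG@10.
    print(res.task_name, res.scores)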

Citation

If you use this dataset, please cite the dataset as well as MTEB, as this dataset likely includes additional processing as part of the MMTEB contribution.


@misc{wang2024bircobenchmarkinformationretrieval,
  archiveprefix = {arXiv},
  author = {Xiaoyue Wang and Jianyou Wang and Weili Cao and Kaicheng Wang and Ramamohan Paturi and Leon Bergen},
  eprint = {2402.14151},
  primaryclass = {cs.IR},
  title = {BIRCO: A Benchmark of Information Retrieval Tasks with Complex Objectives},
  url = {https://arxiv.org/abs/2402.14151},
  year = {2024},
}


@article{enevoldsen2025mmtebmassivemultilingualtext,
  title={MMTEB: Massive Multilingual Text Embedding Benchmark},
  author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2502.13595},
  year={2025},
  url={https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}

Dataset Statistics

The following are the descriptive statistics for this task. They can also be obtained programmatically using:

import mteb

task = mteb.get_task("BIRCO-Relic")

desc_stats = task.metadata.descriptive_stats
{
    "test": {
        "num_samples": 5123,
        "number_of_characters": 2504348,
        "num_documents": 5023,
        "min_document_length": 13,
        "average_document_length": 478.342823014135,
        "max_document_length": 2579,
        "unique_documents": 5023,
        "num_queries": 100,
        "min_query_length": 588,
        "average_query_length": 1016.32,
        "max_query_length": 1472,
        "unique_queries": 100,
        "none_queries": 0,
        "num_relevant_docs": 5062,
        "min_relevant_docs_per_query": 50,
        "average_relevant_docs_per_query": 1.0,
        "max_relevant_docs_per_query": 51,
        "unique_relevant_docs": 5023,
        "num_instructions": null,
        "min_instruction_length": null,
        "average_instruction_length": null,
        "max_instruction_length": null,
        "unique_instructions": null,
        "num_top_ranked": null,
        "min_top_ranked_per_query": null,
        "average_top_ranked_per_query": null,
        "max_top_ranked_per_query": null
    }
}
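
To reproduce the statistics block above yourself, the dictionary returned by descriptive_stats can be serialized directly. A minimal sketch, assuming descriptive_stats is a plain dictionary as the output above suggests:

import json
import mteb

task = mteb.get_task("BIRCO-Relic")
print(json.dumps(task.metadata.descriptive_stats, indent=4))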

This dataset card was automatically generated using MTEB
