Improve dataset card with description, links, and tags

#2 by nielsr HF Staff - opened
Files changed (1)
  1. README.md +14 -36
README.md CHANGED
@@ -1,44 +1,22 @@
  ---
- license: mit
  ---

- We use Stdio input/output format here. For example, for the task to calculate the sum of a list, the input and output are in the following format:
- ```python
- input = "5\n1 2 3 4 5\n"
- output = "15"
- ```
- CodeContests and CodeForces are using this format, however, MBPP and part of LiveCodeBench are using functional input/output format, such like
- ```python
- assert sum_function([1, 2, 3, 4, 5]) == 15
- ```
- In this project, we have converted the the functional format to the Stdio format to achieve consistency.

- [Paper](https://arxiv.org/abs/2506.03136) | [Code](https://github.com/Gen-Verse/CURE)

-
- # Citation
-
- ```
- @article{wang2025cure,
- title={Co-Evolving LLM Coder and Unit Tester via Reinforcement Learning},
- author={Wang, Yinjie and Yang, Ling and Tian, Ye and Shen, Ke and Wang, Mengdi},
- journal={arXiv preprint arXiv:2506.03136},
- year={2025}
- }
-
-
- @article{li2022alphacode,
- author = {Yujia Li and David Choi and Junyoung Chung and Nate Kushman and Julian Schrittwieser and Rémi Leblond and Tom Eccles and James Keeling and Felix Gimeno and Agustin Dal Lago and Thomas Hubert and Peter Choy and Cyprien de Masson d'Autume and Igor Babuschkin and Xinyun Chen and Po-Sen Huang and Johannes Welbl and Sven Gowal and Alexey Cherepanov and James Molloy and Daniel J. Mankowitz and Esme Sutherland Robson and Pushmeet Kohli and Nando de Freitas and Koray Kavukcuoglu and Oriol Vinyals},
- title = {Competition-level code generation with AlphaCode},
- journal = {Science},
- volume = {378},
- number = {6624},
- pages = {1092--1097},
- year = {2022},
- doi = {10.1126/science.abq1158},
- url = {https://www.science.org/doi/10.1126/science.abq1158}
- }
- ```
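The removed card text above describes converting functional-style tests (the `assert sum_function(...)` example) into the Stdio convention used by CodeContests and CodeForces. Below is a minimal sketch of what such a wrapper can look like, assuming the whitespace-separated input layout from the card's example; the helper names (`solve`, `sum_function`) are illustrative and not taken from the dataset's code:

```python
import sys

def sum_function(nums):
    # Functional-style task: return the sum of a list.
    return sum(nums)

def solve():
    # Stdio-style wrapper: read "n" followed by n integers from stdin and
    # print the result, matching the card's example
    # (input "5\n1 2 3 4 5\n" -> output "15").
    data = sys.stdin.read().split()
    n = int(data[0])
    nums = [int(x) for x in data[1:1 + n]]
    print(sum_function(nums))

if __name__ == "__main__":
    solve()
```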
 
  ---
+ tags:
+ - model_hub_mixin
+ - pytorch_model_hub_mixin
+ pipeline_tag: feature-extraction
+ library_name: pytorch
  ---

+ # FuseLIP: Multimodal Embeddings via Early Fusion of Discrete Tokens

+ The model was presented in the paper [FuseLIP: Multimodal Embeddings via Early Fusion of Discrete Tokens](https://arxiv.org/abs/2506.03096).

+ # Paper abstract

+ Contrastive language-image pre-training aligns the features of text-image pairs in a common latent space via distinct encoders for each modality. While this approach achieves impressive performance in several zero-shot tasks, it cannot natively handle multimodal inputs, i.e., encoding image and text into a single feature vector. As a remedy, it is common practice to use additional modules to merge the features extracted by the unimodal encoders. In this work, we present FuseLIP, an alternative architecture for multimodal embedding. Leveraging recent progress in discrete image tokenizers, we propose to use a single transformer model which operates on an extended vocabulary of text and image tokens. This early fusion approach allows the different modalities to interact at each depth of encoding and obtain richer representations compared to common late fusion. We collect new datasets for multimodal pre-training and evaluation, designing challenging tasks for multimodal encoder models. We show that FuseLIP outperforms other approaches in multimodal embedding tasks such as VQA and text-guided image transformation retrieval, while being comparable to baselines on unimodal tasks.

+ This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
+ - Code: https://github.com/chs20/fuselip
+ - Paper: https://arxiv.org/abs/2506.03096
+ - Docs: https://github.com/chs20/fuselip

+ The model can be used for feature extraction.
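The new card notes that the model was pushed with the PyTorchModelHubMixin integration and can be used for feature extraction. Below is a minimal, self-contained sketch of that mixin pattern; `TinyEncoder` is a toy stand-in rather than the FuseLIP architecture, and the commented class/repo names for loading the real checkpoint are assumptions, not confirmed by this card:

```python
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class TinyEncoder(nn.Module, PyTorchModelHubMixin):
    """Toy encoder illustrating the mixin; not the FuseLIP model."""

    def __init__(self, vocab_size: int = 1000, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # Mean-pool token embeddings into a single feature vector.
        return self.proj(self.embed(token_ids).mean(dim=1))

# Classes inheriting the mixin gain save_pretrained / push_to_hub / from_pretrained.
# Loading the real checkpoint would look roughly like:
#   model = FuseLIP.from_pretrained("<repo-id>")  # class and repo id are assumptions
model = TinyEncoder()
features = model(torch.randint(0, 1000, (2, 16)))  # (batch, seq_len) token ids
print(features.shape)  # torch.Size([2, 64])
```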