EEG Image Decode — Dataset and Checkpoints

This dataset accompanies the NeurIPS 2024 paper:

Visual Decoding and Reconstruction via EEG Embeddings with Guided Diffusion
Dongyang Li · Chen Wei · Shiying Li · Jiachen Zou · Quanying Liu


It packages the preprocessed EEG recordings, stimulus-image visual features, VAE latent codes, trained EEG embeddings, fine-tuned checkpoints, and generated images needed to reproduce both the image retrieval and image reconstruction experiments.

Dataset Summary

THINGS-EEG

EEG signals were recorded from 10 subjects while they passively viewed images from the THINGS database:

| Split    | Concepts | Images / concept | Total images |
|----------|----------|------------------|--------------|
| Training | 1,654    | 10               | 16,540       |
| Test     | 200      | 1                | 200          |

Each training image was repeated 4 times per subject; each test image was repeated 80 times per subject. EEG was recorded at 1,000 Hz and downsampled to 250 Hz during preprocessing. Each preprocessed array has shape (n_conditions, n_repetitions, n_channels, n_timepoints).
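For example, a common first step is to average over the repetition axis to obtain one trial-averaged response per image condition (a minimal sketch, assuming the arrays load as plain NumPy arrays, as in the Quick Start below):

```python
import numpy as np

# Average over the repetition axis: one trial-averaged EEG response
# per test image condition.
eeg_test = np.load("Preprocessed_data_250Hz/sub-01/preprocessed_eeg_test.npy")
print(eeg_test.shape)            # (200, n_rep, 63, 250)
eeg_avg = eeg_test.mean(axis=1)  # (200, 63, 250)
```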

THINGS-MEG

MEG signals were recorded while subjects viewed images from the same THINGS database:

| Split    | Concepts | Images / concept | Total images |
|----------|----------|------------------|--------------|
| Training | 1,654    | 12               | 19,848       |
| Test     | 200      | 1                | 200          |

Each training image was repeated 1 time per subject; each test image was repeated 12 times per subject.

File & Directory Reference

Preprocessed_data_250Hz/

Preprocessed EEG data (250 Hz) for 10 subjects (sub-01 to sub-10). Each subject folder contains:

| File | Description |
|------|-------------|
| preprocessed_eeg_training.npy | EEG trials for the 16,540 training image conditions. Shape: (16540, n_rep, 63, 250) — conditions × repetitions × channels × timepoints |
| preprocessed_eeg_test.npy | EEG trials for the 200 test image conditions. Shape: (200, n_rep, 63, 250) |

The condition index maps to images via a nested loop: the outer loop runs over the 1,654 alphabetically sorted concepts, the inner loop over the 10 images per concept. E.g., index 0 → 00001_aardvark/aardvark_01b.jpg, index 10 → 00002_abacus/abacus_01b.jpg.
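A minimal sketch of that mapping (the helper and the concept_dirs list are illustrative, not part of the repository):

```python
# Map a training condition index to its concept folder and within-concept
# image number, following the nested-loop ordering described above.
# concept_dirs: the 1,654 training concept folders, sorted alphabetically.
def condition_to_image(index, concept_dirs, images_per_concept=10):
    concept = concept_dirs[index // images_per_concept]  # e.g. "00001_aardvark"
    image_no = index % images_per_concept                # 0-based position within the concept
    return concept, image_no
```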

emb_eeg/

Per-subject EEG feature embeddings extracted with the ATMS encoder (pre-trained).
File naming: ATM_S_eeg_features_sub-{XX}.pt

These .pt files can be loaded directly with torch.load() and used as input to the diffusion prior or retrieval head without re-running EEG encoder training.
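For example (a minimal sketch; whether each file holds a bare tensor or a small dict is worth checking):

```python
import torch

# Load precomputed ATMS EEG embeddings for subject 1 (no encoder training needed).
eeg_features = torch.load("emb_eeg/ATM_S_eeg_features_sub-01.pt", map_location="cpu")
print(type(eeg_features), getattr(eeg_features, "shape", None))
```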

fintune_ckpts/

Fine-tuned model checkpoints from the Generation pipeline (ATMS encoder + Diffusion Prior). Useful for evaluation-only runs or to resume training.

preprocessed_MEG/

Preprocessed MEG recordings (same stimulus set, different modality). Enables cross-modal decoding experiments or MEG-specific benchmarks.

ViT-H-14_features_train.pt (74.5 MB)

CLIP ViT-H-14 image features for all 16,540 training images. Shape: (16540, feature_dim). Used as supervision targets in ATMS contrastive training and as the diffusion prior's target space.

ViT-H-14_features_test.pt (1.64 MB)

CLIP ViT-H-14 image features for all 200 test images. Shape: (200, feature_dim). Used to compute retrieval accuracy (200-way zero-shot).
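A minimal sketch of that 200-way evaluation, assuming one decoded EEG embedding per test condition, row-aligned with the image features (eeg_feats below is a random placeholder, not part of this dataset):

```python
import torch
import torch.nn.functional as F

img_feats = torch.load("ViT-H-14_features_test.pt", map_location="cpu").float()  # (200, D)
eeg_feats = torch.randn(200, img_feats.shape[-1])  # placeholder: decoded EEG embeddings

# Cosine similarity between every decoded embedding and every image feature.
sim = F.normalize(eeg_feats, dim=-1) @ F.normalize(img_feats, dim=-1).T  # (200, 200)

# Top-1 accuracy: the true image should be the most similar of all 200 candidates.
top1 = (sim.argmax(dim=-1) == torch.arange(sim.size(0))).float().mean()
print(f"200-way top-1 retrieval accuracy: {top1:.3f}")
```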

train_image_latent_512.pt

SDXL-VAE latent codes for all 16,540 training images at resolution 512×512. Shape: (16540, 4, 64, 64). Used as regression targets in the low-level reconstruction branch.

test_image_latent_512.pt (13.1 MB)

SDXL-VAE latent codes for all 200 test images at resolution 512×512. Shape: (200, 4, 64, 64). Used for low-level reconstruction evaluation.
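To sanity-check the latents, they can be decoded back to pixels with the SDXL VAE from diffusers (a sketch; whether the stored latents already include the VAE scaling factor is an assumption to verify):

```python
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sdxl-vae")
latents = torch.load("test_image_latent_512.pt", map_location="cpu")  # (200, 4, 64, 64)

with torch.no_grad():
    # If reconstructions look washed out, the latents may need to be divided
    # by vae.config.scaling_factor before decoding.
    imgs = vae.decode(latents[:4].float()).sample  # (4, 3, 512, 512), roughly in [-1, 1]
imgs = (imgs / 2 + 0.5).clamp(0, 1)                # rescale to [0, 1] for visualization
```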

images_set.tar.gz — THINGS-EEG2 Stimulus Images

Compressed archive of the full THINGS-EEG2 stimulus image set, sourced from the THINGS database.

Extract with:

```bash
tar -xzf images_set.tar.gz
```

preprocessed_MEG/images_set.tar.gz — THINGS-MEG Stimulus Images

Compressed archive of the stimulus image set used in the THINGS-MEG experiment, also sourced from the THINGS database but with a different train/test split than the EEG version. Extract with:

```bash
tar -xzf preprocessed_MEG/images_set.tar.gz
```

generated_imgs.tar.gz (8.13 GB)

Reconstructed images produced by the full Generation pipeline (high-level, low-level, and mixed). Organized per subject and per method for direct metric computation without re-running inference.

Quick Start

```python
import numpy as np
import torch

# Load preprocessed EEG for subject 1
eeg_train = np.load("Preprocessed_data_250Hz/sub-01/preprocessed_eeg_training.npy")
eeg_test  = np.load("Preprocessed_data_250Hz/sub-01/preprocessed_eeg_test.npy")
print(eeg_train.shape)  # (16540, n_rep, 63, 250)

# Load ViT-H-14 image features
img_feats_test  = torch.load("ViT-H-14_features_test.pt")   # (200, D)
img_feats_train = torch.load("ViT-H-14_features_train.pt")  # (16540, D)

# Load VAE latents
latents_test = torch.load("test_image_latent_512.pt")       # (200, 4, 64, 64)
```

Citations

If you use this dataset, please cite our relevant papers:

```bibtex
@inproceedings{li2024visual,
  author    = {Li, Dongyang and Wei, Chen and Li, Shiying and Zou, Jiachen and Liu, Quanying},
  title     = {Visual Decoding and Reconstruction via {EEG} Embeddings with Guided Diffusion},
  booktitle = {Advances in Neural Information Processing Systems},
  volume    = {37},
  pages     = {102822--102864},
  year      = {2024},
  url       = {https://proceedings.neurips.cc/paper_files/paper/2024/file/ba5f1233efa77787ff9ec015877dbd1f-Paper-Conference.pdf}
}

@inproceedings{li2025brainflora,
  author    = {Li, Dongyang and Qin, Haoyang and Wu, Mingyang and Wei, Chen and Liu, Quanying},
  title     = {{BrainFLORA}: Uncovering Brain Concept Representation via Multimodal Neural Embeddings},
  booktitle = {Proceedings of the 33rd ACM International Conference on Multimedia},
  pages     = {5577--5586},
  year      = {2025}
}

@inproceedings{li2026mindpilot,
  author    = {Li, Dongyang and Xie, Kunpeng and Wu, Mingyang and Kong, Yiwei and Tang, Jiahua and Qin, Haoyang and Wei, Chen and Liu, Quanying},
  title     = {{MindPilot}: Closed-loop Visual Stimulation Optimization for Brain Modulation with {EEG}-guided Diffusion},
  booktitle = {The Fourteenth International Conference on Learning Representations},
  year      = {2026},
  url       = {https://openreview.net/forum?id=7jdmXx869Q}
}
```


Attribution & Third-Party Licenses

This dataset incorporates data from three upstream sources. Their respective licenses and attribution requirements are listed below.

1. THINGS-EEG2 — EEG Recordings

The EEG recordings (Preprocessed_data_250Hz/, emb_eeg/) are derived from the THINGS-EEG2 dataset:

  • Gifford, A.T., Dwivedi, K., Roig, G., & Cichy, R.M. EEG recordings to 22,248 images from the THINGS database. OSF, 2022.
  • https://osf.io/3jk45/
  • License: CC-BY 4.0

Original authors: Alessandro T. Gifford, Kshitij Dwivedi, Gemma Roig, Radoslaw M. Cichy

Changes made: EEG data were downsampled to 250 Hz, epoched, baseline-corrected, and optionally whitened (ZCA). Per-subject arrays were exported to .npy / .pt format for use with PyTorch-based pipelines.

2. THINGS-MEG — MEG Recordings

The MEG recordings (preprocessed_MEG/) originate from the THINGS-MEG dataset published on OpenNeuro:

  • Hebart, M.N., Contier, O., Teichmann, L., et al. THINGS-data, a multimodal collection of large-scale datasets for investigating object representations in human brain and behavior. eLife, 2023.
  • License: CC0 1.0

CC0 means the original authors have waived all copyright and related rights. No attribution is legally required, but we acknowledge their contribution here.

3. THINGS Image Database — Stimulus Images

The stimulus images (images_set/) are from the THINGS object concept database:

  • Hebart, M.N., Dickter, A.H., Kidder, A., Kwok, W.Y., Corriveau, A., Van Wicklin, C., & Baker, C.I. THINGS: A database of 1,854 object concepts and more than 26,000 naturalistic object images. PLOS ONE, 2019.
  • License: Other (see OSF project page for full terms)

The THINGS images are made available for non-commercial academic research. Please review the full terms on the OSF project page before using them.

License of This Dataset

This dataset as a whole is released under the Creative Commons Attribution 4.0 International (CC-BY 4.0) license, matching the most restrictive of the upstream Creative Commons licenses (CC-BY 4.0 from THINGS-EEG2). The THINGS-MEG component (CC0) is compatible with this. The THINGS stimulus images are not covered by this grant: please additionally respect the THINGS database's own usage terms.

You are free to:

  • Share — copy and redistribute the material in any medium or format
  • Adapt — remix, transform, and build upon the material for any purpose, even commercially

Under the following terms:

  • Attribution — You must give appropriate credit to the original authors listed above, provide links to the original datasets, and indicate if changes were made.

Full license text: https://creativecommons.org/licenses/by/4.0/legalcode
