nielsr (HF Staff) committed on
Commit 7384c54 · verified · 1 parent: 2ef6aab

Update task categories to `video-text-to-text`


This PR updates the `task_categories` metadata field to `video-text-to-text`. This change more accurately reflects the dataset's purpose as described in the paper abstract, which involves understanding "gesture-speech-text associations" using a "tri-modal video-gesture-speech-text representation". The existing categories `feature-extraction` and `text-to-video` are replaced as they do not fully capture the multimodal understanding nature of the dataset's primary tasks like gesture word spotting and cross-modal retrieval.
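For cases where editing the card by hand is inconvenient, the same metadata change can be scripted against a local copy of the README. A minimal sketch (the helper `update_task_categories` is hypothetical, and it assumes the front matter lists `task_categories` as block-style `- item` entries, as in this README):

```python
import re

def update_task_categories(readme_text: str, new_categories: list[str]) -> str:
    """Replace the `task_categories` list in a README's YAML front matter.

    Assumes `task_categories:` is followed by block-style `- item` lines,
    which is how this dataset card formats the field.
    """
    # Match the key line plus the run of `- item` lines that follows it.
    pattern = re.compile(r"(task_categories:\n)((?:- .*\n)+)")
    replacement_items = "".join(f"- {c}\n" for c in new_categories)
    # Use a lambda so no characters in the replacement are treated as escapes.
    return pattern.sub(lambda m: m.group(1) + replacement_items, readme_text, count=1)

# Example mirroring this PR's change:
sample = (
    "---\n"
    "task_categories:\n"
    "- feature-extraction\n"
    "- text-to-video\n"
    "pretty_name: AVS-Spot\n"
    "---\n"
)
updated = update_task_categories(sample, ["video-text-to-text"])
```

The surrounding fields (here `pretty_name`) are left untouched, so the edit stays as narrow as the diff below.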

Files changed (1)
  1. README.md +4 -6
README.md CHANGED
@@ -8,8 +8,7 @@ size_categories:
 source_datasets:
 - extended
 task_categories:
-- feature-extraction
-- text-to-video
+- video-text-to-text
 pretty_name: AVS-Spot
 tags:
 - co-speech gestures
@@ -18,7 +17,7 @@ tags:
 - multimodal-learning
 ---
 
-# Dataset Card for AVS-Spot Benchmark
+# Dataset Card for AVS-Spot Benchmark
 
 
 This dataset is associated with the paper: "Understanding Co-Speech Gestures in-the-wild"
@@ -33,9 +32,9 @@ This dataset is associated with the paper: "Understanding Co-Speech Gestures in-
 
 We present **JEGAL**, a **J**oint **E**mbedding space for **G**estures, **A**udio and **L**anguage. Our semantic gesture representations can be used to perform multiple downstream tasks such as cross-modal retrieval, spotting gestured words, and identifying who is speaking solely using gestures.
 
-## 📋 Table of Contents
+## 📋 Table of Contents
 
-- [Dataset Card for AVS-Spot Benchmark](#dataset-card-for-avs-spot-benchmark)
+- [Dataset Card for AVS-Spot Benchmark](#dataset-card-for-avs-spot-benchmark)
 - [📋 Table of Contents](#📋-table-of-contents)
 - [📚 What is the AVS-Spot Benchmark?](#📚-what-is-the-AVS-Spot-Benchmark?)
 - [Summary](#summary)
@@ -50,7 +49,6 @@ We present **JEGAL**, a **J**oint **E**mbedding space for **G**estures, **A**udi
 
 
 
-
 ## 📚 What is the AVS-Spot Benchmark?
 
 ### Summary
 