Update task categories to `video-text-to-text`
This PR updates the `task_categories` metadata field to `video-text-to-text`. This more accurately reflects the dataset's purpose as described in the paper abstract, which involves understanding "gesture-speech-text associations" using a "tri-modal video-gesture-speech-text representation". The existing categories `feature-extraction` and `text-to-video` are replaced because they do not fully capture the multimodal-understanding nature of the dataset's primary tasks, such as gesture word spotting and cross-modal retrieval.
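As a quick sanity check on the change, the effect on the card metadata can be sketched with a small standard-library snippet that pulls `task_categories` out of README-style YAML front matter. The `FRONT_MATTER` string and the `parse_list_field` helper below are illustrative assumptions, not part of this repository; a real check would parse the full README with a YAML library.

```python
# Illustrative front-matter excerpt after this PR; the real README
# contains more fields (size_categories, tags, pretty_name, ...).
FRONT_MATTER = """---
task_categories:
- video-text-to-text
pretty_name: AVS-Spot
---"""

def parse_list_field(text: str, field: str) -> list[str]:
    """Naively extract a YAML list field from front matter.

    Assumes the flat `field:` / `- item` layout used in dataset
    cards; this is not a general YAML parser.
    """
    items, in_field = [], False
    for line in text.splitlines():
        if line.rstrip() == f"{field}:":
            in_field = True
        elif in_field and line.startswith("- "):
            items.append(line[2:].strip())
        else:
            in_field = False
    return items

print(parse_list_field(FRONT_MATTER, "task_categories"))
# → ['video-text-to-text']
```

After the change, the old categories no longer appear in the parsed list, which is exactly what the diff below encodes.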
README.md (CHANGED)

```diff
@@ -8,8 +8,7 @@ size_categories:
 source_datasets:
 - extended
 task_categories:
-- feature-extraction
-- text-to-video
+- video-text-to-text
 pretty_name: AVS-Spot
 tags:
 - co-speech gestures
@@ -18,7 +17,7 @@ tags:
 - multimodal-learning
 ---
 
-# Dataset
+# Dataset Card for AVS-Spot Benchmark
 
 
 This dataset is associated with the paper: "Understanding Co-Speech Gestures in-the-wild"
@@ -33,9 +32,9 @@ This dataset is associated with the paper: "Understanding Co-Speech Gestures in-
 
 We present **JEGAL**, a **J**oint **E**mbedding space for **G**estures, **A**udio and **L**anguage. Our semantic gesture representations can be used to perform multiple downstream tasks such as cross-modal retrieval, spotting gestured words, and identifying who is speaking solely using gestures.
 
-## π Table
+## π Table of Contents
 
-- [Dataset
+- [Dataset Card for AVS-Spot Benchmark](#dataset-card-for-avs-spot-benchmark)
 - [π Table of Contents](#π-table-of-contents)
 - [π What is the AVS-Spot Benchmark?](#π-what-is-the-AVS-Spot-Benchmark?)
 - [Summary](#summary)
@@ -50,7 +49,6 @@ We present **JEGAL**, a **J**oint **E**mbedding space for **G**estures, **A**udi
 
 
 
-
 ## π What is the AVS-Spot Benchmark?
 
 ### Summary
```