Abstract
Audio captioning is a novel field of multi-modal translation: the task of creating a textual description of the content of an audio signal (e.g. "people talking in a big room"). The creation of a dataset for this task requires a considerable amount of work, rendering crowdsourcing a very attractive option. In this paper we present a three-step framework for crowdsourcing an audio captioning dataset, based on concepts and practices followed for the creation of widely used image captioning and machine translation datasets. During the first step, initial captions are gathered. A grammatically corrected and/or rephrased version of each initial caption is obtained in the second step. Finally, the initial and edited captions are rated, keeping the top ones for the produced dataset. We objectively evaluate the impact of our framework during the process of creating an audio captioning dataset, in terms of the diversity and the amount of typographical errors in the obtained captions. The obtained results show that the resulting dataset has fewer typographical errors than the initial captions, and on average each sound in the produced dataset has captions with a Jaccard similarity of 0.24, roughly equivalent to two ten-word captions having four words with the same root in common, indicating that the captions are dissimilar while they still contain some of the same information.
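The reported Jaccard similarity can be illustrated with a minimal sketch. The abstract's figure accounts for words sharing the same root; the sketch below compares plain lowercased word sets instead (an assumption, no stemming), and the example captions are invented for illustration:

```python
def jaccard_similarity(caption_a, caption_b):
    """Jaccard similarity between the word sets of two captions:
    |intersection| / |union|. Simplification: exact lowercased words,
    not word roots as in the paper's measure."""
    words_a = set(caption_a.lower().split())
    words_b = set(caption_b.lower().split())
    return len(words_a & words_b) / len(words_a | words_b)

# Two ten-word captions sharing four words ("people", "talking",
# "a", "room"): |intersection| = 4, |union| = 10 + 10 - 4 = 16,
# so the similarity is 4 / 16 = 0.25, close to the reported 0.24.
a = "people are talking loudly in a big and empty room"
b = "a crowd of people talking inside some large echoing room"
print(jaccard_similarity(a, b))  # → 0.25
```

Higher values would indicate near-duplicate captions; values near zero would indicate captions describing different aspects of the sound.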
Original language | English |
---|---|
Title of host publication | Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019) |
ISBN (Electronic) | 978-0-578-59596-2 |
Publication status | Published - 26 Oct 2019 |
Publication type | A4 Article in conference proceedings |
Event | Workshop on Detection and Classification of Acoustic Scenes and Events, New York, United States, 25 Oct 2019 → 26 Oct 2019 |
Workshop
Workshop | Workshop on Detection and Classification of Acoustic Scenes and Events |
---|---|
Abbreviated title | DCASE |
Country/Territory | United States |
City | New York |
Period | 25/10/19 → 26/10/19 |
Keywords
- audio captioning
- captioning
- amt
- crowdsourcing
- Amazon Mechanical Turk
Publication forum classification
- No publication forum level
Fingerprint
Dive into the research topics of 'Crowdsourcing a Dataset of Audio Captions'. Together they form a unique fingerprint.
Datasets
- Clotho dataset
  Drossos, K. (Creator), Lipping, S. (Creator) & Virtanen, T. (Creator), 26 May 2021
  Dataset
- Pre-trained weights for the baseline DNN system of DCASE 2020 automated audio captioning task
  Drossos, K. (Creator), Lipping, S. (Creator) & Virtanen, T. (Creator), 5 Mar 2020
  DOI: 10.5281/zenodo.3697687, https://github.com/audio-captioning/dcase-2020-baseline
  Dataset
Activities
- 1 Supervisor of bachelor student
- Multimodal audio dataset creation with crowdsourcing
  Konstantinos Drossos (Examiner)
  2019 · Activity: Evaluation, examination and supervision › Supervisor of bachelor student