Training Sound Event Detection with Soft Labels from Crowdsourced Annotations

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-reviewed


Abstract

In this paper, we study the use of soft labels to train a system for sound event detection (SED). Soft labels can result from annotations that account for human uncertainty about categories, or emerge naturally as a representation of multiple annotator opinions. Converting annotations to hard labels yields unambiguous categories for training, at the cost of losing the details of the label distribution. This work investigates how soft labels can be used and what benefits they bring when training a SED system. The results show that the system learns information about the activity of the sounds reflected in the soft labels and is able to detect sounds that are missed in the typical binary-target training setup. We also release a new dataset produced through crowdsourcing, containing temporally strong labels for sound events in real-life recordings, with both soft and hard labels.
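The contrast between soft and hard labels described in the abstract can be sketched with binary cross-entropy, which accepts targets anywhere in [0, 1]. The example below is a minimal illustration, not the paper's actual training pipeline; the annotator votes and prediction value are hypothetical.

```python
import math

def bce(p, t):
    """Binary cross-entropy for a prediction p against a target t.
    Works for both hard targets (t in {0, 1}) and soft targets (t in [0, 1])."""
    eps = 1e-7
    p = min(max(p, eps), 1 - eps)
    return -(t * math.log(p) + (1 - t) * math.log(1 - p))

# Hypothetical example: four annotators judge whether a sound event is
# active in one time frame; three of four say yes.
votes = [1, 1, 1, 0]
soft_label = sum(votes) / len(votes)        # 0.75: keeps the opinion distribution
hard_label = 1 if soft_label >= 0.5 else 0  # majority vote: 1, distribution lost

prediction = 0.7  # model's estimated activity for this frame

loss_soft = bce(prediction, soft_label)  # penalizes deviation from 0.75
loss_hard = bce(prediction, hard_label)  # pushes the prediction toward 1
```

With the soft target, the loss is minimized when the prediction matches the annotator agreement level (0.75) rather than being driven all the way to 1, which is how the label distribution survives into training.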
Original language: English
Title of host publication: ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Publisher: IEEE
ISBN (Electronic): 978-1-7281-6327-7
DOIs
Publication status: Published - 2023
Publication type: A4 Article in conference proceedings
Event: IEEE International Conference on Acoustics, Speech, and Signal Processing - Rhodes Island, Greece
Duration: 4 Jun 2023 - 10 Jun 2023

Publication series

Name: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing
ISSN (Electronic): 2379-190X

Conference

Conference: IEEE International Conference on Acoustics, Speech, and Signal Processing
Country/Territory: Greece
City: Rhodes Island
Period: 4/06/23 - 10/06/23

Publication forum classification

  • Publication forum level 2

