Abstract
Human and machine performance in acoustic scene classification is examined through a parallel experiment using the TUT Acoustic Scenes 2016 dataset. The machine learning perspective is presented based on the systems submitted to the 2016 challenge on Detection and Classification of Acoustic Scenes and Events (DCASE). Human performance, assessed through a listening experiment, was found to be significantly lower than machine performance. Test subjects exhibited different behavior throughout the experiment, leading to significant differences in performance between groups of subjects. An expert listener trained for the task obtained accuracy similar to the average of the submitted systems, comparable also to that reported in previous studies of human ability to recognize everyday acoustic scenes.
Original language | English
---|---
Title of host publication | 2017 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA)
Publisher | IEEE Computer Society
Pages | 319–323
ISBN (Print) | 978-1-5386-1631-4
Publication status | Published - 2017
Publication type | A4 Article in conference proceedings
Event | IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA)
Keywords
- acoustic scene classification
- machine learning
- human performance
- listening experiment
Publication forum classification
- Publication forum level 1