TUT Sound Events 2018 - Ambisonic, Reverberant and Synthetic Impulse Response Dataset

Dataset

Description

This dataset consists of simulated reverberant first-order Ambisonic (FOA) format recordings with stationary point sources, each associated with a spatial coordinate. It comprises three sub-datasets with a) at most one, b) at most two, and c) at most three temporally overlapping sound events. Each sub-dataset has three cross-validation splits, each consisting of 240 recordings of about 30 seconds for the training split and 60 recordings of the same length for the testing split.

For each recording, the metadata file with the same name lists the sound event name, the temporal onset and offset times (in seconds), the spatial location in azimuth and elevation angles (in degrees), and the distance from the microphone (in meters).

The sound events are spatially placed within a room using the image source method. The simulated room is 10 x 8 x 4 meters, with reverberation times per octave band of [1.0, 0.8, 0.7, 0.6, 0.5, 0.4] s at band center frequencies of 125 Hz–4 kHz. The isolated sound events were taken from the DCASE 2016 Task 2 dataset, which contains 11 sound event classes: clearing throat, coughing, door knock, door slam, drawer, human laughter, keyboard, keys (put on a table), page turning, phone ringing, and speech. The sound events are randomly placed on a spatial grid with 10-degree resolution over the full azimuth range and elevation angles in [-60, 60) degrees, at a random distance of at least 1 meter from the microphone.

The license of the dataset can be found in the LICENSE file. The remaining nine zip files contain the datasets for each combination of split and overlap. For example, ov3_split1.zip contains the audio and metadata folders for the case of at most three temporally overlapping sound events (ov3) and the first cross-validation split (split1). Within each audio/metadata folder, training-split filenames have the 'train' prefix, while testing-split filenames have the 'test' prefix.

This dataset was collected as part of the work 'Sound event localization and detection of overlapping sources using convolutional recurrent neural network'.
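As a minimal sketch of how the per-recording metadata described above might be read, the snippet below parses a comma-separated metadata file into a list of event records. The exact column order and delimiter are assumptions inferred from the field description (event name, onset/offset in seconds, azimuth and elevation in degrees, distance in meters), not a confirmed specification of the dataset's file format.

```python
import csv
import io

# Hypothetical metadata layout, assumed from the description:
# event name, onset (s), offset (s), azimuth (deg), elevation (deg), distance (m)
SAMPLE = """\
speech,0.52,3.10,30,-20,1.8
doorslam,2.75,3.40,130,10,2.5
"""

def parse_metadata(text):
    """Parse one metadata file's text into a list of event dicts."""
    events = []
    for row in csv.reader(io.StringIO(text)):
        if not row:
            continue  # skip blank lines
        name, onset, offset, azi, ele, dist = row
        events.append({
            "event": name,
            "onset_s": float(onset),
            "offset_s": float(offset),
            "azimuth_deg": float(azi),
            "elevation_deg": float(ele),
            "distance_m": float(dist),
        })
    return events

events = parse_metadata(SAMPLE)
print(len(events), events[0]["event"])
```

In practice one would open each metadata file from the extracted zip (e.g. the metadata folder of ov3_split1.zip) and adjust the column mapping to whatever layout the files actually use.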
Date made available: 30 Apr 2018
Publisher: Zenodo

Field of science (Statistics Finland)

  • 113 Computer and information sciences
