Abstract
Many common consumer devices use short sound indications to signal various modes of their functionality, such as the start and the end of their operation. This can produce an intuitive auditory human-machine interaction, imparting semantic content to the sounds used. In this paper we investigate sound patterns mapped to "Start" and "End" of operation manifestations and explore whether the perception of these semantics is based on users' prior auditory training or on sound patterns that naturally convey the appropriate information. To this end, listening and machine learning tests were conducted. The obtained results indicate a strong relation between acoustic cues and semantics, and show that no prior knowledge is needed for the message to be conveyed.
| Original language | English |
|---|---|
| Title of host publication | Audio Engineering Society Convention 134 |
| Publisher | AES Audio Engineering Society |
| Number of pages | 9 |
| Publication status | Published - May 2013 |
| Publication type | B3 Article in conference proceedings |