Towards Sonification in Multimodal and User-friendly Explainable Artificial Intelligence

Björn W. Schuller, Tuomas Virtanen, Maria Riveiro, Georgios Rizos, Jing Han, Annamaria Mesaros, Konstantinos Drossos

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review


We are largely used to hearing explanations. For example, if someone thinks you are sad today, they might reply to your "why?" with "because you were so Hmmmmm-mmm-mmm". Today's Artificial Intelligence (AI), however, is - if at all - largely providing explanations of decisions in a visual or textual manner. While such approaches are good for communication via visual media such as in research papers or screens of intelligent devices, they may not always be the best way to explain; especially when the end user is not an expert. In particular, when the AI's task is about Audio Intelligence, visual explanations appear less intuitive than audible, sonified ones. Sonification also has great potential for explainable AI (XAI) in systems that deal with non-audio data - for example, because it does not require visual contact or active attention of a user. Hence, sonified explanation of AI decisions is a challenging, yet highly promising and pioneering task. It involves incorporating innovative XAI algorithms to allow pointing back at the learning data responsible for decisions made by an AI, and to include decomposition of the data to identify salient aspects. It further aims to identify the components of the preprocessing, feature representation, and learnt attention patterns that are responsible for the decisions. Finally, it targets decision-making at the model level, to provide a holistic explanation of the chain of processing in typical pattern recognition problems from end to end. Sonified AI explanations will need to unite methods for sonification of the identified aspects that benefit decisions, decomposition and recomposition of audio to sonify which parts in the audio were responsible for the decision, and rendering attention patterns and salient feature representations audible.
Benchmarking sonified XAI is challenging, as it will require a comparison against a backdrop of existing, state-of-the-art visual and textual alternatives, as well as synergistic complementation of all modalities in user evaluations. Sonified AI explanations will need to target different user groups, allowing personalisation of the sonification experience for different user needs and leading to a major breakthrough in the comprehensibility of AI via hearing how decisions are made, hence supporting the trustability of tomorrow's humane AI. Here, we introduce and motivate the general idea, and provide accompanying considerations including milestones for the realisation of sonified XAI and foreseeable risks.
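The idea of "decomposition and recomposition of audio to sonify which parts in the audio were responsible for the decision" can be illustrated with a minimal sketch: mask a signal's time–frequency representation with a saliency map and resynthesise it, so that only the regions a model deemed relevant remain audible. The function name and the hand-crafted band-pass saliency mask below are purely illustrative assumptions; in a real system the mask would come from model attributions (e.g. attention weights or gradient-based relevance), which the paper does not prescribe.

```python
import numpy as np
from scipy.signal import stft, istft

def sonify_salient_regions(x, fs, saliency, nperseg=256):
    """Resynthesise only the time-frequency regions marked salient,
    so the explanation itself can be listened to.

    `saliency` is a stand-in for real model attributions, given here
    as a mask with the same shape as the STFT of `x`.
    """
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    Z_masked = Z * saliency                      # keep only "responsible" bins
    _, x_expl = istft(Z_masked, fs=fs, nperseg=nperseg)
    return x_expl

# Toy example: a 440 Hz tone plus noise; a hypothetical explainer
# attributes the decision to the tonal band only.
fs = 8000
n = fs  # one second of audio
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 440 * np.arange(n) / fs) + 0.1 * rng.standard_normal(n)

f, _, Z = stft(x, fs=fs, nperseg=256)
mask = (np.abs(f[:, None] - 440.0) < 50.0).astype(float)  # band around 440 Hz
mask = np.broadcast_to(mask, Z.shape)

x_expl = sonify_salient_regions(x, fs, mask)  # audible explanation signal
```

Listening to `x_expl` versus `x`, the broadband noise is suppressed and only the tone "responsible" for the hypothetical decision survives, which is the recomposition step the abstract alludes to.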

Original language: English
Title of host publication: ICMI 2021 - Proceedings of the 2021 International Conference on Multimodal Interaction
Number of pages: 5
ISBN (Electronic): 9781450384810
Publication status: Published - 18 Oct 2021
Publication type: A4 Article in conference proceedings
Event: ACM International Conference on Multimodal Interaction
Duration: 18 Oct 2021 - 22 Oct 2021

Publication series

Name: ICMI 2021 - Proceedings of the 2021 International Conference on Multimodal Interaction


Conference: ACM International Conference on Multimodal Interaction


Keywords

  • Explainable artificial intelligence
  • human computer interaction
  • multimodality
  • sonification
  • trustworthy artificial intelligence

Publication forum classification

  • Publication forum level 1

ASJC Scopus subject areas

  • Computer Science Applications
  • Computer Vision and Pattern Recognition
  • Hardware and Architecture
  • Human-Computer Interaction


