TY - GEN
T1 - Towards Sonification in Multimodal and User-friendly Explainable Artificial Intelligence
AU - Schuller, Björn W.
AU - Virtanen, Tuomas
AU - Riveiro, Maria
AU - Rizos, Georgios
AU - Han, Jing
AU - Mesaros, Annamaria
AU - Drossos, Konstantinos
N1 - Funding Information:
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 826506 (sustAGE).
Publisher Copyright:
© 2021 ACM.
PY - 2021/10/18
Y1 - 2021/10/18
N2 - We are largely used to hearing explanations. For example, if someone thinks you are sad today, they might reply to your "why?" with "because you were so Hmmmmm-mmm-mmm". Today's Artificial Intelligence (AI), however, is - if at all - largely providing explanations of decisions in a visual or textual manner. While such approaches are good for communication via visual media such as in research papers or screens of intelligent devices, they may not always be the best way to explain; especially when the end user is not an expert. In particular, when the AI's task is about Audio Intelligence, visual explanations appear less intuitive than audible, sonified ones. Sonification also has great potential for explainable AI (XAI) in systems that deal with non-audio data - for example, because it does not require visual contact or active attention of a user. Hence, sonified explanations of AI decisions face a challenging, yet highly promising and pioneering task. That involves incorporating innovative XAI algorithms to allow pointing back at the learning data responsible for decisions made by an AI, and to include decomposition of the data to identify salient aspects. It further aims to identify the components of the preprocessing, feature representation, and learnt attention patterns that are responsible for the decisions. Finally, it targets decision-making at the model-level, to provide a holistic explanation of the chain of processing in typical pattern recognition problems from end-to-end. Sonified AI explanations will need to unite methods for sonification of the identified aspects that benefit decisions, decomposition and recomposition of audio to sonify which parts in the audio were responsible for the decision, and rendering attention patterns and salient feature representations audible.
Benchmarking sonified XAI is challenging, as it will require a comparison against a backdrop of existing, state-of-the-art visual and textual alternatives, as well as synergistic complementation of all modalities in user evaluations. Sonified AI explanations will need to target different user groups to allow personalisation of the sonification experience for different user needs, to lead to a major breakthrough in comprehensibility of AI via hearing how decisions are made, hence supporting tomorrow's humane AI's trustability. Here, we introduce and motivate the general idea, and provide accompanying considerations including milestones of realisation of sonified XAI and foreseeable risks.
AB - We are largely used to hearing explanations. For example, if someone thinks you are sad today, they might reply to your "why?" with "because you were so Hmmmmm-mmm-mmm". Today's Artificial Intelligence (AI), however, is - if at all - largely providing explanations of decisions in a visual or textual manner. While such approaches are good for communication via visual media such as in research papers or screens of intelligent devices, they may not always be the best way to explain; especially when the end user is not an expert. In particular, when the AI's task is about Audio Intelligence, visual explanations appear less intuitive than audible, sonified ones. Sonification also has great potential for explainable AI (XAI) in systems that deal with non-audio data - for example, because it does not require visual contact or active attention of a user. Hence, sonified explanations of AI decisions face a challenging, yet highly promising and pioneering task. That involves incorporating innovative XAI algorithms to allow pointing back at the learning data responsible for decisions made by an AI, and to include decomposition of the data to identify salient aspects. It further aims to identify the components of the preprocessing, feature representation, and learnt attention patterns that are responsible for the decisions. Finally, it targets decision-making at the model-level, to provide a holistic explanation of the chain of processing in typical pattern recognition problems from end-to-end. Sonified AI explanations will need to unite methods for sonification of the identified aspects that benefit decisions, decomposition and recomposition of audio to sonify which parts in the audio were responsible for the decision, and rendering attention patterns and salient feature representations audible.
Benchmarking sonified XAI is challenging, as it will require a comparison against a backdrop of existing, state-of-the-art visual and textual alternatives, as well as synergistic complementation of all modalities in user evaluations. Sonified AI explanations will need to target different user groups to allow personalisation of the sonification experience for different user needs, to lead to a major breakthrough in comprehensibility of AI via hearing how decisions are made, hence supporting tomorrow's humane AI's trustability. Here, we introduce and motivate the general idea, and provide accompanying considerations including milestones of realisation of sonified XAI and foreseeable risks.
KW - Explainable artificial intelligence
KW - human computer interaction
KW - multimodality
KW - sonification
KW - trustworthy artificial intelligence
U2 - 10.1145/3462244.3479879
DO - 10.1145/3462244.3479879
M3 - Conference contribution
AN - SCOPUS:85118971526
T3 - ICMI 2021 - Proceedings of the 2021 International Conference on Multimodal Interaction
SP - 788
EP - 792
BT - ICMI 2021 - Proceedings of the 2021 International Conference on Multimodal Interaction
PB - ACM
T2 - ACM International Conference on Multimodal Interaction
Y2 - 18 October 2021 through 22 October 2021
ER -