TY - GEN
T1 - Attention-Based Methods For Audio Question Answering
AU - Sudarsanam, Parthasaarathy
AU - Virtanen, Tuomas
N1 - Publisher Copyright:
© 2023 European Signal Processing Conference, EUSIPCO. All rights reserved.
PY - 2023
Y1 - 2023
N2 - Audio question answering (AQA) is the task of producing natural language answers when a system is provided with audio and natural language questions. In this paper, we propose neural network architectures based on self-attention and cross-attention for the AQA task. The self-attention layers extract powerful audio and textual representations. The cross-attention layers map the audio features relevant to the textual features to produce answers. All our models are trained on the recently proposed Clotho-AQA dataset for both binary yes/no questions and single-word answer questions. Our results clearly show improvement over the reference method reported in the original paper. On the yes/no binary classification task, our proposed model achieves an accuracy of 68.3% compared to 62.7% for the reference model. For the single-word-answer multiclass classifier, our model achieves top-1 and top-5 accuracies of 57.9% and 99.8%, compared to 54.2% and 93.7%, respectively, for the reference model. We further discuss some of the challenges in the Clotho-AQA dataset, such as the presence of the same answer word in multiple tenses and in singular and plural forms, and the presence of both specific and generic answers to the same question. We address these issues and present a revised version of the dataset.
AB - Audio question answering (AQA) is the task of producing natural language answers when a system is provided with audio and natural language questions. In this paper, we propose neural network architectures based on self-attention and cross-attention for the AQA task. The self-attention layers extract powerful audio and textual representations. The cross-attention layers map the audio features relevant to the textual features to produce answers. All our models are trained on the recently proposed Clotho-AQA dataset for both binary yes/no questions and single-word answer questions. Our results clearly show improvement over the reference method reported in the original paper. On the yes/no binary classification task, our proposed model achieves an accuracy of 68.3% compared to 62.7% for the reference model. For the single-word-answer multiclass classifier, our model achieves top-1 and top-5 accuracies of 57.9% and 99.8%, compared to 54.2% and 93.7%, respectively, for the reference model. We further discuss some of the challenges in the Clotho-AQA dataset, such as the presence of the same answer word in multiple tenses and in singular and plural forms, and the presence of both specific and generic answers to the same question. We address these issues and present a revised version of the dataset.
KW - attention mechanism
KW - Audio question answering
KW - Clotho-AQA
U2 - 10.23919/EUSIPCO58844.2023.10289751
DO - 10.23919/EUSIPCO58844.2023.10289751
M3 - Conference contribution
AN - SCOPUS:85178379958
T3 - European Signal Processing Conference
SP - 750
EP - 754
BT - 31st European Signal Processing Conference, EUSIPCO 2023 - Proceedings
PB - European Signal Processing Conference, EUSIPCO
T2 - European Signal Processing Conference
Y2 - 4 September 2023 through 8 September 2023
ER -