Unsupervised Audio-Caption Aligning Learns Correspondences between Individual Sound Events and Textual Phrases

Huang Xie, Okko Räsänen, Konstantinos Drossos, Tuomas Virtanen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

13 Citations (Scopus)
15 Downloads (Pure)

Abstract

We investigate unsupervised learning of correspondences between sound events and textual phrases by aligning audio clips with textual captions that describe the content of a whole audio clip. We align originally unaligned and unannotated audio clips and their captions by scoring similarities between audio frames and words, as encoded by modality-specific encoders, and by optimizing the model with a ranking-loss criterion. After training, we obtain clip-caption similarity by averaging frame-word similarities and estimate event-phrase correspondences by calculating frame-phrase similarities. We evaluate the method with two cross-modal tasks: audio-caption retrieval, and phrase-based sound event detection (SED). Experimental results show that the proposed method can globally associate audio clips with captions as well as locally learn correspondences between individual sound events and textual phrases in an unsupervised manner.
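The aggregation described in the abstract can be illustrated with a short sketch: frame-word similarities are computed between the two encoders' outputs, averaged into a clip-caption score, and trained with a ranking loss. This is only a minimal illustration, assuming cosine similarity between L2-normalized embeddings and a hinge-style margin loss; the encoder architectures, margin value, and exact loss formulation of the paper are not specified here and are assumptions.

```python
import torch
import torch.nn.functional as F

def frame_word_similarity(audio_frames: torch.Tensor, word_embeddings: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between every audio frame and every word.

    audio_frames:    (T, D) frame embeddings from an audio encoder (assumed)
    word_embeddings: (N, D) word embeddings from a text encoder (assumed)
    Returns a (T, N) frame-word similarity matrix.
    """
    a = F.normalize(audio_frames, dim=-1)
    w = F.normalize(word_embeddings, dim=-1)
    return a @ w.T

def clip_caption_similarity(sim_matrix: torch.Tensor) -> torch.Tensor:
    """Aggregate frame-word similarities into one clip-caption score
    by averaging, as described in the abstract."""
    return sim_matrix.mean()

def ranking_loss(sim_pos: torch.Tensor, sim_neg: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Hinge-style ranking loss: a matching clip-caption pair should score
    higher than a mismatched pair by at least `margin`.
    (The margin value and exact loss form are assumptions, not the paper's.)"""
    return F.relu(margin - sim_pos + sim_neg).mean()

# Example usage with random embeddings standing in for encoder outputs.
frames = torch.randn(100, 256)          # 100 audio frames
caption_words = torch.randn(12, 256)    # 12 words of the matching caption
other_words = torch.randn(9, 256)       # words of a mismatched caption

sim_pos = clip_caption_similarity(frame_word_similarity(frames, caption_words))
sim_neg = clip_caption_similarity(frame_word_similarity(frames, other_words))
loss = ranking_loss(sim_pos, sim_neg)
```

After training, the same frame-word similarity matrix can be restricted to the words of a single phrase to score frame-phrase correspondences for phrase-based SED, in the spirit of the abstract's description.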
Original language: English
Title of host publication: ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Publisher: IEEE
Pages: 8867-8871
Number of pages: 5
ISBN (Electronic): 978-1-6654-0540-9
ISBN (Print): 978-1-6654-0541-6
DOIs
Publication status: Published - May 2022
Publication type: A4 Article in conference proceedings
Event: IEEE International Conference on Acoustics, Speech and Signal Processing - Singapore, Singapore
Duration: 23 May 2022 - 27 May 2022

Publication series

Name: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing
ISSN (Electronic): 2379-190X

Conference

Conference: IEEE International Conference on Acoustics, Speech and Signal Processing
Country/Territory: Singapore
City: Singapore
Period: 23/05/22 - 27/05/22

Keywords

  • Cross-modal learning
  • audio
  • caption
  • sound event
  • unsupervised learning

Publication forum classification

  • Publication forum level 1
