Abstract
This paper introduces a conceptually simple and effective Deep Audio-Visual Embedding for dynamic saliency prediction, dubbed "DAVE", in conjunction with our efforts towards building an Audio-Visual Eye-tracking corpus named "AVE". Although there is a strong relation between auditory and visual cues in guiding gaze during perception, existing video saliency models consider only visual cues and neglect the auditory information that is ubiquitous in dynamic scenes. Here, we propose a baseline deep audio-visual saliency model for multi-modal saliency prediction in the wild; as a baseline, the model is intentionally designed to be simple. A video-only baseline is also developed on the same architecture to assess the effectiveness of the audio-visual model on a fair basis. We demonstrate that the audio-visual saliency model outperforms the video-only saliency models. The data and code are available at https://hrtavakoli.github.io/AVE/ and https://github.com/hrtavakoli/DAVE.
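The abstract describes a two-stream design: a visual stream over video frames and an audio stream, fused into a joint embedding that is decoded into a saliency map. The sketch below illustrates what such a baseline can look like in PyTorch. It is a minimal illustration only, not the authors' DAVE implementation; all layer choices, names, and dimensions here are hypothetical. See https://github.com/hrtavakoli/DAVE for the official code.

```python
# Minimal sketch of a two-stream audio-visual saliency baseline.
# NOT the DAVE architecture; all modules and sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AudioVisualSaliency(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        # Visual stream: 3D convolutions over a short clip of frames.
        self.visual = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(32, embed_dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Audio stream: 2D convolutions over a log-mel spectrogram,
        # pooled to a single global audio embedding.
        self.audio = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, embed_dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        # Decoder: fuse both embeddings and predict a 1-channel map.
        self.decoder = nn.Conv2d(2 * embed_dim, 1, kernel_size=1)

    def forward(self, frames, spectrogram):
        # frames: (B, 3, T, H, W); spectrogram: (B, 1, mels, time)
        v = self.visual(frames).mean(dim=2)          # pool time: (B, C, H', W')
        a = self.audio(spectrogram)                  # (B, C, 1, 1)
        a = a.expand(-1, -1, v.size(2), v.size(3))   # broadcast audio over space
        sal = self.decoder(torch.cat([v, a], dim=1)) # channel-wise fusion
        # Upsample to input resolution and normalize to a distribution.
        sal = F.interpolate(sal, size=frames.shape[-2:], mode='bilinear',
                            align_corners=False)
        b = sal.size(0)
        return F.softmax(sal.view(b, -1), dim=1).view_as(sal)


# Usage: a 16-frame 112x112 clip with a 64-band mel spectrogram.
model = AudioVisualSaliency()
frames = torch.randn(2, 3, 16, 112, 112)
spec = torch.randn(2, 1, 64, 100)
saliency = model(frames, spec)  # (2, 1, 112, 112), sums to 1 per sample
```

Broadcasting a global audio embedding across the spatial grid is one simple fusion choice among several (e.g., bilinear fusion or attention); it keeps the audio-visual model directly comparable to a video-only variant that simply drops the audio branch, which matches the paper's stated goal of a fair baseline comparison.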
| Original language | English |
|---|---|
| Title of host publication | Proceedings ETRA 2020 Short Papers - ACM Symposium on Eye Tracking Research and Applications, ETRA 2020 |
| Editors | Stephen N. Spencer |
| Publisher | ACM |
| ISBN (Electronic) | 9781450371346 |
| DOIs | |
| Publication status | Published - 6 Feb 2020 |
| Publication type | A4 Article in conference proceedings |
| Event | ACM Symposium on Eye Tracking Research and Applications - Stuttgart, Germany. Duration: 2 Jun 2020 → 5 Jun 2020 |
Conference
| Conference | ACM Symposium on Eye Tracking Research and Applications |
|---|---|
| Country/Territory | Germany |
| City | Stuttgart |
| Period | 2/06/20 → 5/06/20 |
Keywords
- Audio-Visual Saliency
- Deep Learning
- Dynamic Visual Attention
Publication forum classification
- Publication forum level 1
ASJC Scopus subject areas
- Computer Vision and Pattern Recognition
- Human-Computer Interaction
- Ophthalmology
- Sensory Systems