Visual Voice Activity Detection based on Spatiotemporal Information and Bag of Words

Foteini Patrona, Alexandros Iosifidis, Anastasios Tefas, Ioannis Pitas

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

    2 Citations (Scopus)

    Abstract

    This paper proposes a novel method for Visual Voice Activity Detection (V-VAD) that exploits local shape and motion information appearing at spatiotemporal locations of interest for facial region video description, together with the Bag of Words (BoW) model for facial region video representation. Facial region video classification is subsequently performed by a Single-hidden Layer Feedforward Neural (SLFN) network trained by applying the recently proposed kernel Extreme Learning Machine (kELM) algorithm to training facial videos depicting talking and non-talking persons. Experimental results on two publicly available V-VAD data sets demonstrate the effectiveness of the proposed method, which achieves better generalization performance on unseen users than recently proposed state-of-the-art methods.
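
    To make the pipeline concrete, the following Python sketch illustrates the two core stages described in the abstract: quantizing local spatiotemporal descriptors against a learned codebook to form a BoW histogram, and training a kernel Extreme Learning Machine classifier on those histograms. This is a minimal sketch under assumed choices; the codebook size, descriptor dimensionality, RBF kernel, regularization constant C, and all function names are illustrative and do not reproduce the authors' exact implementation.

        # Minimal BoW + kernel ELM sketch (illustrative; not the paper's code).
        import numpy as np
        from sklearn.cluster import KMeans

        def bow_histogram(descriptors, codebook):
            # Assign each local descriptor to its nearest codeword and
            # return an L1-normalized histogram (the BoW video vector).
            d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            hist = np.bincount(d2.argmin(axis=1), minlength=len(codebook)).astype(float)
            return hist / max(hist.sum(), 1.0)

        def rbf_kernel(A, B, gamma=1.0):
            # RBF kernel matrix between two sets of row vectors (assumed kernel choice).
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def kelm_train(K, T, C=10.0):
            # Kernel ELM output weights: solve the regularized system (K + I/C) beta = T.
            return np.linalg.solve(K + np.eye(K.shape[0]) / C, T)

        # Toy data: 50 training videos, each with 100 local descriptors of dim 72.
        rng = np.random.default_rng(0)
        train_desc = [rng.normal(size=(100, 72)) for _ in range(50)]
        labels = rng.integers(0, 2, size=50)  # 0 = non-talking, 1 = talking

        codebook = KMeans(n_clusters=64, n_init=4, random_state=0).fit(
            np.vstack(train_desc)).cluster_centers_
        X = np.array([bow_histogram(d, codebook) for d in train_desc])

        T = np.where(labels[:, None] == np.arange(2)[None, :], 1.0, -1.0)  # +/-1 targets
        beta = kelm_train(rbf_kernel(X, X), T)

        # Classify a new facial region video from its BoW histogram.
        x_new = bow_histogram(rng.normal(size=(100, 72)), codebook)
        scores = rbf_kernel(x_new[None, :], X) @ beta
        print("predicted class:", int(scores.argmax()))  # 0 or 1

    A design note on the kernel formulation: in kELM the kernel matrix takes the place of the random hidden-layer output matrix of a standard ELM, so no hidden weights need to be drawn or stored; training reduces to solving one regularized linear system over the training kernel.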
    Original language: English
    Title of host publication: IEEE International Conference on Image Processing
    Pages: 2334-2338
    DOIs
    Publication status: Published - 2015
    Publication type: A4 Article in conference proceedings
