View-independent human action recognition based on multi-view action images and discriminant learning

Alexandros Iosifidis, Anastasios Tefas, Ioannis Pitas

    Research output: Chapter in Book/Report/Conference proceeding, Conference contribution, Scientific, peer-reviewed

    4 Citations (Scopus)

    Abstract

    In this paper, a novel view-independent human action recognition method is proposed. A multi-camera setup is used to capture the human body from different viewing angles. Actions are described by a novel action representation, the so-called multi-view action image (MVAI), which effectively addresses the camera viewpoint identification problem, i.e., the identification of the position of each camera with respect to the person's body. Linear Discriminant Analysis is applied on the MVAIs in order to map actions to a discriminant feature space, where actions are classified by using a simple nearest class centroid classification scheme. Experimental results demonstrate the effectiveness of the proposed action recognition approach.
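
    The recognition pipeline described in the abstract (MVAI features, LDA projection, nearest class centroid classification) can be sketched as follows. This is a minimal illustration built on randomly generated placeholder data, not the authors' implementation; the to_mvai helper, the number of views, and all array shapes are assumptions made only for the example.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.neighbors import NearestCentroid
        from sklearn.pipeline import make_pipeline

        def to_mvai(view_images):
            # Flatten and concatenate the per-view action images of one action
            # sample into a single multi-view action image (MVAI) vector
            # (assumed representation, for illustration only).
            return np.concatenate([img.ravel() for img in view_images])

        # Hypothetical data: 8 camera views, 32x32 action images, 5 action classes.
        rng = np.random.default_rng(0)
        n_views, h, w, n_classes = 8, 32, 32, 5
        X_train = np.stack([to_mvai(rng.random((n_views, h, w))) for _ in range(100)])
        y_train = rng.integers(0, n_classes, size=100)
        X_test = np.stack([to_mvai(rng.random((n_views, h, w))) for _ in range(10)])

        # LDA projects MVAI vectors to at most (n_classes - 1) discriminant
        # dimensions; test actions are then labelled by the nearest class
        # centroid in that discriminant space.
        clf = make_pipeline(LinearDiscriminantAnalysis(), NearestCentroid())
        clf.fit(X_train, y_train)
        print(clf.predict(X_test))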

    Original language: English
    Title of host publication: 2013 IEEE 11th IVMSP Workshop: 3D Image/Video Technologies and Applications, IVMSP 2013 - Proceedings
    DOIs
    Publication status: Published - 2013
    Publication type: A4 Article in conference proceedings
    Event: 2013 IEEE 11th Workshop on 3D Image/Video Technologies and Applications, IVMSP 2013 - Seoul, Korea, Republic of
    Duration: 10 Jun 2013 – 12 Jun 2013

    Conference

    Conference: 2013 IEEE 11th Workshop on 3D Image/Video Technologies and Applications, IVMSP 2013
    Country/Territory: Korea, Republic of
    City: Seoul
    Period: 10/06/13 – 12/06/13

    Keywords

    • Discriminant Learning
    • Human Action Recognition
    • Multi-camera Setup
    • Multi-view Action Images

    ASJC Scopus subject areas

    • Computer Graphics and Computer-Aided Design
    • Computer Vision and Pattern Recognition
    • Computer Science Applications
