View-invariant action recognition based on Artificial Neural Networks

Alexandros Iosifidis, Anastasios Tefas, Ioannis Pitas

    Research output: Contribution to journal › Article › Scientific › peer-review

    101 Citations (Scopus)

    Abstract

    In this paper, a novel view-invariant action recognition method based on neural network representation and recognition is proposed. The novel representation of action videos is based on learning spatially related human body posture prototypes using Self-Organizing Maps (SOMs). Fuzzy distances from the human body posture prototypes are used to produce a time-invariant action representation, and multilayer perceptrons are used for action classification. The algorithm is trained using data from a multi-camera setup, and an arbitrary number of cameras can be used to recognize actions within a Bayesian framework. The proposed method can also be applied, without any modification, to videos depicting interactions between humans. The use of information captured from different viewing angles leads to high classification performance. The proposed method is the first to be tested in such challenging experimental setups, which demonstrates its effectiveness in addressing most of the open issues in action recognition.
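    Since the portal page provides only the abstract, the following is a minimal sketch of the described pipeline, not the authors' implementation. It assumes a fuzzy c-means-style membership with a fuzzification parameter m, Euclidean distances to SOM codebook vectors, scikit-learn's MLPClassifier standing in for the paper's multilayer perceptron, and a naive product rule for the Bayesian multi-camera fusion; all names and parameter values are illustrative.

    ```python
    # Sketch of: fuzzy distances to posture prototypes -> time-invariant
    # representation -> MLP classification -> multi-camera fusion.
    # Assumptions (not from the paper): FCM-style memberships, m=2.0,
    # sklearn MLPClassifier, product-rule fusion of per-camera posteriors.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def fuzzy_action_representation(frames, prototypes, m=2.0, eps=1e-8):
        """Map a video (num_frames x D posture vectors) to a fixed-length vector.

        Fuzzy memberships are computed from Euclidean distances to the posture
        prototypes (e.g. SOM codebook vectors) and averaged over time, so the
        result does not depend on the video's length or speed.
        """
        # Pairwise distances: (num_frames, num_prototypes)
        d = np.linalg.norm(frames[:, None, :] - prototypes[None, :, :], axis=2) + eps
        # FCM-style fuzzy membership per frame; each row sums to 1
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
        # Averaging over frames yields the time-invariant representation
        return u.mean(axis=0)

    def fuse_camera_posteriors(posteriors):
        """Naive Bayes-style fusion: multiply per-camera class posteriors."""
        p = np.prod(np.stack(posteriors), axis=0)
        return p / p.sum()

    # Toy usage with random data standing in for posture vectors and prototypes
    rng = np.random.default_rng(0)
    prototypes = rng.normal(size=(16, 50))             # 16 posture prototypes
    videos = [rng.normal(size=(rng.integers(20, 60), 50)) for _ in range(40)]
    X = np.stack([fuzzy_action_representation(v, prototypes) for v in videos])
    y = rng.integers(0, 4, size=40)                    # 4 hypothetical action classes

    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
    # Fuse class posteriors from two (here identical) camera views
    print(fuse_camera_posteriors([clf.predict_proba(X[:1])[0],
                                  clf.predict_proba(X[:1])[0]]))
    ```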
    Original language: English
    Pages (from-to): 412-424
    Number of pages: 13
    Journal: IEEE Transactions on Neural Networks and Learning Systems
    Volume: 23
    Issue number: 3
    DOIs
    Publication status: Published - 2012
    Publication type: A1 Journal article-refereed
