Abstract
In this paper, we propose a method for video characterization based on activity description information. We employ a state-of-the-art video representation to learn human activity concepts, i.e., video groups formed by videos depicting similar human activities. To exploit the enriched visual information available in multi-view settings, we propose using the circular shift invariance property of the coefficients of the Discrete Fourier Transform (DFT), which leads to a view-independent multi-view action representation. In the test phase, to assign a test video to one (or multiple) activity groups, we perform temporal video segmentation to obtain shorter videos depicting simple actions. Experimental results on two multi-view action databases demonstrate the effectiveness of the proposed approach.
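The view-independence mentioned in the abstract rests on a standard property of the DFT: the magnitudes of a sequence's DFT coefficients are unchanged when the sequence is circularly shifted (only the phases change). A minimal sketch of that property, not the paper's actual pipeline (function and variable names are illustrative):

```python
import cmath
import math

def dft_magnitudes(x):
    """Return the magnitudes |X[k]| of the DFT of a real sequence x."""
    N = len(x)
    return [
        abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)))
        for k in range(N)
    ]

# A toy "per-view" feature sequence; a circular shift models re-indexing the views.
x = [1.0, 3.0, 2.0, 5.0, 4.0, 0.0]
shifted = x[2:] + x[:2]  # circular shift by 2 positions

mags = dft_magnitudes(x)
mags_shifted = dft_magnitudes(shifted)

# The two magnitude spectra agree to numerical precision,
# so they form a shift-invariant (view-order-independent) descriptor.
print(all(abs(a - b) < 1e-9 for a, b in zip(mags, mags_shifted)))
```

In a multi-camera setup, this means a descriptor built from DFT magnitudes over the camera dimension does not depend on which camera is labelled first, which is the motivation for the view-independent representation described above.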
| Original language | English |
|---|---|
| Title of host publication | 8th International Conference on Electrical and Computer Engineering: Advancing Technology for a Better Tomorrow, ICECE 2014 |
| Publisher | IEEE |
| Pages | 266-269 |
| Number of pages | 4 |
| ISBN (Print) | 9781479941667 |
| DOIs | |
| Publication status | Published - 28 Jan 2015 |
| Publication type | A4 Article in conference proceedings |
| Event | 8th International Conference on Electrical and Computer Engineering, ICECE 2014, Dhaka, Bangladesh (20 Dec 2014 → 22 Dec 2014) |
Conference
| Conference | 8th International Conference on Electrical and Computer Engineering, ICECE 2014 |
|---|---|
| Country/Territory | Bangladesh |
| City | Dhaka |
| Period | 20/12/14 → 22/12/14 |
Keywords
- Activity clustering
- Multi-camera setup
- Temporal video segmentation
- Video characterization
ASJC Scopus subject areas
- Control and Systems Engineering
- Electrical and Electronic Engineering