Efficient Human Activity Recognition in Large Image and Video Databases
Vision-based human action recognition has attracted considerable interest in recent research for its applications to video surveillance, content-based search, healthcare, and interactive games. Most existing research deals with building informative feature descriptors, designing efficient and robust algorithms, proposing versatile and challenging datasets, and fusing multiple modalities. Often, these approaches build on certain conventions such as the use of motion cues to determine video descriptors, application of off-the-shelf classifiers, and single-factor classification of videos. In this thesis, we deal with important but overlooked issues such as efficiency, simplicity, and scalability of human activity recognition in different application scenarios: controlled video environments (e.g., indoor surveillance), unconstrained videos (e.g., YouTube), depth or skeletal data (e.g., captured by Kinect), and person images (e.g., Flickr). In particular, we are interested in answering questions such as: (a) is it possible to efficiently recognize human actions in controlled videos without temporal cues? (b) given that large-scale unconstrained video data are often of a high-dimension, low-sample-size (HDLSS) nature, how can human actions be efficiently recognized in such data? (c) considering the rich 3D motion information available from depth or motion capture sensors, is it possible to recognize both the actions and the actors using only the motion dynamics of the underlying activities? and (d) can motion information from monocular videos be used to automatically determine saliency regions for recognizing actions in still images?
Automatic learning of gait signatures for people identification
This work targets people identification in video based on the way they walk
(i.e. gait). While classical methods typically derive gait signatures from
sequences of binary silhouettes, in this work we explore the use of
convolutional neural networks (CNN) for learning high-level descriptors from
low-level motion features (i.e. optical flow components). We carry out a
thorough experimental evaluation of the proposed CNN architecture on the
challenging TUM-GAID dataset. The experimental results indicate that using
spatio-temporal cuboids of optical flow as input to the CNN yields
state-of-the-art results on the gait task at an image resolution eight times
lower than previously reported (i.e. 80x60 pixels).

Comment: Proof-of-concept paper. Technical report on the use of ConvNets (CNN)
for gait recognition. Data and code:
http://www.uco.es/~in1majim/research/cnngaitof.htm
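The abstract's key input representation can be sketched as follows: per-frame optical-flow components (u, v) at the reported 80x60 resolution are stacked into a single spatio-temporal cuboid that serves as the CNN input volume. This is a minimal illustration, not the authors' implementation; the number of frames stacked (T) and the channel layout are assumptions for the sketch.

```python
import numpy as np

T = 25        # number of consecutive frames per cuboid (assumed, not from the paper)
H, W = 60, 80 # image resolution reported in the abstract (80x60 pixels)

def flow_cuboid(flows):
    """Stack per-frame optical-flow maps into one CNN input volume.

    flows: list of T arrays of shape (H, W, 2), holding the horizontal (u)
    and vertical (v) optical-flow components per pixel.
    Returns an array of shape (2*T, H, W): one channel per flow component
    per frame, which a standard 2D CNN can consume as a multi-channel image.
    """
    assert all(f.shape == (H, W, 2) for f in flows)
    # Move the (u, v) component axis in front of the spatial axes, then
    # concatenate all frames along the channel dimension.
    return np.concatenate([np.transpose(f, (2, 0, 1)) for f in flows], axis=0)

# Example with synthetic flow fields:
flows = [np.random.randn(H, W, 2).astype(np.float32) for _ in range(T)]
cuboid = flow_cuboid(flows)
print(cuboid.shape)  # (50, 60, 80)
```

Feeding flow components as channels (rather than raw pixels) is what lets such a network learn motion-based gait descriptors even at low spatial resolution.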