Action Recognition in Videos: from Motion Capture Labs to the Web
This paper presents a survey of human action recognition approaches based on
visual data recorded from a single video camera. We propose an organizing
framework that highlights the evolution of the area, with techniques
moving from heavily constrained motion capture scenarios towards more
challenging, realistic, "in the wild" videos. The proposed organization is
based on the representation used as input for the recognition task,
emphasizing the hypotheses assumed and thus the constraints imposed on the
type of video that each technique is able to address. Making these hypotheses
and constraints explicit renders the framework particularly useful for
selecting a method given an application. Another advantage of the proposed
organization is that it
allows categorizing the newest approaches seamlessly alongside traditional
ones, while providing an insightful perspective on the evolution of the action
task up to now. That perspective is the basis for the discussion in the end of
the paper, where we also present the main open issues in the area.Comment: Preprint submitted to CVIU, survey paper, 46 pages, 2 figures, 4
table
Descriptive temporal template features for visual motion recognition
In this paper, a human action recognition system is proposed. The system is based on new, descriptive `temporal template' features in order to achieve high-speed recognition in real-time, embedded applications. The limitations of the well-known `Motion History Image' (MHI) temporal template are addressed, and a new `Motion History Histogram' (MHH) feature is proposed to capture more of the motion information in the video. MHH not only provides rich motion information, but also remains computationally inexpensive. To further improve classification performance, we combine both MHI and MHH into a low-dimensional feature vector which is processed by a support vector machine (SVM). Experimental results show that our new representation can achieve a significant improvement in the performance of human action recognition over existing comparable methods, which use 2D temporal-template-based representations.
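The MHI representation the abstract builds on has a standard update rule: pixels where motion is detected are stamped with a maximum value tau, and all other pixels decay toward zero, so recent motion appears brighter than older motion. A minimal NumPy sketch of that standard rule (this illustrates MHI only, not the paper's MHH extension; the function name and toy frames are illustrative):

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=30):
    """One step of the Motion History Image (MHI) update.

    Pixels flagged by `motion_mask` are set to the timestamp value
    `tau`; all other pixels decay by 1 toward 0. The result encodes
    both where and how recently motion occurred.
    """
    return np.where(motion_mask, tau, np.maximum(mhi - 1, 0))

# Toy example: motion sweeps across a 1x4 "frame" over 3 steps.
mhi = np.zeros(4, dtype=int)
for mask in ([1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]):
    mhi = update_mhi(mhi, np.array(mask, dtype=bool), tau=3)
print(mhi)  # [1 2 3 0] -- older motion has decayed, newest is brightest
```

The decay is what makes the template `temporal': a single 2D image summarizes a short motion history, which is exactly the property MHH extends by recording per-pixel histograms of motion patterns rather than a single recency value.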
Non-sparse Linear Representations for Visual Tracking with Online Reservoir Metric Learning
Most sparse linear representation-based trackers need to solve a
computationally expensive L1-regularized optimization problem. To address this
problem, we propose a visual tracker based on non-sparse linear
representations, which admit an efficient closed-form solution without
sacrificing accuracy. Moreover, in order to capture the correlation information
between different feature dimensions, we learn a Mahalanobis distance metric in
an online fashion and incorporate the learned metric into the optimization
problem for obtaining the linear representation. We show that online metric
learning using proximity comparison significantly improves the robustness of
the tracking, especially on those sequences exhibiting drastic appearance
changes. Furthermore, in order to prevent the unbounded growth in the number of
training samples for the metric learning, we design a time-weighted reservoir
sampling method to maintain and update limited-sized foreground and background
sample buffers for balancing sample diversity and adaptability. Experimental
results on challenging videos demonstrate the effectiveness and robustness of
the proposed tracker.
Comment: Appearing in IEEE Conf. Computer Vision and Pattern Recognition, 201
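The efficiency claim above rests on a well-known fact: replacing the L1 penalty of sparse coding with an L2 penalty turns the representation problem into ridge regression, which has a closed-form solution. A hedged sketch of that closed form, with a Mahalanobis metric M weighting the reconstruction error (the function name and toy data are illustrative; the paper's online metric-learning step is omitted here):

```python
import numpy as np

def linear_representation(D, x, M=None, lam=0.1):
    """Closed-form non-sparse (L2-regularized) linear representation.

    Solves  min_c ||x - D c||_M^2 + lam ||c||^2,
    where ||v||_M^2 = v^T M v uses a Mahalanobis metric M
    (identity if None). Unlike L1-regularized sparse coding,
    this objective admits the closed-form solution
        c = (D^T M D + lam I)^{-1} D^T M x.
    """
    if M is None:
        M = np.eye(D.shape[0])
    A = D.T @ M @ D + lam * np.eye(D.shape[1])
    return np.linalg.solve(A, D.T @ M @ x)

# Toy check: as lam -> 0 and x lies in the span of the template
# matrix D, the representation recovers the true coefficients.
D = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
c_true = np.array([0.5, -0.2])
x = D @ c_true
c = linear_representation(D, x, lam=1e-10)
print(np.allclose(c, c_true, atol=1e-4))  # True
```

Solving one small linear system per candidate is far cheaper than iterative L1 optimization, which is the practical advantage the abstract points to.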