Indoor Activity Detection and Recognition for Sport Games Analysis
Activity recognition in sport is an attractive field for computer vision
research. Game, player and team analysis are of great interest and research
topics within this field emerge with the goal of automated analysis. The very
specific underlying rules of sports can be used as prior knowledge for the
recognition task and present a constrained environment for evaluation. This
paper describes recognition of single player activities in sport with special
emphasis on volleyball. Starting from a per-frame player-centered activity
recognition, we incorporate geometry and contextual information via an activity
context descriptor that collects information about all players' activities over
a certain timespan relative to the investigated player. The benefit of this
context information on single player activity recognition is evaluated on our
new real-life dataset comprising almost 36k annotated frames
containing 7 activity classes within 6 videos of professional volleyball games.
Incorporating this contextual information improves the average
player-centered classification performance of 77.56% by up to 18.35% on
specific classes, showing that spatio-temporal context is an important cue for
activity recognition.
Comment: Part of the OAGM 2014 proceedings (arXiv:1404.3538)
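The activity context descriptor described above can be sketched as a time-windowed histogram of the other players' activity labels around the investigated player. The class names, data layout, and uniform weighting below are illustrative assumptions, not the paper's exact design:

```python
from collections import Counter

# Illustrative activity classes; the paper's 7 classes may differ.
ACTIVITIES = ["serve", "reception", "setting", "attack", "block", "defense", "stand"]

def activity_context(annotations, player_id, frame, window=30):
    """Normalized histogram of all *other* players' activity labels
    within +/- `window` frames of `frame`.

    `annotations` maps (frame, player_id) -> activity label.
    Returns a list of per-class frequencies summing to 1.
    """
    counts = Counter()
    for (f, p), label in annotations.items():
        if p != player_id and abs(f - frame) <= window:
            counts[label] += 1
    total = sum(counts.values()) or 1
    return [counts[a] / total for a in ACTIVITIES]
```

Appending such a descriptor to the per-frame, player-centered features gives the classifier access to what teammates and opponents are doing at the same moment, which is the intuition behind the reported per-class gains.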
Encouraging LSTMs to Anticipate Actions Very Early
In contrast to the widely studied problem of recognizing an action given a
complete sequence, action anticipation aims to identify the action from only
partially available videos. It is therefore key to the success of
computer vision applications that must react as early as possible, such as
autonomous navigation. In this paper, we propose a new action anticipation
method that achieves high prediction accuracy even when only a small
fraction of a video sequence has been observed. To this end, we develop a multi-stage
LSTM architecture that leverages context-aware and action-aware features, and
introduce a novel loss function that encourages the model to predict the
correct class as early as possible. Our experiments on standard benchmark
datasets demonstrate the benefits of our approach: we outperform the
state-of-the-art action anticipation methods for early prediction by a relative
increase in accuracy of 22.0% on JHMDB-21, 14.0% on UT-Interaction and 49.9% on
UCF-101.
Comment: 13 pages, 7 figures, 11 tables. Accepted at ICCV 2017. arXiv admin
note: text overlap with arXiv:1611.0552
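A loss that "encourages the model to predict the correct class as early as possible" can be illustrated by supervising the true class at every timestep of the partial sequence, not just the last one. This is a simplified sketch of the idea; the paper's actual loss involves additional time-dependent weighting:

```python
import numpy as np

def anticipation_loss(probs, target, eps=1e-8):
    """Per-timestep cross-entropy over a partially observed sequence.

    probs:  (T, C) array of per-timestep class probabilities from the LSTM.
    target: int, ground-truth class index.

    Because every timestep contributes a cross-entropy term on the true
    class, the model is pushed to commit to the correct prediction early
    rather than only once the full sequence is visible.
    (Illustrative formulation; not the paper's exact loss.)
    """
    ce = -np.log(probs[:, target] + eps)  # one CE term per timestep
    return float(np.mean(ce))
```

A model that assigns high probability to the correct class from the first frames incurs a lower loss than one that only becomes confident at the end, which is exactly the behavior anticipation requires.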
Large-Scale Mapping of Human Activity using Geo-Tagged Videos
This paper is the first work to perform spatio-temporal mapping of human
activity using the visual content of geo-tagged videos. We utilize a recent
deep-learning based video analysis framework, termed hidden two-stream
networks, to recognize a range of activities in YouTube videos. This framework
is efficient and can run in real time or faster, which is important for
recognizing events as they occur in streaming video or for reducing latency in
analyzing already captured video. This is, in turn, important for using video
in smart-city applications. We perform a series of experiments to show our
approach is able to accurately map activities both spatially and temporally. We
also demonstrate the advantages of using the visual content over the
tags/titles.
Comment: Accepted at ACM SIGSPATIAL 201
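The spatio-temporal mapping step can be sketched as aggregating per-video activity predictions into a grid keyed by location cell and hour of day. The field names, cell size, and aggregation scheme below are assumptions for illustration, not the paper's pipeline:

```python
from collections import Counter, defaultdict

def map_activities(videos, cell=0.1):
    """Aggregate recognized activities into a spatio-temporal grid.

    `videos` is an iterable of (lat, lon, hour, label) tuples, e.g. the
    top-1 label an activity recognizer assigns to each geo-tagged video.
    Returns {(lat_cell, lon_cell, hour): Counter of activity labels}.
    (Illustrative aggregation; fields and granularity are assumptions.)
    """
    grid = defaultdict(Counter)
    for lat, lon, hour, label in videos:
        key = (int(lat // cell), int(lon // cell), hour)
        grid[key][label] += 1
    return grid
```

Summing counts per cell over hours gives the spatial map; summing per hour over cells gives the temporal profile, which is the kind of two-way aggregation the experiments above evaluate.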