745 research outputs found
Efficient Action Detection in Untrimmed Videos via Multi-Task Learning
This paper studies the joint learning of action recognition and temporal
localization in long, untrimmed videos. We employ a multi-task learning
framework that performs the three highly related steps of action proposal,
action recognition, and action localization refinement in parallel instead of
the standard sequential pipeline that performs the steps in order. We develop a
novel temporal actionness regression module that estimates what proportion of a
clip contains action. We use it for temporal localization but it could have
other applications like video retrieval, surveillance, summarization, etc. We
also introduce random shear augmentation during training to simulate viewpoint
change. We evaluate our framework on three popular video benchmarks. Results
demonstrate that our joint model is efficient in terms of storage and
computation in that we do not need to compute and cache dense trajectory
features, and that it is several times faster than its sequential ConvNets
counterpart. Yet, despite being more efficient, it outperforms state-of-the-art
methods with respect to accuracy.
Comment: WACV 2017 camera ready, minor updates about test time efficiency
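A minimal sketch of how the parallel multi-task formulation described above could be wired up, assuming a shared pooled clip feature feeding three heads (proposal, recognition, actionness regression); the layer sizes, class count, and names are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    """Illustrative sketch (not the paper's code): three parallel heads over a
    shared clip feature, replacing the sequential propose-then-classify pipeline."""

    def __init__(self, feat_dim=2048, num_classes=20):      # sizes are assumed
        super().__init__()
        self.proposal = nn.Linear(feat_dim, 2)               # action vs. background score
        self.recognition = nn.Linear(feat_dim, num_classes)  # action class logits
        self.actionness = nn.Sequential(                     # fraction of the clip that
            nn.Linear(feat_dim, 1), nn.Sigmoid()             # contains action, in [0, 1]
        )

    def forward(self, clip_feat):
        return self.proposal(clip_feat), self.recognition(clip_feat), self.actionness(clip_feat)

# Usage: a batch of 4 pooled clip features
proposals, classes, actionness = MultiTaskHead()(torch.randn(4, 2048))
```

In this reading, the actionness output scores how much of each candidate clip is covered by action and can then drive the temporal localization refinement.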
Large-Scale Mapping of Human Activity using Geo-Tagged Videos
This paper is the first work to perform spatio-temporal mapping of human
activity using the visual content of geo-tagged videos. We utilize a recent
deep-learning based video analysis framework, termed hidden two-stream
networks, to recognize a range of activities in YouTube videos. This framework
is efficient and can run in real time or faster, which is important for
recognizing events as they occur in streaming video or for reducing latency in
analyzing already captured video. This is, in turn, important for using video
in smart-city applications. We perform a series of experiments to show our
approach is able to accurately map activities both spatially and temporally. We
also demonstrate the advantages of using the visual content over the
tags/titles.
Comment: Accepted at ACM SIGSPATIAL 201
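As a rough sketch of the mapping step only (the recognition model itself is the hidden two-stream network, not shown here), per-video activity predictions could be aggregated into spatio-temporal bins as below; the field names, 0.1-degree grid, and hour-of-day binning are assumptions for illustration, not the paper's exact procedure:

```python
from collections import Counter, defaultdict
from datetime import datetime

def build_activity_map(videos, cell_deg=0.1):
    """Illustrative sketch: count recognized activities per (lat/lon grid cell, hour of day)."""
    grid = defaultdict(Counter)
    for v in videos:
        cell = (round(v["lat"] / cell_deg), round(v["lon"] / cell_deg))
        grid[(cell, v["timestamp"].hour)][v["activity"]] += 1
    return grid

# Usage with one hypothetical geo-tagged, already-classified video
videos = [{"lat": 37.87, "lon": -122.27,
           "timestamp": datetime(2017, 6, 3, 14, 30), "activity": "hiking"}]
activity_map = build_activity_map(videos)
```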
- …