Human Action Recognition in Drone Videos using a Few Aerial Training Examples
Drones are enabling new forms of human action surveillance thanks to their low
cost and high mobility. However, using deep neural networks for automatic
aerial action recognition is difficult because it requires a large number of
aerial human action training videos, which are costly and time-consuming to
collect. In this paper, we
explore two alternative data sources to improve aerial action classification
when only a few training aerial examples are available. As a first data source,
we resort to video games. We collect plenty of aerial game action videos using
two gaming engines. For the second data source, we leverage conditional
Wasserstein Generative Adversarial Networks to generate aerial features from
ground videos. Both data sources have limitations: game videos are biased
toward specific action categories (fighting, shooting, etc.), and it is not
easy to generate discriminative GAN features for all types of actions. We
therefore need to efficiently integrate the two data sources with the few
available real aerial training videos. To address the heterogeneous nature of
the data, we propose a disjoint multitask learning framework. We feed the
network with real and game, or real
and GAN-generated data in an alternating fashion to obtain an improved action
classifier. We validate the proposed approach on two aerial action datasets and
demonstrate that features from aerial game videos and GAN-generated features
can substantially improve action recognition in real aerial videos when only a
few real aerial training examples are available.

Comment: CVIU, 202
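The conditional feature-generation idea can be illustrated with a minimal sketch. The paper does not specify the architecture, so the linear generator and critic below, the feature dimensionality, and the class count are all illustrative assumptions; the only parts taken from the abstract are that a conditional Wasserstein GAN maps ground-video information plus an action label to an aerial-style feature, and that a critic scores real versus generated features.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 5   # assumed number of action classes (illustrative)
NOISE_DIM = 16    # assumed noise dimensionality
FEAT_DIM = 32     # assumed video-feature dimensionality

# Hypothetical linear generator: maps [noise ; one-hot label] -> aerial-style feature.
W_g = rng.normal(scale=0.1, size=(NOISE_DIM + NUM_CLASSES, FEAT_DIM))
# Hypothetical linear critic: maps a feature vector to a scalar Wasserstein score.
w_c = rng.normal(scale=0.1, size=(FEAT_DIM,))

def one_hot(labels, num_classes=NUM_CLASSES):
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

def generate(labels):
    """Generate one fake 'aerial' feature per requested action class."""
    z = rng.normal(size=(len(labels), NOISE_DIM))
    return np.concatenate([z, one_hot(labels)], axis=1) @ W_g

labels = np.array([0, 1, 2, 3])
real_feats = rng.normal(size=(4, FEAT_DIM))  # stand-in for extracted features
fake_feats = generate(labels)

# Wasserstein critic objective: maximize E[critic(real)] - E[critic(fake)];
# in a real setup this gap would drive gradient updates of W_g and w_c.
wass_gap = (real_feats @ w_c).mean() - (fake_feats @ w_c).mean()
print(fake_feats.shape)
```

Conditioning on the label is what lets the trained generator synthesize class-specific aerial features on demand for the few-shot classifier.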
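The alternating feeding scheme of the disjoint multitask framework can likewise be sketched. The abstract only states that real and auxiliary (game or GAN-generated) data are fed in an alternating fashion, with each source handled as a separate task; the scheduler below is an assumed minimal realization of that schedule, and the string batch names are placeholders.

```python
from itertools import cycle

def alternating_batches(real_batches, aux_batches):
    """Yield (source, batch) pairs, alternating real and auxiliary data.

    `aux_batches` may hold game-video or GAN-generated features; in the
    disjoint-multitask idea each source would drive its own classification
    head while sharing the backbone, so the trainer needs to know the source.
    """
    aux = cycle(aux_batches)        # auxiliary data is plentiful; recycle it
    for real in real_batches:
        yield "real", real          # step on the real-data task
        yield "aux", next(aux)      # step on the auxiliary-data task

# Placeholder batches: few real aerial examples, fewer auxiliary chunks.
real = [f"real_{i}" for i in range(3)]
game = [f"game_{i}" for i in range(2)]
schedule = list(alternating_batches(real, game))
print(schedule)
```

Because the auxiliary stream is cycled, the scarce real aerial batches set the epoch length while the abundant auxiliary data is reused, which matches the few-shot setting described in the abstract.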