A Comprehensive Performance Evaluation of Deformable Face Tracking "In-the-Wild"
Recently, technologies such as face detection, facial landmark localisation
and face recognition and verification have matured enough to provide effective
and efficient solutions for imagery captured under arbitrary conditions
(referred to as "in-the-wild"). This is partially attributed to the fact that
comprehensive "in-the-wild" benchmarks have been developed for face detection,
landmark localisation and recognition/verification. A very important technology
that has not been thoroughly evaluated yet is deformable face tracking
"in-the-wild". Until now, the performance has mainly been assessed
qualitatively by visually assessing the result of a deformable face tracking
technology on short videos. In this paper, we perform the first, to the best of
our knowledge, thorough evaluation of state-of-the-art deformable face tracking
pipelines using the recently introduced 300VW benchmark. We evaluate many
different architectures focusing mainly on the task of on-line deformable face
tracking. In particular, we compare the following general strategies: (a)
generic face detection plus generic facial landmark localisation, (b) generic
model free tracking plus generic facial landmark localisation, as well as (c)
hybrid approaches using state-of-the-art face detection, model free tracking
and facial landmark localisation technologies. Our evaluation reveals future
avenues for further research on the topic. Comment: E. Antonakos and P. Snape contributed equally and have joint second authorship.
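The three pipeline strategies compared in the abstract above can be sketched as per-frame control flow. The detector, tracker, and landmark localiser below are stand-in stubs with invented names and outputs, not the actual models evaluated in the paper; the sketch only illustrates how the components are chained in each strategy.

```python
# Stand-in stubs (invented): each real component would be a learned model.

def detect_face(frame):
    # Generic face detector: returns a bounding box (x, y, w, h).
    return (10, 10, 50, 50)

def track_box(prev_box, frame):
    # Generic model-free tracker: propagates the previous box.
    return prev_box

def localise_landmarks(frame, box):
    # Facial landmark localisation inside a bounding box; here a single
    # dummy "landmark" at the box centre.
    x, y, w, h = box
    return [(x + w // 2, y + h // 2)]

def strategy_a(frames):
    """(a) Generic face detection + landmark localisation on every frame."""
    return [localise_landmarks(f, detect_face(f)) for f in frames]

def strategy_b(frames):
    """(b) Detect once, then model-free tracking + landmark localisation."""
    box = detect_face(frames[0])
    out = []
    for f in frames:
        box = track_box(box, f)
        out.append(localise_landmarks(f, box))
    return out

def strategy_c(frames, redetect_every=10):
    """(c) Hybrid: track, but re-detect periodically to recover from drift."""
    box = detect_face(frames[0])
    out = []
    for i, f in enumerate(frames):
        box = detect_face(f) if i % redetect_every == 0 else track_box(box, f)
        out.append(localise_landmarks(f, box))
    return out

frames = [object() for _ in range(20)]  # dummy video frames
print(len(strategy_c(frames)))  # prints 20: one landmark set per frame
```

The hybrid strategy (c) is the interesting case: re-detection bounds tracker drift, at the cost of running the (usually slower) detector every few frames.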
Human Motion Analysis for Efficient Action Recognition
Automatic understanding of human actions is at the core of several application domains, such as content-based indexing, human-computer interaction, surveillance, and sports video analysis. The recent advances in digital platforms and the exponential growth of video and image data have brought an urgent quest for intelligent frameworks to automatically analyze human motion and predict the corresponding action based on visual data and sensor signals. This thesis presents a collection of methods that targets human action recognition using different action modalities. The first method uses the appearance modality and classifies human actions based on heterogeneous global- and local-based features of scene and human-body appearances. The second method harnesses 2D and 3D articulated human poses and analyzes the body motion using a discriminative combination of the parts' velocities, locations, and correlations histograms for action recognition. The third method presents an optimal scheme for combining the probabilistic predictions from different action modalities by solving a constrained quadratic optimization problem. In addition to the action classification task, we present a study that compares the utility of different pose variants in motion analysis for human action recognition. In particular, we compare the recognition performance when 2D and 3D poses are used. Finally, we demonstrate the efficiency of our pose-based method for action recognition in spotting and segmenting motion gestures in real time from a continuous stream of an input video for the recognition of Italian sign language gestures.
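The constrained quadratic optimization mentioned above is not specified in the abstract; a minimal sketch, assuming the common least-squares formulation with two modalities, is to fit one fusion weight w minimising sum_i ||w*p1_i + (1-w)*p2_i - y_i||^2 subject to 0 <= w <= 1, which has a closed-form clipped solution. The toy classifier outputs below are invented for illustration only.

```python
# Hedged sketch: 1-D constrained least-squares fusion of two modalities'
# probabilistic predictions. With d_i = p1_i - p2_i, the unconstrained
# optimum is w = sum_i (y_i - p2_i) . d_i / sum_i ||d_i||^2, then clipped
# to the feasible interval [0, 1].

def optimal_weight(p1, p2, y):
    """Closed-form solution of the 1-D constrained fusion problem."""
    num = den = 0.0
    for a, b, t in zip(p1, p2, y):
        for ai, bi, ti in zip(a, b, t):
            d = ai - bi
            num += (ti - bi) * d
            den += d * d
    w = num / den if den > 0 else 0.5
    return min(1.0, max(0.0, w))  # project onto the constraint 0 <= w <= 1

# Toy validation set: two classes, three samples, one-hot targets (invented).
p_pose  = [[0.9, 0.1], [0.6, 0.4], [0.7, 0.3]]   # pose-based classifier
p_appea = [[0.6, 0.4], [0.1, 0.9], [0.5, 0.5]]   # appearance-based classifier
targets = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]]

w = optimal_weight(p_pose, p_appea, targets)
fused = [[w * a + (1 - w) * b for a, b in zip(pa, pb)]
         for pa, pb in zip(p_pose, p_appea)]
print(round(w, 3))  # prints 0.447
```

With more than two modalities the same idea becomes a small quadratic program over a weight vector on the probability simplex, solved with a generic QP solver rather than in closed form.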
Re-identification and semantic retrieval of pedestrians in video surveillance scenarios
Person re-identification consists of recognizing individuals across different sensors of a camera network. Whereas clothing appearance cues are widely used, other modalities could be exploited as additional information sources, like anthropometric measures and gait. In this work we investigate whether the re-identification accuracy of clothing appearance descriptors can be improved by fusing them with anthropometric measures extracted from depth data, using RGB-D sensors, in unconstrained settings. We also propose a dissimilarity-based framework for building and fusing multi-modal descriptors of pedestrian images for re-identification tasks, as an alternative to the widely used score-level fusion. The experimental evaluation is carried out on two data sets including RGB-D data, one of which is a novel, publicly available data set that we acquired using Kinect sensors.
In this dissertation we also consider a related task, named semantic retrieval of pedestrians in video surveillance scenarios, which consists of searching images of individuals using a textual description of clothing appearance as a query, given by a Boolean combination of predefined attributes. This can be useful in applications like forensic video analysis, where the query can be obtained from an eyewitness report. We propose a general method for implementing semantic retrieval as an extension of a given re-identification system that uses any multiple part-multiple component appearance descriptor. Additionally, we investigate deep learning techniques to improve both the accuracy of attribute detectors and their generalization capabilities. Finally, we experimentally evaluate our methods on several benchmark datasets originally built for re-identification tasks.
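Retrieval by a Boolean combination of predefined attributes can be sketched as fuzzy-logic evaluation over per-image attribute detector scores. The attribute names, scores, and the min/max combination rule below are all assumptions for illustration, not the method used in the dissertation.

```python
# Hedged sketch: rank gallery images against a Boolean attribute query by
# evaluating the query as fuzzy logic (AND -> min, OR -> max, NOT -> 1-x)
# over each image's attribute detector probabilities.

def query_score(attrs, query):
    """Recursively evaluate a Boolean attribute query over detector scores."""
    op, *args = query
    if op == "attr":
        return attrs[args[0]]
    if op == "and":
        return min(query_score(attrs, q) for q in args)
    if op == "or":
        return max(query_score(attrs, q) for q in args)
    if op == "not":
        return 1.0 - query_score(attrs, args[0])
    raise ValueError(op)

# Per-image attribute detector outputs (probabilities), invented for the demo.
gallery = {
    "img_001": {"red_shirt": 0.9, "blue_jeans": 0.8, "backpack": 0.1},
    "img_002": {"red_shirt": 0.2, "blue_jeans": 0.9, "backpack": 0.7},
    "img_003": {"red_shirt": 0.8, "blue_jeans": 0.3, "backpack": 0.9},
}

# Eyewitness-style query: red shirt AND (blue jeans OR backpack).
query = ("and", ("attr", "red_shirt"),
                ("or", ("attr", "blue_jeans"), ("attr", "backpack")))

ranking = sorted(gallery, key=lambda k: query_score(gallery[k], query),
                 reverse=True)
print(ranking)  # img_002 fails the red_shirt conjunct, so it ranks last
```

Soft scores rather than hard thresholds let the system return a ranked list even when no image satisfies the query exactly, which matters when attribute detectors are noisy.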