Going Deeper into Action Recognition: A Survey
Understanding human actions in visual data is tied to advances in
complementary research areas including object recognition, human dynamics,
domain adaptation and semantic segmentation. Over the last decade, human action
analysis evolved from earlier schemes that are often limited to controlled
environments to nowadays advanced solutions that can learn from millions of
videos and apply to almost all daily activities. Given the broad range of
applications from video surveillance to human-computer interaction, scientific
milestones in action recognition are achieved ever more rapidly, quickly
rendering yesterday's best methods obsolete. This motivated us to
provide a comprehensive review of the notable steps taken towards recognizing
human actions. To this end, we start our discussion with the pioneering methods
that use handcrafted representations, and then, navigate into the realm of deep
learning based approaches. We aim to remain objective throughout this survey,
touching upon encouraging improvements as well as inevitable setbacks, in the
hope of raising fresh questions and motivating new research directions for the
reader.
Subspace Representations and Learning for Visual Recognition
Pervasive and affordable sensor and storage technology enables the acquisition of an ever-rising amount of visual data. The ability to extract semantic information by interpreting, indexing and searching visual data is impacting domains such as surveillance, robotics, intelligence, human-computer interaction, navigation, healthcare, and several others. This further stimulates the investigation of automated extraction techniques that are more efficient and robust against the many sources of noise affecting the already complex visual data that carries the semantic information of interest. We address the problem by designing novel visual data representations, based on learning data subspace decompositions that are invariant against noise while being informative for the task at hand. We use this guiding principle to tackle several visual recognition problems, including detection and recognition of human interactions from surveillance video, face recognition in unconstrained environments, and domain generalization for object recognition.

By interpreting visual data with a simple additive noise model, we consider the subspaces spanned by the model portion (model subspace) and the noise portion (variation subspace). We observe that decomposing the variation subspace against the model subspace gives rise to the so-called parity subspace, while decomposing the model subspace against the variation subspace gives rise to what we name the invariant subspace. We extend the use of kernel techniques for the parity subspace. This enables modeling the highly non-linear temporal trajectories describing human behavior, and performing detection and recognition of human interactions. In addition, we introduce supervised low-rank matrix decomposition techniques for learning the invariant subspace for two other tasks: we learn invariant representations for face recognition from grossly corrupted images, and we learn object recognition classifiers that are invariant to the so-called domain bias.

Extensive experiments on the benchmark datasets publicly available for each of the three tasks show that learning representations based on subspace decompositions invariant to the sources of noise leads to results comparable to or better than the state-of-the-art.
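The additive model/variation split described above can be illustrated with a minimal sketch. This is not the work's supervised formulation: a plain NumPy SVD stands in for the learned low-rank decomposition, and all matrix sizes and noise levels here are made-up illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance of the additive model X = M + E:
# a low-rank "model" component M plus a small noise "variation" E.
n_features, n_samples, rank = 50, 200, 3
M = rng.standard_normal((n_features, rank)) @ rng.standard_normal((rank, n_samples))
E = 0.1 * rng.standard_normal((n_features, n_samples))
X = M + E

# Estimate the model subspace as the span of the top-r left singular
# vectors of X (an unsupervised SVD stand-in for the supervised
# low-rank matrix decompositions used in the work).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
U_model = U[:, :rank]              # orthonormal basis of the model subspace
P_model = U_model @ U_model.T      # projector onto the model subspace

X_model = P_model @ X              # component explained by the model subspace
X_variation = X - X_model          # residual "variation" component

# The model-subspace projection recovers M much better than raw X does.
err_raw = np.linalg.norm(X - M) / np.linalg.norm(M)
err_proj = np.linalg.norm(X_model - M) / np.linalg.norm(M)
```

Projecting the data onto the estimated model subspace strips most of the noise energy, which is the intuition behind using the invariant subspace as a noise-robust representation.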
Recent Advances in Transfer Learning for Cross-Dataset Visual Recognition: A Problem-Oriented Perspective
This paper takes a problem-oriented perspective and presents a comprehensive
review of transfer learning methods, both shallow and deep, for cross-dataset
visual recognition. Specifically, it categorises the cross-dataset recognition
into seventeen problems based on a set of carefully chosen data and label
attributes. Such a problem-oriented taxonomy has allowed us to examine how
different transfer learning approaches tackle each problem and how well each
problem has been researched to date. This comprehensive problem-oriented review
of advances in transfer learning reveals not only the challenges in transfer
learning for visual recognition, but also the problems (eight of the
seventeen) that have been scarcely studied to date. This survey not only
presents an up-to-date technical review for
researchers, but also a systematic approach and a reference for a machine
learning practitioner to categorise a real problem and look up a possible
solution accordingly.
Domain-Specific Face Synthesis for Video Face Recognition from a Single Sample Per Person
The performance of still-to-video FR systems can decline significantly
because faces captured in an unconstrained operational domain (OD) over multiple
video cameras have a different underlying data distribution compared to faces
captured under controlled conditions in the enrollment domain (ED) with a still
camera. This is particularly true when individuals are enrolled to the system
using a single reference still. To improve the robustness of these systems, it
is possible to augment the reference set by generating synthetic faces based on
the original still. However, without knowledge of the OD, many synthetic images
must be generated to account for all possible capture conditions. FR systems
may, therefore, require complex implementations and yield lower accuracy when
training on many less relevant images. This paper introduces an algorithm for
domain-specific face synthesis (DSFS) that exploits the representative
intra-class variation information available from the OD. Prior to operation, a
compact set of faces from unknown persons appearing in the OD is selected
through clustering in the captured condition space. The domain-specific
variations of these face images are projected onto the reference stills by
integrating an image-based face relighting technique inside the 3D
reconstruction framework. A compact set of synthetic faces is generated that
resemble individuals of interest under the capture conditions relevant to the
OD. In a particular implementation based on sparse representation
classification, the synthetic faces generated with the DSFS are employed to
form a cross-domain dictionary that accounts for structured sparsity.
Experimental results reveal that augmenting the reference gallery set of FR
systems using the proposed DSFS approach can provide a higher level of accuracy
compared to state-of-the-art approaches, with only a moderate increase in
computational complexity.
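The sparse representation classification step that the DSFS dictionary plugs into can be sketched as follows. This is a generic SRC baseline (greedy orthogonal matching pursuit coding over a toy dictionary of per-class atoms), not the paper's structured-sparsity implementation; every name, size, and noise level is illustrative.

```python
import numpy as np

def omp(D, y, n_nonzero=5):
    """Greedy orthogonal matching pursuit: sparse code of y over dictionary D."""
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

def src_classify(D, labels, y, n_nonzero=5):
    """Assign y to the class whose atoms best reconstruct it from its sparse code."""
    x = omp(D, y, n_nonzero)
    residuals = {}
    for c in set(labels):
        mask = np.array([lab == c for lab in labels])
        residuals[c] = np.linalg.norm(y - D[:, mask] @ x[mask])
    return min(residuals, key=residuals.get)

# Toy gallery: 3 identities, each represented by a few noisy "synthetic faces"
# (standing in for the DSFS-generated images that populate the dictionary).
rng = np.random.default_rng(1)
protos = rng.standard_normal((64, 3))
atoms, labels = [], []
for c in range(3):
    for _ in range(6):
        v = protos[:, c] + 0.1 * rng.standard_normal(64)
        atoms.append(v / np.linalg.norm(v))
        labels.append(c)
D = np.stack(atoms, axis=1)

probe = protos[:, 1] + 0.1 * rng.standard_normal(64)
predicted = src_classify(D, labels, probe)
```

Because the probe is well approximated by atoms of its own class, the per-class reconstruction residual is smallest for the correct identity, which is the core idea behind augmenting the gallery with domain-specific synthetic faces.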
- …