Review of Person Re-identification Techniques
Person re-identification across different surveillance cameras with disjoint
fields of view has become one of the most interesting and challenging subjects
in the area of intelligent video surveillance. Although several methods have
been developed and proposed, certain limitations and unresolved issues remain.
In all of the existing re-identification approaches, feature vectors are
extracted from segmented still images or video frames. Different similarity or
dissimilarity measures have been applied to these vectors. Some methods have
used simple constant metrics, whereas others have utilised models to obtain
optimised metrics. Some have created models based on local colour or texture
information, and others have built models based on the gait of people. In
general, the main objective of all these approaches is to achieve a
higher accuracy rates and lower computational costs. This study summarises
several developments in recent literature and discusses the various available
methods used in person re-identification. Specifically, their advantages and
disadvantages are discussed and compared.
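As a minimal illustration of the common pipeline this survey describes (feature vectors compared under either a simple constant metric or a learned, optimised metric), the sketch below contrasts plain Euclidean distance with a Mahalanobis-style learned metric. The feature vectors and the matrix `M` are toy stand-ins, not taken from any particular re-identification method.

```python
import numpy as np

def euclidean_dist(x, y):
    """Simple constant metric: plain Euclidean distance."""
    return float(np.linalg.norm(x - y))

def mahalanobis_dist(x, y, M):
    """Learned metric: distance under a positive semi-definite matrix M,
    as produced by metric-learning approaches."""
    d = x - y
    return float(np.sqrt(d @ M @ d))

# Toy feature vectors standing in for descriptors of two detections.
x = np.array([0.2, 0.5, 0.1])
y = np.array([0.3, 0.4, 0.2])

d_euc = euclidean_dist(x, y)
# With M = identity, the learned metric reduces to Euclidean distance.
d_mah = mahalanobis_dist(x, y, np.eye(3))
```

In practice `M` is fitted from labelled image pairs so that distances shrink for same-identity pairs and grow for different-identity pairs.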
Class-Based Feature Matching Across Unrestricted Transformations
We develop a novel method for class-based feature matching across large changes in viewing conditions. The method is based on the property that when objects share a similar part, the similarity is preserved across viewing conditions. Given a feature and a training set of object images, we first identify the subset of objects that share this feature. The transformation of the feature's appearance across viewing conditions is determined mainly by properties of the feature, rather than of the object in which it is embedded. Therefore, the transformed feature will be shared by approximately the same set of objects. Based on this consistency requirement, corresponding features can be reliably identified from a set of candidate matches. Unlike previous approaches, the proposed scheme compares feature appearances only in similar viewing conditions, rather than across different viewing conditions. As a result, the scheme is not restricted to locally planar objects or affine transformations. The approach also does not require examples of correct matches. We show that by using the proposed method, a dense set of accurate correspondences can be obtained. Experimental comparisons demonstrate that matching accuracy is significantly improved over previous schemes. Finally, we show that the scheme can be successfully used for invariant object recognition
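The consistency requirement above can be sketched as a set-overlap score: a candidate match is plausible when the set of objects sharing the transformed feature largely coincides with the set sharing the source feature. The object ids and candidate names below are hypothetical, purely for illustration.

```python
def jaccard(a, b):
    """Overlap between two object sets (Jaccard index)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical: ids of training objects that share the source feature,
# and the sharing sets of two candidate matches in the new view.
source_set = {1, 2, 3, 5}
candidates = {
    "cand_a": {1, 2, 3, 4},  # largely the same objects -> consistent
    "cand_b": {6, 7, 8},     # disjoint -> inconsistent
}

scores = {name: jaccard(source_set, s) for name, s in candidates.items()}
best = max(scores, key=scores.get)
```

Note that the score never compares feature appearance across the two viewing conditions directly, which is the property the method exploits.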
A Robust Online Method for Face Recognition under Illumination Invariant Conditions
Online face recognition must handle incremental inputs of illumination-varying face samples while maximising the class-separation criterion and accounting for the asymmetrical distribution of the training data. In this paper, we address this problem with an incremental learning algorithm that adjusts a boosted strong classifier, built from domain-partitioning weak hypotheses, to online samples, adopting a novel approach to efficiently estimate the training losses incurred on offline samples. An illumination-invariant face representation is obtained by extracting local binary pattern (LBP) features from near-infrared (NIR) images, captured by an active NIR imaging system that produces face images of good quality regardless of the visible lighting in the environment. The AdaBoost procedure is used to learn a powerful face recognition engine on this invariant representation, and incremental linear discriminant analysis (ILDA) is employed for the sparse case. The experiments show convincing results of our incremental method on challenging face detection under extreme illumination.
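A minimal sketch of the basic LBP operator used above: the eight neighbours of a pixel are thresholded against the centre value and read off as a byte. The 3x3 patch is a toy example; real systems histogram such codes over image regions to build the face descriptor.

```python
import numpy as np

def lbp_code(patch):
    """Basic 8-neighbour LBP code for a 3x3 patch: threshold neighbours
    against the centre pixel and pack the bits into one byte."""
    c = patch[1, 1]
    # Neighbours read clockwise from the top-left corner.
    neigh = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
             patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << i) for i, p in enumerate(neigh) if p >= c)

patch = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [8, 2, 3]])
code = lbp_code(patch)  # neighbours 9, 7, 8 exceed the centre 6
```

Because the code depends only on sign comparisons with the centre, it is invariant to monotonic intensity changes, which is why LBP pairs well with NIR imaging for illumination invariance.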
Multi-View Object Instance Recognition in an Industrial Context
We present a fast object recognition system coding shape by viewpoint invariant geometric relations and appearance information. In our advanced industrial work-cell, the system can observe the work space of the robot by three pairs of Kinect and stereo cameras allowing for reliable and complete object information. From these sensors, we derive global viewpoint invariant shape features and robust color features making use of color normalization techniques.
We show that in such a set-up, our system achieves high performance already with a very low number of training samples, which is crucial for user acceptance, and that the use of multiple views is crucial for performance. This indicates that our approach can be used in controlled but realistic industrial contexts that require, besides high reliability, fast processing and intuitive, easy use at the end-user side.
European Union; Danish Council for Strategic Research
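The abstract does not name its colour normalization technique, so as one plausible sketch, here is grey-world normalization, a common choice for discounting a global illuminant colour cast: each channel is rescaled so that its mean matches the overall mean. The toy image below is an assumption for illustration only.

```python
import numpy as np

def grey_world(img):
    """Grey-world colour normalisation: scale each channel so its mean
    equals the global mean, removing a uniform colour cast."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    return img * (channel_means.mean() / channel_means)

# Toy 2x2 RGB image with a strong red cast.
img = np.array([[[200, 100, 100], [220, 110, 90]],
                [[180, 90, 110], [200, 100, 100]]], dtype=np.float64)
norm = grey_world(img)
means = norm.reshape(-1, 3).mean(axis=0)  # all three channel means now equal
```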
Unconstrained Face Recognition
Although face recognition has been actively studied over the past
decade, the state-of-the-art recognition systems yield
satisfactory performance only under controlled scenarios and
recognition accuracy degrades significantly when confronted with
unconstrained situations due to variations such as illumination,
pose, etc. In this dissertation, we propose novel approaches that
are able to recognize human faces under unconstrained situations.
Part I presents algorithms for face recognition under
illumination/pose variations. For face recognition across
illuminations, we present a generalized photometric stereo
approach by modeling all face appearances belonging to all humans
under all lighting conditions. Using a linear generalization, we
achieve a factorization of the observation matrix consisting of
face appearances of different individuals, each under a different
illumination. We resolve ambiguities in factorization using
surface integrability and symmetry constraints. In addition, an
illumination-invariant identity descriptor is provided to perform
face recognition across illuminations. We further extend the
generalized photometric stereo approach to an illuminating light
field approach, which is able to recognize faces under pose and
illumination variations.
Face appearance lies in a high-dimensional nonlinear manifold. In
Part II, we introduce machine learning approaches based on
reproducing kernel Hilbert space (RKHS) to capture higher-order
statistical characteristics of the nonlinear appearance manifold.
In particular, we analyze principal components of the RKHS in a
probabilistic manner and compute distances such as the Chernoff
distance and the Kullback-Leibler divergence between two Gaussian
densities in RKHS.
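The Kullback-Leibler divergence between two Gaussian densities has a well-known closed form; the sketch below writes it in ordinary input space for simplicity (in the dissertation the Gaussians live in RKHS, where the same formula is evaluated via kernel principal components).

```python
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """Closed-form KL divergence KL(N(mu0,S0) || N(mu1,S1))."""
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

mu = np.zeros(2)
S = np.eye(2)
kl_same = kl_gaussian(mu, S, mu, S)          # identical Gaussians -> 0
kl_shifted = kl_gaussian(mu, S, mu + 1.0, S)  # mean shift raises divergence
```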
Part III is on face tracking and recognition from video. We first
present an enhanced tracking algorithm that models online
appearance changes in a video sequence using a mixture model and
produces good tracking results in various challenging scenarios.
For video-based face recognition, while conventional approaches
treat tracking and recognition separately, we present a
simultaneous tracking-and-recognition approach. This simultaneous
approach, solved using the sequential importance sampling
algorithm, improves accuracy in both tracking and recognition.
Finally, we propose a unifying framework called probabilistic
identity characterization able to perform face recognition under
registration/illumination/pose variation and from a still image,
a group of still images, or a video sequence.
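The sequential importance sampling step underlying the tracking-and-recognition approach can be sketched as a generic 1-D particle filter. The random-walk motion model, Gaussian observation likelihood, and all numeric parameters below are illustrative assumptions, not the dissertation's exact models.

```python
import numpy as np

rng = np.random.default_rng(0)

def sis_step(particles, weights, observation, motion_std=0.5, obs_std=1.0):
    """One sequential-importance-sampling step: propagate particles through
    a random-walk motion model, reweight by the observation likelihood,
    then resample to avoid weight degeneracy."""
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    weights = weights * np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Track a hidden state near 2.0 from noisy observations.
particles = rng.normal(0.0, 5.0, size=500)
weights = np.full(500, 1.0 / 500)
for obs in [2.0, 2.1, 1.9, 2.0]:
    particles, weights = sis_step(particles, weights, obs)
estimate = float(np.mean(particles))  # posterior mean concentrates near 2.0
```

In the simultaneous tracking-and-recognition setting, each particle would additionally carry an identity hypothesis, so the same weighting step sharpens both the location and the identity posterior.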
Exploiting Multiple Detections for Person Re-Identification
Re-identification systems aim at recognizing the same individuals in multiple cameras, and one of the most relevant problems is that the appearance of the same individual varies across cameras due to illumination and viewpoint changes. This paper proposes the use of cumulative weighted brightness transfer functions (CWBTFs) to model these appearance variations. Different from recently proposed methods, which only consider pairs of images to learn a brightness transfer function, we exploit a multiple-frame-based learning approach that leverages consecutive detections of each individual to transfer the appearance. We first present a CWBTF framework for the task of transforming appearance from one camera to another. We then present a re-identification framework where we segment the pedestrian images into meaningful parts and extract features from such parts, as well as from the whole body. Jointly, both of these frameworks model the appearance variations more robustly. We tested our approach on standard multi-camera surveillance datasets, showing consistent and significant improvements over existing methods on three different datasets without any additional cost. Our approach is general and can be applied to any appearance-based method.
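A single brightness transfer function of the kind a CWBTF accumulates (and weights over multiple detections) can be estimated by cumulative-histogram matching between the two cameras. The synthetic brightness values below, and the fixed +60 offset between cameras, are toy assumptions standing in for real pixel intensities of the same person seen in two views.

```python
import numpy as np

def brightness_transfer(src_vals, dst_vals, levels=256):
    """Estimate a brightness transfer function between two cameras by
    matching cumulative histograms: each source level maps to the
    destination level with the same cumulative frequency."""
    h_src, _ = np.histogram(src_vals, bins=levels, range=(0, levels))
    h_dst, _ = np.histogram(dst_vals, bins=levels, range=(0, levels))
    c_src = np.cumsum(h_src) / h_src.sum()
    c_dst = np.cumsum(h_dst) / h_dst.sum()
    # For each source level, find the destination level with matching CDF.
    return np.searchsorted(c_dst, c_src, side="left").clip(0, levels - 1)

rng = np.random.default_rng(1)
src = rng.integers(40, 120, size=5000)  # darker camera
dst = src + 60                          # same scene seen by a brighter camera
btf = brightness_transfer(src, dst)
shift = int(btf[80]) - 80               # recovered brightness offset
```

A cumulative *weighted* variant would blend such per-frame transfer functions across consecutive detections rather than relying on a single image pair.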