Review of Person Re-identification Techniques
Person re-identification across different surveillance cameras with disjoint
fields of view has become one of the most interesting and challenging subjects
in the area of intelligent video surveillance. Although several methods have
been developed and proposed, certain limitations and unresolved issues remain.
In all of the existing re-identification approaches, feature vectors are
extracted from segmented still images or video frames. Different similarity or
dissimilarity measures have been applied to these vectors. Some methods have
used simple constant metrics, whereas others have utilised models to obtain
optimised metrics. Some have created models based on local colour or texture
information, and others have built models based on the gait of people. In
general, the main objective of all these approaches is to achieve a
higher accuracy rate and lower computational costs. This study summarises
several developments in recent literature and discusses the various available
methods used in person re-identification. Specifically, their advantages and
disadvantages are discussed and compared.
Comment: Published 201
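The survey above distinguishes simple constant metrics from learned ("optimised") ones. A minimal sketch of that distinction, with invented toy feature vectors (none of this comes from the surveyed methods), might look like:

```python
import numpy as np

def euclidean_dist(a, b):
    # Simple constant metric: plain Euclidean distance between feature vectors.
    return float(np.linalg.norm(a - b))

def mahalanobis_dist(a, b, M):
    # "Optimised" metric: distance parameterised by a learned PSD matrix M.
    d = a - b
    return float(np.sqrt(d @ M @ d))

# Toy colour-histogram-like feature vectors for two detections (illustrative).
x = np.array([0.2, 0.5, 0.3])
y = np.array([0.25, 0.45, 0.30])

print(euclidean_dist(x, y))               # small value -> likely same person
print(mahalanobis_dist(x, y, np.eye(3)))  # with M = I this equals Euclidean
```

With an identity matrix the two measures coincide; metric-learning approaches fit M so that distances shrink for true matches and grow for impostors.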
Light and Motion in SDSS Stripe 82: The Catalogues
We present a new public archive of light-motion curves in Sloan Digital Sky
Survey (SDSS) Stripe 82, covering 99 deg in right ascension from RA = 20.7 h to
3.3 h and spanning 2.52 deg in declination from Dec = -1.26 to 1.26 deg, for a
total sky area of ~249 sq deg. Stripe 82 has been repeatedly monitored in the
u, g, r, i and z bands over a seven-year baseline. Objects are cross-matched
between runs, taking into account the effects of any proper motion. The
resulting catalogue contains almost 4 million light-motion curves of stellar
objects and galaxies. The photometry is recalibrated to correct for varying
photometric zeropoints, achieving ~20 mmag and ~30 mmag root-mean-square (RMS)
accuracy down to 18 mag in the g, r, i and z bands for point sources and
extended sources, respectively. The astrometry is recalibrated to correct for
inherent systematic errors in the SDSS astrometric solutions, achieving ~32 mas
and ~35 mas RMS accuracy down to 18 mag for point sources and extended sources,
respectively.
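The zeropoint recalibration described above can be illustrated with a toy example (the magnitudes and offsets below are invented; the real calibration is far more involved):

```python
import numpy as np

# Hypothetical repeat measurements of one star across runs (magnitudes),
# each run carrying its own zeropoint error (values are illustrative).
raw_mags   = np.array([18.03, 18.12, 17.97, 18.05])
zp_offsets = np.array([ 0.02,  0.10, -0.04,  0.05])  # per-run zeropoint errors

calibrated = raw_mags - zp_offsets          # remove the varying zeropoints
rms = np.sqrt(np.mean((calibrated - calibrated.mean()) ** 2))

print(calibrated)   # all measurements now cluster near 18.01 mag
print(rms * 1000)   # residual scatter in millimagnitudes, ~7 mmag here
```

Removing the per-run zeropoints is what pushes the repeat-photometry scatter down to the ~20-30 mmag level quoted in the abstract.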
For each light-motion curve, 229 photometric and astrometric quantities are
derived and stored in a higher-level catalogue. On the photometric side, these
include mean exponential and PSF magnitudes along with uncertainties, RMS
scatter, chi^2 per degree of freedom, various magnitude distribution
percentiles, object type (stellar or galaxy), and eclipse, Stetson and Vidrih
variability indices. On the astrometric side, these quantities include mean
positions, proper motions as well as their uncertainties and chi^2 per degree
of freedom. The light-motion curve catalogue presented here is complete down to
r~21.5 and is at present the deepest large-area photometric and astrometric
variability catalogue available.
Comment: MNRAS accepted
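Among the derived quantities listed above is the chi^2 per degree of freedom of each light curve against a constant-brightness model. A sketch of that statistic on invented data (not catalogue values):

```python
import numpy as np

def reduced_chi2(mags, errs):
    # chi^2 per degree of freedom of a light curve against the weighted-mean
    # (constant-brightness) model; values >> 1 flag variability.
    w = 1.0 / errs ** 2
    mean = np.sum(w * mags) / np.sum(w)
    chi2 = np.sum(((mags - mean) / errs) ** 2)
    return chi2 / (len(mags) - 1)   # one fitted parameter: the mean

# Hypothetical light curves with 0.02 mag errors: quiet star vs. one outburst.
errs  = np.full(5, 0.02)
quiet = np.array([18.00, 18.01, 17.99, 18.00, 18.02])
flare = np.array([18.00, 18.50, 17.99, 18.00, 18.02])

print(reduced_chi2(quiet, errs))  # below 1: consistent with constant
print(reduced_chi2(flare, errs))  # >> 1: significant variability
```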
Statistical Characterization of the Chandra Source Catalog
The first release of the Chandra Source Catalog (CSC) contains ~95,000 X-ray
sources in a total area of ~0.75% of the entire sky, using data from ~3,900
separate ACIS observations of a multitude of different types of X-ray sources.
In order to maximize the scientific benefit of such a large, heterogeneous
data-set, careful characterization of the statistical properties of the
catalog, i.e., completeness, sensitivity, false source rate, and accuracy of
source properties, is required. Characterization efforts of other, large
Chandra catalogs, such as the ChaMP Point Source Catalog (Kim et al. 2007) or
the 2 Mega-second Deep Field Surveys (Alexander et al. 2003), while
informative, cannot serve this purpose, since the CSC analysis procedures are
significantly different and the range of allowable data is much less
restrictive. We describe here the characterization process for the CSC. This
process includes both a comparison of real CSC results with those of other,
deeper Chandra catalogs of the same targets and extensive simulations of
blank-sky and point source populations.
Comment: To be published in the Astrophysical Journal Supplement Series (Fig. 52 replaced with a version which astro-ph can convert to PDF without issues.)
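The blank-sky and point-source simulations mentioned above estimate completeness and false-source rate by injection and recovery. A toy Monte Carlo version of that idea (the threshold detector and all numbers are illustrative assumptions, not the CSC pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

def detect(counts, background, threshold_sigma=3.0):
    # Toy detector: flag a source when counts exceed the background by
    # threshold_sigma * sqrt(background) (illustrative, not CSC's algorithm).
    return counts > background + threshold_sigma * np.sqrt(background)

background = 100.0
n_trials = 10_000

# Completeness: fraction of simulated sources of known strength recovered.
source_counts = rng.poisson(background + 50.0, n_trials)
completeness = detect(source_counts, background).mean()

# False-source rate: fraction of blank-sky trials that trigger the detector.
blank_counts = rng.poisson(background, n_trials)
false_rate = detect(blank_counts, background).mean()

print(completeness)  # high for a source this far above background
print(false_rate)    # small by construction of the threshold
```

Repeating this over a grid of injected source strengths traces out the sensitivity curve; the blank-sky trials bound the false-source rate.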
Spatial Pyramid Context-Aware Moving Object Detection and Tracking for Full Motion Video and Wide Aerial Motion Imagery
A robust and fast automatic moving object detection and tracking system is
essential to characterize the target object and extract spatial and temporal
information for different functionalities including video surveillance systems,
urban traffic monitoring and navigation, and robotics. In this dissertation, I
present a collaborative Spatial Pyramid Context-aware moving object detection
and Tracking system. The proposed visual tracker is composed of one master
tracker that usually relies on visual object features and two auxiliary
trackers based on object temporal motion information that are called
dynamically to assist the master tracker. SPCT utilizes image spatial context
at different levels to make the video tracking system resistant to occlusion
and background noise, and to improve target localization accuracy and
robustness. We
chose a pre-selected set of seven complementary feature channels, including
RGB color, intensity and a spatial pyramid of HoG, to encode object color,
shape and spatial
layout information. We exploit the integral histogram as a building block to meet the
demands of real-time performance. A novel fast algorithm is presented to
accurately evaluate spatially weighted local histograms in constant time
complexity using an extension of the integral histogram method. Different
techniques are explored to efficiently compute the integral histogram on GPU
architectures and are applied to fast spatio-temporal median computations and 3D
face reconstruction texturing. We propose a multi-component framework based on
semantic fusion of motion information with projected building footprint map to
significantly reduce the false alarm rate in urban scenes with many tall
structures. Experiments on the extensive VOTC2016 benchmark dataset and aerial
video confirm that combining complementary tracking cues in an intelligent
fusion framework enables persistent tracking for Full Motion Video and Wide
Aerial Motion Imagery.
Comment: PhD Dissertation (162 pages)
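The constant-time local-histogram evaluation described in the abstract rests on the integral-histogram idea: one cumulative image per bin, so any rectangle's histogram comes from four corner lookups. A minimal sketch (the dissertation's spatially weighted extension is not reproduced here):

```python
import numpy as np

def integral_histogram(img, n_bins):
    # One integral image per bin: ih[y, x, b] counts pixels of bin b
    # in img[0:y, 0:x] (zero-padded row/column at index 0).
    bins = np.minimum((img * n_bins).astype(int), n_bins - 1)
    ih = np.zeros((img.shape[0] + 1, img.shape[1] + 1, n_bins))
    for b in range(n_bins):
        ih[1:, 1:, b] = np.cumsum(np.cumsum(bins == b, axis=0), axis=1)
    return ih

def region_histogram(ih, y0, x0, y1, x1):
    # Histogram of img[y0:y1, x0:x1] from four corner lookups per bin:
    # O(1) in the region size, the point of the integral-histogram trick.
    return ih[y1, x1] - ih[y0, x1] - ih[y1, x0] + ih[y0, x0]

rng = np.random.default_rng(1)
img = rng.random((64, 64))      # stand-in for one intensity channel
ih = integral_histogram(img, 8)

h = region_histogram(ih, 10, 10, 30, 40)
print(h.sum())                  # equals the region's pixel count: 20*30 = 600
```

Because every candidate window costs the same four lookups per bin, sliding-window histogram matching becomes feasible at video rates.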
Person re-Identification over distributed spaces and time
PhD thesis
Replicating the human visual system and cognitive abilities that the brain uses to process the
information it receives is an area of substantial scientific interest. With the prevalence of video
surveillance cameras a portion of this scientific drive has been into providing useful automated
counterparts to human operators. A prominent task in visual surveillance is that of matching
people between disjoint camera views, or re-identification. This allows operators to locate people
of interest, to track people across cameras and can be used as a precursory step to multi-camera
activity analysis. However, due to the contrasting conditions between camera views and their
effects on the appearance of people, re-identification is a non-trivial task. This thesis proposes
solutions for reducing the visual ambiguity in observations of people between camera views.
This thesis first looks at a method for mitigating the effects on the appearance of people under
differing lighting conditions between camera views. This thesis builds on work modelling
inter-camera illumination based on known pairs of images. A Cumulative Brightness Transfer
Function (CBTF) is proposed to estimate the mapping of colour brightness values based on limited
training samples. Unlike previous methods that use a mean-based representation for a set of
training samples, the cumulative nature of the CBTF retains colour information from underrepresented
samples in the training set. Additionally, the bi-directionality of the mapping function
is explored to maximise re-identification accuracy by ensuring samples are accurately
mapped between cameras.
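The CBTF described above maps brightness values between cameras by matching cumulative histograms rather than mean responses. A sketch of that inverse-CDF matching on synthetic pixels (a simple +40 brightness shift stands in for a real inter-camera illumination change):

```python
import numpy as np

def cbtf(pixels_a, pixels_b, levels=256):
    # Cumulative Brightness Transfer Function (sketch): accumulate brightness
    # histograms over ALL training pixels of a channel in each camera, then
    # map each level in camera A to the level in camera B with the same
    # cumulative frequency. The cumulative sums retain the contribution of
    # under-represented brightness values, unlike a mean-based BTF.
    ha, _ = np.histogram(pixels_a, bins=levels, range=(0, levels))
    hb, _ = np.histogram(pixels_b, bins=levels, range=(0, levels))
    ca = np.cumsum(ha) / ha.sum()
    cb = np.cumsum(hb) / hb.sum()
    return np.searchsorted(cb, ca)   # lookup table: level in A -> level in B

# Hypothetical training pixels: camera B is ~40 levels brighter than camera A.
rng = np.random.default_rng(2)
a = rng.integers(40, 160, 5000)
b = a + 40                           # the "true" inter-camera change
lut = cbtf(a, b)
print(lut[100])                      # ~140: the +40 shift is recovered
```

Applying `lut` per colour channel to a detection from camera A normalises its appearance before matching against camera B.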
Secondly, an extension is proposed to the CBTF framework that addresses the issue of changing
lighting conditions within a single camera. As the CBTF requires manually labelled training
samples it is limited to static lighting conditions and is less effective if the lighting changes. This
Adaptive CBTF (A-CBTF) differs from previous approaches that either do not consider lighting
change over time, or rely on camera transition time information to update. By utilising contextual
information drawn from the background in each camera view, an estimation of the lighting
change within a single camera can be made. This background lighting model allows the mapping
of colour information back to the original training conditions, and thus removes the need for
retraining.
Thirdly, a novel reformulation of re-identification as a ranking problem is proposed. Previous
methods use a score based on a direct distance measure between feature sets to form a correct/incorrect
match result. Rather than offering an operator a single outcome, the ranking paradigm is to give
the operator a ranked list of possible matches and allow them to make the final decision. By utilising
a Support Vector Machine (SVM) ranking method, a weighting on the appearance features
can be learned that capitalises on the fact that not all image features are equally important to
re-identification. Additionally, an Ensemble-RankSVM is proposed to address scalability issues
by separating the training samples into smaller subsets and boosting the trained models.
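The RankSVM formulation above learns a feature weighting from pairwise constraints: a correct match's difference vector should score lower than an incorrect one's. A subgradient sketch on toy data (this illustrates the idea only, not the thesis's Ensemble-RankSVM):

```python
import numpy as np

def rank_svm(pos_diffs, neg_diffs, epochs=200, lr=0.1, C=1.0):
    # Pairwise RankSVM sketch: learn a weighting w so that absolute-difference
    # vectors of correct matches score LOWER than those of wrong matches.
    # Subgradient descent on the hinge loss over all (neg, pos) pairs,
    # enforcing the margin w.(d_neg - d_pos) >= 1.
    pairs = neg_diffs[:, None, :] - pos_diffs[None, :, :]
    pairs = pairs.reshape(-1, pos_diffs.shape[1])
    w = np.zeros(pairs.shape[1])
    for _ in range(epochs):
        margins = pairs @ w
        grad = w - C * pairs[margins < 1].sum(axis=0) / len(pairs)
        w -= lr * grad
    return w

# Toy data: feature 0 is discriminative (small difference for true matches,
# large for false ones); feature 1 is pure noise. The learned weighting
# should concentrate on feature 0.
rng = np.random.default_rng(3)
pos = np.abs(rng.normal(0.1, 0.05, (50, 2))); pos[:, 1] = rng.random(50)
neg = np.abs(rng.normal(0.9, 0.05, (50, 2))); neg[:, 1] = rng.random(50)
w = rank_svm(pos, neg)
print(w)   # weight on the discriminative feature dominates
```

The ensemble variant in the thesis trains such models on subsets of the pairs and boosts them, sidestepping the quadratic growth in pairwise constraints.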
Finally, the thesis looks at a practical application of the ranking paradigm in a real-world setting.
The system encompasses both the re-identification stage and the precursory extraction
and tracking stages to form an aid for CCTV operators. Segmentation and detection are combined
to extract relevant information from the video, while several matching
techniques are combined with temporal priors to form a more comprehensive overall matching
criterion.
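The combination of matching scores with temporal priors mentioned above can be sketched as multiplying an appearance similarity by a transit-time prior (the Gaussian prior and its parameters are invented for illustration):

```python
import numpy as np

def temporal_prior(dt, mean=30.0, sigma=10.0):
    # Illustrative Gaussian prior over inter-camera transit time (seconds):
    # how plausible it is that a person re-appears dt seconds after leaving
    # the previous camera view.
    return np.exp(-0.5 * ((dt - mean) / sigma) ** 2)

def fused_score(appearance_sim, dt):
    # Weight appearance similarity by the transit-time prior, so candidates
    # seen at implausible times are down-weighted even if they look similar.
    return appearance_sim * temporal_prior(dt)

plausible = fused_score(0.8, dt=28)     # similar AND seen at a likely time
implausible = fused_score(0.8, dt=300)  # similar but transit time implausible
print(plausible, implausible)
```

Ranking candidates by the fused score rather than appearance alone prunes visually similar people who could not physically have made the camera-to-camera transition.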
The effectiveness of the proposed approaches is tested on datasets obtained from a variety
of challenging environments including offices, apartment buildings, airports and outdoor public
spaces.