People detection and re-identification for multi surveillance cameras
Re-identifying people in a network of non-overlapping cameras requires people to be accurately detected and tracked in order to build a strong visual signature of their appearance. Traditional surveillance cameras do not provide image resolution high enough for iris recognition algorithms, and state-of-the-art face recognition cannot easily be applied to surveillance videos, as people need to face the camera at close range. The different lighting environments of the camera scenes and the strong illumination variability occurring as people walk through a scene induce great variability in their appearance. In addition, people's images occlude each other on the image plane, making people detection difficult to achieve. We propose novel simplified Local Binary Pattern features to detect people, heads and faces. A Mean Riemannian Covariance Grid (MRCG) is used to model the appearance of tracked people and obtain a highly discriminative human signature. The methods are evaluated and compared with state-of-the-art algorithms. We have created a new dataset from a network of 2 cameras showing the usefulness of our system to detect, track and re-identify people using appearance and face features.
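The MRCG signature builds on region covariance descriptors. As an illustration only (the abstract does not give the exact feature set), a minimal covariance descriptor for a grayscale patch might use per-pixel position, intensity and gradient magnitudes:

```python
import numpy as np

def region_covariance(gray):
    """Covariance descriptor of a grayscale patch: each pixel is
    described by [x, y, intensity, |dI/dx|, |dI/dy|], and the patch
    is summarised by the 5x5 covariance matrix of these features."""
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(gray.astype(float))  # image gradients
    feats = np.stack([xs.ravel().astype(float),
                      ys.ravel().astype(float),
                      gray.ravel().astype(float),
                      np.abs(gx).ravel(), np.abs(gy).ravel()])
    return np.cov(feats)  # 5x5 symmetric, positive semi-definite
```

In the MRCG approach, such matrices computed per grid cell over a track live on a Riemannian manifold and are averaged there; the sketch above covers only the per-cell descriptor.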
Human Distance Estimation Using Quadcopter For Surveillance Purpose
Nowadays, quadcopters are commonly used. Quadcopters are unmanned aerial vehicles that use four propellers to provide lift, allowing them to fly and hover above the ground, and they have become common commercial items in everyday life. Some quadcopters are designed for 3D or 2D mapping of an area, for taking videos, or simply for entertainment. A quadcopter is a versatile platform that can be adapted to many tasks; for example, it can be used for security purposes to help decrease a country's crime rate. The objective of this study is to design and develop a quadcopter with an image processing system able to measure the distance of a human from the drone itself. The quadcopter is designed to be small in size and carries a mini computer, a Raspberry Pi, to run the algorithm that calculates the distance to the human by applying image processing techniques to the feed from the camera mounted on the drone. The YOLO human detection algorithm and the OpenCV library are chosen to detect humans and calculate their distance from the quadcopter. The results show that the system is quite limited by the capabilities of the hardware: it achieves an accuracy of more than 90 percent when the human is standing within a certain range, and the accuracy of both the distance sensing and the human recognition system is affected by the hardware's limitations.
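The abstract does not state how distance is computed from the detections; a common approach with a single camera is the pinhole model applied to the height of the YOLO bounding box. A minimal sketch, where `person_height_m` and `focal_length_px` are assumed calibration values, not figures from the paper:

```python
def distance_from_bbox(bbox_height_px, person_height_m=1.7,
                       focal_length_px=800.0):
    """Pinhole-model range estimate: an object of known real height
    projects to an image height inversely proportional to distance,
    so distance = real_height * focal_length / image_height.
    person_height_m and focal_length_px are assumed values."""
    return person_height_m * focal_length_px / bbox_height_px
```

Under these assumed values, a detection 400 pixels tall would give 1.7 * 800 / 400 = 3.4 m; accuracy degrades as the box shrinks, which is consistent with the range limitation the abstract reports.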
Review of Person Re-identification Techniques
Person re-identification across different surveillance cameras with disjoint
fields of view has become one of the most interesting and challenging subjects
in the area of intelligent video surveillance. Although several methods have
been developed and proposed, certain limitations and unresolved issues remain.
In all of the existing re-identification approaches, feature vectors are
extracted from segmented still images or video frames. Different similarity or
dissimilarity measures have been applied to these vectors. Some methods have
used simple constant metrics, whereas others have utilised models to obtain
optimised metrics. Some have created models based on local colour or texture
information, and others have built models based on the gait of people. In
general, the main objective of all these approaches is to achieve a
higher accuracy rate and lower computational costs. This study summarises
several developments in recent literature and discusses the various available
methods used in person re-identification. Specifically, their advantages and
disadvantages are mentioned and compared. Comment: Published 201
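As a concrete illustration of the distance-based matching the review surveys, here is a minimal sketch that ranks gallery feature vectors against a probe under two common metrics (the metrics are illustrative, not tied to any one surveyed method):

```python
import math

def rank_gallery(probe, gallery, metric="euclidean"):
    """Rank gallery feature vectors by (dis)similarity to a probe
    vector; returns gallery indices, best match first."""
    if metric == "euclidean":
        dist = lambda g: math.dist(probe, g)
    elif metric == "cosine":
        def dist(g):
            dot = sum(p * x for p, x in zip(probe, g))
            return 1.0 - dot / (math.hypot(*probe) * math.hypot(*g))
    else:
        raise ValueError(f"unknown metric: {metric}")
    return sorted(range(len(gallery)), key=lambda i: dist(gallery[i]))
```

The "optimised metrics" the review mentions replace these fixed formulas with learned (e.g. Mahalanobis-style) distances, but the ranking step stays the same.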
Object detection, recognition and re-identification in video footage
There has been a significant number of security concerns in recent times; as a result, security cameras have been installed to monitor activities and to prevent crime in most public places. The footage is analysed either through video analytics or through forensic analysis based on human observation. To this end, within the research context of this thesis, a proactive machine-vision-based military recognition system has been developed to help monitor activities in a military environment. The proposed object detection, recognition and re-identification systems are presented in this thesis.
A novel technique for military personnel recognition is presented in this thesis. Initially, the detected camouflaged personnel are segmented using the GrabCut segmentation algorithm. Since a camouflaged person's uniform generally appears similar at both the top and the bottom of the body, an image patch is first extracted from the segmented foreground image and used as the region of interest. Subsequently, colour and texture features are extracted from each patch and used for classification. A second approach to personnel recognition is proposed through the recognition of the badge on the cap of a military person. A feature matching metric based on Speeded-Up Robust Features (SURF) extracted from the badge on a person's cap enables recognition of the person's arm of service.
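SURF extraction itself requires a dedicated library (e.g. OpenCV's contrib modules), but the matching stage described above can be sketched independently. A minimal nearest-neighbour matcher with Lowe's ratio test, assuming descriptors are given as fixed-length vectors:

```python
import math

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Nearest-neighbour matching with Lowe's ratio test: accept a
    match only when the best distance is clearly smaller than the
    second best, which suppresses ambiguous correspondences."""
    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted((math.dist(d, e), j) for j, e in enumerate(desc_b))
        if len(ranked) > 1 and ranked[0][0] < ratio * ranked[1][0]:
            matches.append((i, ranked[0][1]))
    return matches
```

A badge would then be recognised as the reference badge image collecting the most ratio-test matches; the 0.75 threshold is a conventional default, not a value from the thesis.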
A state-of-the-art technique for recognising vehicle types irrespective of their view angle is also presented in this thesis. Vehicles are initially detected and segmented using a Gaussian Mixture Model (GMM) based foreground/background segmentation algorithm. A Canny Edge Detection (CED) stage followed by morphological operations is used as a pre-processing stage to enhance foreground vehicle detection and segmentation. Subsequently, Region, Histogram of Oriented Gradients (HOG) and Local Binary Pattern (LBP) features are extracted from the refined foreground vehicle object and used as features for vehicle type recognition. Two different datasets with varying front/rear and angled views are used, and combined, for testing the proposed technique.
For night-time video analytics and forensics, the thesis presents a novel approach to pedestrian detection and vehicle type recognition. A novel feature acquisition technique, named CENTROG, is proposed for these tasks. Thermal images containing pedestrians and vehicles are used to analyse the performance of the proposed algorithms. The video is initially segmented using a GMM-based foreground object segmentation algorithm, and a CED-based pre-processing step is used to enhance segmentation accuracy prior to applying Census Transforms for initial feature extraction. HOG features are then extracted from the Census-transformed images and used for the detection and recognition, respectively, of human and vehicular objects in thermal images.
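The Census Transform step at the heart of CENTROG (and of the CENTRIST baseline) can be sketched as follows: each interior pixel is replaced by an 8-bit code whose bits record which of its 3x3 neighbours are smaller than it (a simplified sketch; the thesis's exact variant may differ):

```python
import numpy as np

def census_transform(gray):
    """3x3 Census Transform: each interior pixel becomes an 8-bit
    code whose bits record which neighbours are smaller than the
    centre pixel."""
    c = gray[1:-1, 1:-1]
    codes = np.zeros(c.shape, dtype=np.uint8)
    bit = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue  # skip the centre pixel itself
            neigh = gray[1 + dy:gray.shape[0] - 1 + dy,
                         1 + dx:gray.shape[1] - 1 + dx]
            codes |= (neigh < c).astype(np.uint8) << bit
            bit += 1
    return codes
```

In CENTROG, HOG features are then computed over this code image rather than over the raw thermal intensities, which is what distinguishes it from plain HOG.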
Finally, a novel technique for people re-identification is proposed in this thesis, based on low-level colour features and mid-level attributes. The low-level colour histogram bin values are normalised to the range between 0 and 1. A publicly available dataset (VIPeR) and a self-constructed dataset are used in the experiments, conducted with 7 clothing attributes and low-level colour histogram features. These 7 attributes are detected using an SVM classifier on features extracted from 5 different regions of a detected human object; the low-level colour features are extracted from the same regions. These 5 regions are obtained by human object segmentation and subsequent body-part subdivision. People are re-identified by computing the Euclidean distance between a probe and the gallery image sets. The experiments conducted using the SVM classifier and Euclidean distance show that the proposed techniques attain all of the aforementioned goals. The colour and texture features proposed for camouflaged military personnel recognition surpass state-of-the-art methods. Similarly, experiments show that combining features performs best when recognising vehicles in different views after initial training on multiple views. In the same vein, the proposed CENTROG technique performs better than the state-of-the-art CENTRIST technique for both pedestrian detection and vehicle type recognition at night-time using thermal images. Finally, we show that the proposed 7 mid-level attributes combined with the low-level features result in improved accuracy for people re-identification.
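The final re-identification step described above can be sketched as follows; the histogram normalisation and Euclidean matching follow the text, while the bin counts and the 0/1 attribute encoding are illustrative assumptions:

```python
import math

def person_signature(colour_hist, attributes):
    """Combine low-level colour histogram bins, min-max normalised
    into [0, 1] as in the text, with mid-level clothing attributes
    (encoded here as 0/1 flags, an illustrative assumption)."""
    lo, hi = min(colour_hist), max(colour_hist)
    span = (hi - lo) or 1.0  # guard against a constant histogram
    norm = [(v - lo) / span for v in colour_hist]
    return norm + [float(a) for a in attributes]

def match_distance(probe_sig, gallery_sig):
    """Euclidean distance used to rank gallery candidates."""
    return math.dist(probe_sig, gallery_sig)
```

A probe is assigned the identity of the gallery signature with the smallest distance; in practice one signature per body region would be built and the region distances combined.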
Resource-constrained re-identification in camera networks
In multi-camera surveillance, association of people detected in different camera views over
time, known as person re-identification, is a fundamental task. Re-identification is a challenging
problem because of changes in the appearance of people under varying camera conditions. Existing
approaches focus on improving the re-identification accuracy, while no specific effort has
yet been put into efficiently utilising the available resources that are normally limited in a camera
network, such as storage, computation and communication capabilities. In this thesis, we aim to
perform and improve the task of re-identification under constrained resources. More specifically,
we reduce the data needed to represent the appearance of an object through a proposed feature
selection method and a difference-vector representation method.
The proposed feature-selection method considers the computational cost of feature extraction
and the cost of storing the feature descriptor jointly with the feature’s re-identification performance
to select the most cost-effective and well-performing features. This selection allows us
to improve inter-camera re-identification while reducing storage and computation requirements
within each camera. The selected features are ranked in order of effectiveness, which enables
a further reduction by dropping the least effective features when application constraints require
it. We also reduce the communication overhead in the camera network by transferring
only a difference vector, obtained from the extracted features of an object and the reference
features within a camera, as an object representation for the association.
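The thesis does not spell out the selection criterion at this point, but a cost-aware selection of this kind can be sketched as a greedy ranking of features by performance per unit cost, alongside the difference-vector representation described above (all names and numbers below are illustrative assumptions, not the thesis's actual method):

```python
def select_features(candidates, budget):
    """Greedy cost-aware selection: rank candidate features by
    re-identification performance per unit cost (storage plus
    computation), then keep features while the budget allows.
    `candidates` maps feature name -> (performance, cost)."""
    ranked = sorted(candidates.items(),
                    key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
    chosen, spent = [], 0.0
    for name, (perf, cost) in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

def difference_vector(features, reference):
    """Representation sent over the network: only the difference
    between an object's features and the camera's reference."""
    return [f - r for f, r in zip(features, reference)]
```

The ranked output also supports the graceful degradation described above: dropping features from the tail of the ranking trades accuracy for resources in a controlled way.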
In order to reduce the number of possible matches per association, we group the objects appearing
within a defined time interval in un-calibrated camera pairs. Such a grouping improves
re-identification, since only those objects that appear within the same time interval in a camera
pair need to be associated. For temporal alignment of cameras, we exploit the differences
between the frame numbers of the detected objects in a camera pair. Finally, in contrast to
pairwise camera associations used in literature, we propose a many-to-one camera association
method for re-identification, where multiple cameras can be candidates for having generated the
previous detections of an object. We obtain camera-invariant matching scores from the scores
obtained using the pairwise re-identification approaches. These scores measure the chances of a
correct match between the objects detected in a group of cameras.
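The temporal grouping described above can be sketched as bucketing detections by frame number, with a per-camera offset standing in for the frame-number alignment between the pair (a simplified sketch; the interval length and how the offset is estimated are assumptions):

```python
def group_by_interval(detections, interval, offset=0):
    """Bucket detections (object_id, frame_number) from one camera
    into time intervals; `offset` shifts its frame numbers onto the
    paired camera's timeline."""
    groups = {}
    for obj, frame in detections:
        groups.setdefault((frame + offset) // interval, []).append(obj)
    return groups
```

Only objects that fall into the same interval key in both cameras are then considered for association, which shrinks the candidate set for each match.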
Experimental results on publicly available and in-lab multi-camera image and video datasets
show that the proposed methods successfully reduce storage, computation and communication
requirements while improving the re-identification rate compared to existing re-identification
approaches.