Target recognitions in multiple camera CCTV using colour constancy
People tracking using colour features in crowded scenes through a CCTV network has been a popular yet very difficult topic in computer vision, mainly because of the difficulty of acquiring intrinsic signatures of targets from a single view of the scene. Many factors, such as variable illumination conditions and viewing angles, induce illusory modification of the intrinsic signatures of targets. The objective of this paper is to verify whether a colour constancy (CC) approach really helps people tracking in a CCTV network system. We have tested a number of CC algorithms together with various colour descriptors to assess the efficiency of people recognition on the real multi-camera i-LIDS data set via Receiver Operating Characteristic (ROC) analysis. It is found that when CC is applied together with some form of colour restoration mechanism, such as colour transfer, the recognition performance can be improved by at least a factor of two. An elementary luminance-based CC coupled with a pixel-based colour transfer algorithm, together with experimental results, is reported in the present paper.
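The luminance-based colour constancy and pixel-based colour transfer the abstract mentions can be sketched roughly as follows. This is a minimal illustration, not the paper's exact method: grey-world normalisation stands in for the luminance-based CC, and per-channel mean/std matching (Reinhard-style) stands in for the colour transfer; the function names are our own.

```python
import numpy as np

def grey_world(img):
    # Grey-world colour constancy: scale each channel so its mean
    # equals the global mean intensity, discounting the illuminant.
    means = img.reshape(-1, 3).mean(axis=0)
    gain = means.mean() / means
    return np.clip(img * gain, 0, 255)

def colour_transfer(src, ref):
    # Pixel-wise statistics transfer: match the per-channel mean and
    # standard deviation of src to those of ref (a stand-in for the
    # paper's pixel-based colour transfer, whose details it does not give).
    s = src.reshape(-1, 3)
    r = ref.reshape(-1, 3)
    out = (s - s.mean(0)) / (s.std(0) + 1e-6) * r.std(0) + r.mean(0)
    return np.clip(out.reshape(src.shape), 0, 255)
```

Applying `grey_world` to each camera view and then `colour_transfer` toward a reference view would approximate the "CC plus colour restoration" combination the paper evaluates.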
GLOBAL CHANGE REACTIVE BACKGROUND SUBTRACTION
Background subtraction is the technique of segmenting moving foreground objects from stationary or dynamic background scenes. It is a critical step in many computer vision applications, including video surveillance, tracking, and gesture recognition. This thesis addresses the challenges that sudden illumination changes in an indoor environment pose to background subtraction systems. Most existing techniques adapt to gradual illumination changes but fail to cope with sudden ones. Here, we introduce a global-change-reactive background subtraction that models these changes as a regression function of the spatial image coordinates. The regression model is learned from highly probable background regions, and the background model is compensated for the illumination changes using the estimated model parameters. Experiments were performed in an indoor environment to show the effectiveness of our approach in modelling sudden illumination changes with a higher-order regression polynomial. Results for non-linear SVM regression are also presented to show the robustness of our regression model.
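The core idea, fitting a spatial polynomial to the illumination change over probable background pixels and then compensating the background model, can be sketched like this. It is a simplified reading of the thesis: the difference-based target, the mask convention, and the function names are our assumptions.

```python
import numpy as np

def fit_illumination(bg, cur, mask, order=2):
    # Model the sudden illumination change as a polynomial regression over
    # spatial coordinates, learned only from pixels the mask marks as
    # highly probable background.
    ys, xs = np.nonzero(mask)
    # Design matrix of polynomial terms x^i * y^j with i + j <= order.
    A = np.stack([xs**i * ys**j
                  for i in range(order + 1)
                  for j in range(order + 1 - i)], axis=1).astype(float)
    target = (cur - bg)[ys, xs]  # per-pixel illumination offset
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return coef

def compensate(bg, coef, order=2):
    # Evaluate the fitted surface over the whole frame and add it to the
    # background model before the usual subtraction step.
    h, w = bg.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.stack([(xs**i * ys**j).ravel()
                  for i in range(order + 1)
                  for j in range(order + 1 - i)], axis=1).astype(float)
    return bg + (A @ coef).reshape(h, w)
```

Subtracting `compensate(bg, coef)` from the current frame, instead of the raw background model, is what makes the scheme reactive to a global change.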
Linear color correction for multiple illumination changes and non-overlapping cameras
Many image processing methods, such as techniques for people re-identification, assume photometric constancy between different images. This study addresses the correction of photometric variations, using changes in background areas to correct foreground areas. The authors assume a multiple-light-source model in which all light sources can have different colours and change over time. In training mode, the authors learn per-location relations between foreground and background colour intensities. In correction mode, they apply a double linear correction model based on the learned relations; this includes a dynamic local illumination correction mapping as well as an inter-camera mapping. The authors evaluate their illumination correction by computing the similarity between two images using the earth mover's distance, and compare the results with a representative auto-exposure algorithm from the recent literature and a colour correction algorithm based on inverse-intensity chromaticity. Especially in complex scenarios, the authors' method outperforms these state-of-the-art algorithms.
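The background-driven correction idea can be illustrated with a minimal sketch: fit a per-channel linear map from the background colours of the observed frame to those of a reference frame, then apply it to the whole image. This is a deliberate simplification of the paper's per-location double mapping (one global linear model per channel), and all names here are ours.

```python
import numpy as np

def learn_linear_correction(obs_bg, ref_bg):
    # Fit, per colour channel, a linear map ref = a * obs + b from the
    # background pixels of the observed frame to the background pixels
    # of the reference frame.
    params = []
    for c in range(3):
        a, b = np.polyfit(obs_bg[..., c].ravel(), ref_bg[..., c].ravel(), 1)
        params.append((a, b))
    return params

def apply_correction(img, params):
    # Apply the learned per-channel linear map to every pixel,
    # foreground included.
    out = np.empty_like(img, dtype=float)
    for c, (a, b) in enumerate(params):
        out[..., c] = a * img[..., c] + b
    return out
```

Chaining two such maps, one per camera toward a common reference, would mimic the paper's combination of a local illumination correction with an inter-camera mapping.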
Polar Fusion Technique Analysis for Evaluating the Performances of Image Fusion of Thermal and Visual Images for Human Face Recognition
This paper presents a comparative study of two different methods, both based on fusion and polar transformation of visual and thermal images. The investigation addresses the challenges of face recognition, which include pose variations, changes in facial expression, partial occlusions, variations in illumination, rotation through different angles, and changes in scale. To overcome these obstacles, we have implemented and thoroughly examined two different fusion techniques through rigorous experimentation. In the first method, the log-polar transformation is applied to the images obtained by fusing the visual and thermal images; in the second method, fusion is applied to the log-polar transforms of the individual visual and thermal images. After this step, Principal Component Analysis (PCA) is applied to the fused images, in whichever form they were obtained, to reduce their dimension. Log-polar transformed images are capable of handling the complications introduced by scaling and rotation. The main objective of employing fusion is to produce a fused image that provides more detailed and reliable information, capable of overcoming the drawbacks present in the individual visual and thermal face images. Finally, the reduced fused images are classified using a multilayer perceptron neural network. The database used for the experiments conducted here is the Object Tracking and Classification Beyond Visible Spectrum (OTCBVS) benchmark database of thermal and visual face images. The second method has shown better performance, with a correct recognition rate of 95.71% (maximum) and 93.81% on average.
Comment: Proceedings of IEEE Workshop on Computational Intelligence in Biometrics and Identity Management (IEEE CIBIM 2011), Paris, France, April 11 - 15, 201
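The two pipelines the paper compares, fuse-then-transform versus transform-then-fuse, followed by PCA, can be sketched with plain numpy. This is an illustrative skeleton only: nearest-neighbour log-polar sampling and average fusion are our placeholder choices, not the paper's exact settings.

```python
import numpy as np

def log_polar(img, n_rho=32, n_theta=32):
    # Resample an image onto a log-polar grid about its centre: scaling
    # becomes a shift along rho, rotation a shift along theta.
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    rho = np.exp(np.linspace(0, np.log(np.hypot(cy, cx)), n_rho))
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    ys = np.clip((cy + rho[:, None] * np.sin(theta)).round().astype(int), 0, h - 1)
    xs = np.clip((cx + rho[:, None] * np.cos(theta)).round().astype(int), 0, w - 1)
    return img[ys, xs]

def fuse(visual, thermal):
    # Pixel-level average fusion (a placeholder; the paper's fusion
    # scheme may differ).
    return 0.5 * (visual + thermal)

def pca_reduce(X, k):
    # Project row-vectorised fused images onto the top-k principal
    # components before feeding them to the classifier.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

Method 1 is `log_polar(fuse(v, t))` and method 2 is `fuse(log_polar(v), log_polar(t))`; with linear operations like these the two coincide, so the paper's observed difference comes from the specific fusion rule used.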
Moving object detection using registration for a moving camera platform
In this research work, an accurate and fast moving-object detector that can detect all moving objects from an Unmanned Aerial Vehicle (UAV) platform is proposed. Because of the distance of the UAV from the objects and the movement of the platform, object detection is a challenging task. To achieve the best results with low error, the camera motion is estimated first: corners are detected using the Rosten and Drummond technique, and the camera motion is then compensated using these corners. After motion compensation, all moving objects are detected and extracted by subtracting the registered frame from the reference frame.
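The registration-then-differencing pipeline can be sketched in two steps. The corner correspondences would come from the Rosten and Drummond (FAST) detector plus matching, which we treat as given here; the global affine model, the threshold, and the function names are our assumptions.

```python
import numpy as np

def estimate_affine(pts_ref, pts_cur):
    # Least-squares global affine motion from corner correspondences,
    # mapping current-frame points onto the reference frame.
    A = np.hstack([pts_cur, np.ones((len(pts_cur), 1))])
    M, *_ = np.linalg.lstsq(A, pts_ref, rcond=None)
    return M.T  # 2x3 affine matrix: [x_ref, y_ref]^T = M @ [x_cur, y_cur, 1]^T

def detect_moving(ref, cur_registered, thresh=25):
    # After registration compensates the platform motion, moving objects
    # are whatever the frame difference leaves above the threshold.
    return np.abs(cur_registered.astype(float) - ref.astype(float)) > thresh
```

In a full pipeline, the estimated affine matrix would warp the current frame onto the reference frame before `detect_moving` is applied.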