10 research outputs found

    Color image registration under illumination changes

    The estimation of parametric global motion has received significant attention during the last two decades, but despite the great efforts invested, there are still open issues. One of the most important is the ability to recover large deformations between images in the presence of illumination changes while keeping the estimates accurate. Illumination changes in color images are another important open issue. In this paper, a generalized least squares-based motion estimator is combined with a color image model to allow accurate estimation of the global motion between two color images in the presence of both large geometric transformations and illumination changes. Experiments using challenging images show that the presented technique is feasible and provides accurate estimates of the motion and illumination parameters.
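The illumination side of such a dynamic image model can be illustrated in isolation. The sketch below is a toy illustration only: it fits a global gain/bias illumination model between two already-registered images by ordinary least squares, whereas the paper estimates illumination jointly with the geometric motion parameters. The function name `estimate_gain_bias` is ours, not the paper's.

```python
import numpy as np

def estimate_gain_bias(i_ref, i_tgt):
    """Least-squares fit of i_tgt ~ a * i_ref + b (global gain/bias model)."""
    x = i_ref.ravel().astype(float)
    y = i_tgt.ravel().astype(float)
    A = np.column_stack([x, np.ones_like(x)])   # design matrix [intensity, 1]
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

# Synthetic pair: same scene, illumination scaled by 1.3 and offset by 12.
rng = np.random.default_rng(0)
ref = rng.uniform(0, 255, (32, 32))
tgt = 1.3 * ref + 12.0
a, b = estimate_gain_bias(ref, tgt)
```

With noiseless synthetic data the recovered gain and bias match the generating values exactly; on real image pairs the fit would of course be approximate.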

    Real-time facial expression recognition with illumination-corrected image sequences

    We present a real-time, user-independent computer vision system that processes a sequence of images of a front-facing human face and recognizes a set of facial expressions at 30 fps. We track the face using an efficient appearance-based face tracker and model changes in illumination with a user-independent appearance-based model. In our approach to facial expression classification, the image of a face is represented by a low-dimensional vector that results from projecting the illumination-corrected image onto a low-dimensional expression manifold. The experiments conducted show that the system is able to recognize facial expressions in image sequences with large facial motion and illumination changes.
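The projection step can be sketched with a linear (PCA) subspace standing in for the expression manifold; the abstract does not specify how the manifold is learned, so this is an assumed setup, with names of our choosing.

```python
import numpy as np

def fit_expression_manifold(face_vectors, dim=3):
    """Learn a low-dimensional linear subspace from flattened face images
    via PCA (SVD of the mean-centered data)."""
    mean = face_vectors.mean(axis=0)
    centered = face_vectors - mean
    # Right singular vectors span the principal subspace.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:dim]

def project(face_vector, mean, basis):
    """Map one illumination-corrected face image to manifold coordinates."""
    return basis @ (face_vector - mean)

# 50 synthetic "faces", each a flattened 64x64 image.
rng = np.random.default_rng(1)
faces = rng.normal(size=(50, 64 * 64))
mean, basis = fit_expression_manifold(faces, dim=3)
coords = project(faces[0], mean, basis)   # 3-dimensional representation
```

A classifier would then operate on `coords` rather than on the raw image, which is what makes real-time recognition feasible.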

    Generalized least squares-based parametric motion estimation and segmentation

    Motion analysis is one of the most important fields of computer vision, because the real world is in continuous motion and we can clearly obtain much more information from moving scenes than from static ones. This thesis focuses mainly on developing motion estimation algorithms for application to image registration and motion segmentation problems. One of the main objectives of this work is to develop a highly accurate image registration technique that is tolerant to outliers and able to operate even in the presence of large deformations such as translations, rotations, scale changes, and global as well as spatially non-uniform illumination changes. Another objective of this thesis is to address motion estimation and segmentation in sequences of two images quasi-simultaneously and without a priori knowledge of the number of motion models present. The experiments shown in this work demonstrate that the algorithms proposed in this thesis obtain highly accurate results.
    This thesis proposes several techniques related to the motion estimation problem. In particular, it deals with global motion estimation for image registration and with motion segmentation. In the first case, we suppose that the majority of the pixels of the image follow the same motion model, although the possibility of a large number of outliers is also considered. In the motion segmentation problem, the presence of more than one motion model is considered. In both cases, sequences of two consecutive grey-level images are used. A new generalized least squares-based motion estimator is proposed. The proposed formulation of the motion estimation problem provides an additional constraint that helps to match the pixels using image gradient information.
That is achieved through the use of a weight for each observation: high weights for observations considered inliers and low weights for those considered outliers. To avoid falling into a local minimum, the proposed motion estimator uses a feature-based (SIFT-based) method to obtain good initial motion parameters, so it can deal with large motions such as translations, rotations, scale changes, viewpoint changes, etc. The accuracy of our approach has been tested on challenging real images using both affine and projective motion models. Two motion estimation techniques that use M-estimators to deal with outliers within an iteratively reweighted least squares strategy were selected to compare against our approach. The results obtained show that the proposed motion estimator is as accurate as M-estimator-based techniques, and even more accurate in most cases. The problem of accurately estimating motion under non-uniform illumination changes is also considered. A modification of the proposed global motion estimator is presented to deal with this kind of illumination change. In particular, a dynamic image model in which the illumination factors are functions of image location replaces the brightness constancy assumption, allowing for a more general and accurate image model. Experiments using challenging images show that the combination of both techniques is feasible and provides accurate estimates of the motion parameters even in the presence of strong illumination changes between the images. The last part of the thesis deals with the joint motion estimation and segmentation problem.
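The per-observation weighting idea can be seen in miniature with an iteratively reweighted least squares fit. The sketch below uses a generic IRLS scheme with Huber weights, not the thesis's generalized least squares formulation; it fits a line to data containing gross outliers, and the weights it converges to are near one for inliers and near zero for outliers.

```python
import numpy as np

def irls(A, y, iters=20, c=1.345):
    """Iteratively reweighted least squares with Huber weights:
    inliers keep weight ~1, large residuals are down-weighted."""
    w = np.ones(len(y))
    theta = np.zeros(A.shape[1])
    for _ in range(iters):
        sw = np.sqrt(w)
        theta, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
        r = y - A @ theta
        s = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / u)            # Huber weight function
    return theta, w

# Line y = 2x + 0.5, with the first 10 of 100 points grossly corrupted.
rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 100)
y = 2.0 * x + 0.5
y[:10] += 20.0
A = np.column_stack([x, np.ones_like(x)])
theta, w = irls(A, y)
```

After convergence `theta` is close to the true `(2.0, 0.5)` and `w[:10]` is near zero, which is exactly the inlier/outlier behavior described above.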
The proposed algorithm uses temporal information, through the proposed generalized least squares motion estimation process, and spatial information, through an iterative region-growing algorithm that classifies regions of pixels into the different motion models present in the sequence. In addition, it can extract the different moving regions of the scene while estimating their motion quasi-simultaneously and without a priori information about the number of moving objects in the scene. The performance of the algorithm is tested on synthetic and real images with multiple objects undergoing different types of motion.
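The classification step at the heart of such a segmentation can be reduced to a toy form: given candidate parametric motion models, assign each pixel to the model that best predicts its observed displacement. This is our simplification; the thesis grows regions iteratively rather than labeling pixels independently as done here.

```python
import numpy as np

def label_by_motion_residual(points, flows, models):
    """Assign each point (with observed flow) to the candidate affine
    motion model (A, t) that best predicts its new position."""
    residuals = []
    for A, t in models:
        pred = points @ A.T + t                       # affine prediction
        residuals.append(np.linalg.norm(points + flows - pred, axis=1))
    return np.argmin(np.stack(residuals), axis=0)

# Two candidate motions: translation by (+2, 0) vs. a static background.
pts = np.array([[0., 0.], [1., 0.], [0., 1.], [5., 5.]])
flow = np.array([[2., 0.], [2., 0.], [2., 0.], [0., 0.]])
models = [(np.eye(2), np.array([2., 0.])),            # model 0: moving
          (np.eye(2), np.zeros(2))]                   # model 1: static
labels = label_by_motion_residual(pts, flow, models)
```

The first three points follow the translation and get label 0; the last, static point gets label 1. A region-growing scheme adds spatial coherence on top of this purely residual-based assignment.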

    Robust recognition of facial expressions on noise degraded facial images

    Magister Scientiae - MSc. We investigate the use of noise-degraded facial images in the application of facial expression recognition. In particular, we trained Gabor+SVM classifiers to recognize facial expression images corrupted with various types of noise. We applied Gaussian noise, Poisson noise, varying levels of salt-and-pepper noise, and speckle noise to noiseless facial images. Classifiers were first trained on images without noise and then tested on images with noise. Next, the classifiers were trained using images with noise and then tested both on images that had noise and on images that were noiseless. Finally, classifiers were tested on images while the level of salt-and-pepper noise in the test set was increased. Our results reflected distinct degradation of recognition accuracy. We also discovered that certain types of noise, particularly Gaussian and Poisson noise, boost recognition rates to levels greater than those achieved with normal, noiseless images. We attribute this effect to the Gaussian envelope component of the Gabor filters being sympathetic to Gaussian-like noise of variance similar to that of the filters. Finally, using linear regression, we fitted a mathematical model to this degradation and used it to suggest how recognition rates would degrade further should more noise be added to the images.
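The noise-corruption step of such experiments is easy to reproduce. The sketch below is an assumed setup rather than the authors' code: it adds salt-and-pepper noise at a given amount to a grayscale image; Gaussian, Poisson, and speckle noise could be injected analogously before training and testing the classifiers.

```python
import numpy as np

def add_salt_pepper(img, amount, rng):
    """Corrupt a grayscale image: `amount` fraction of pixels are set
    to 0 (pepper) or 255 (salt), chosen at random."""
    out = img.copy()
    mask = rng.random(img.shape) < amount     # pixels to corrupt
    out[mask] = rng.choice([0, 255], size=int(mask.sum()))
    return out

rng = np.random.default_rng(3)
face = np.full((48, 48), 128, dtype=np.uint8)   # stand-in "face" image
noisy = add_salt_pepper(face, amount=0.1, rng=rng)
```

Sweeping `amount` upward reproduces the increasing-noise test-set condition described in the abstract.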

    Integrating Motion and Illumination Models for 3D Tracking (in Computer Vision for Interactive and Intelligent Environments)

    One of the persistent challenges in computer vision has been tracking objects under varying lighting conditions. In this paper we present a method for estimating the 3D motion of a rigid object from a monocular video sequence under arbitrary changes in the illumination conditions under which the video was captured. This is achieved by alternately estimating motion and illumination parameters using a generative model that integrates the effects of motion, illumination, and structure within a unified mathematical framework. The motion is represented in terms of translation and rotation of the object centroid, and the illumination is represented using a spherical harmonics linear basis. The method does not assume any model for the variation of the illumination conditions: lighting can change slowly or drastically. For the multi-camera tracking scenario, we propose a new photometric constraint that is valid over the overlapping field of view between two cameras. This is similar in nature to the well-known epipolar constraint, except that it relates the photometric parameters, and it can provide an additional constraint for illumination-invariant multi-camera tracking. We demonstrate the effectiveness of our tracking algorithm on single- and multi-camera video sequences under severe changes of lighting conditions.
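The alternating structure of the estimation can be shown on a deliberately tiny 1D analogue. In the sketch below, which is ours rather than the paper's, a scalar gain stands in for the spherical-harmonics illumination basis and an integer circular shift stands in for 3D rigid motion; the loop alternates a motion step at fixed illumination with a closed-form illumination step at fixed motion.

```python
import numpy as np

def alternate_motion_illum(ref, obs, iters=5):
    """Alternately estimate an integer shift (motion) and a scalar gain
    (illumination) such that obs ~ gain * roll(ref, shift)."""
    gain, shift = 1.0, 0
    for _ in range(iters):
        # Motion step: best circular shift by exhaustive search at fixed gain.
        errs = [np.sum((obs - gain * np.roll(ref, s)) ** 2)
                for s in range(len(ref))]
        shift = int(np.argmin(errs))
        # Illumination step: closed-form least-squares gain at fixed shift.
        warped = np.roll(ref, shift)
        gain = float(obs @ warped / (warped @ warped))
    return shift, gain

# A Gaussian bump, shifted by 9 samples and dimmed to 70% brightness.
x = np.arange(64)
ref = np.exp(-((x - 20.0) / 4.0) ** 2)
obs = 0.7 * np.roll(ref, 9)
shift, gain = alternate_motion_illum(ref, obs)
```

The alternation recovers both the shift (9) and the gain (0.7); the paper's generative model plays the same role for full 3D motion and a full illumination basis.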