8 research outputs found

    Skin Colour Imaging that is Insensitive to Lighting Conditions

    Get PDF
    Previous models of human skin suggest that its colour is determined mostly by the concentration of melanin in the epidermal layer together with the concentration of hemoglobin in the dermal layer. The colour of facial skin also changes significantly with the light incident upon it. In this paper we propose a method of normalizing the skin tones of human faces that eliminates the effect of illumination, preserving the skin colour so that only variations related to melanin concentration remain. The method assumes that the illumination is reasonably well modelled as blackbody radiation.
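    The abstract does not spell out the normalization itself, but a common way to exploit the blackbody assumption is to work in log-chromaticity space, where a change of illuminant colour temperature moves a surface's coordinates along an approximately fixed direction; projecting onto the orthogonal axis leaves a value that varies mainly with the surface (here, melanin). The sketch below is only an illustration of that idea under these assumptions; the function name and the default direction vector are placeholders, and a real system would calibrate the direction for its camera.

    ```python
    import numpy as np

    def illumination_insensitive_value(rgb, illum_dir=(0.9, -0.44)):
        """Project log-chromaticities onto the axis orthogonal to the direction
        along which a change of blackbody colour temperature moves a surface.

        rgb       : (N, 3) array of linear (gamma-free) RGB skin pixels.
        illum_dir : 2-vector giving the illumination-variation direction in
                    log-chromaticity space; the default is a placeholder and
                    would need to be calibrated for a specific camera.
        """
        rgb = np.asarray(rgb, dtype=float) + 1e-6                 # avoid log(0)
        log_chroma = np.stack([np.log(rgb[:, 0] / rgb[:, 1]),     # log(R/G)
                               np.log(rgb[:, 2] / rgb[:, 1])],    # log(B/G)
                              axis=1)
        d = np.asarray(illum_dir, dtype=float)
        d /= np.linalg.norm(d)
        ortho = np.array([-d[1], d[0]])                           # orthogonal axis
        return log_chroma @ ortho                                 # 1-D invariant per pixel
    ```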

    Estimation of illuminants from color signals of illuminated objects

    Get PDF
    Color constancy is the ability of the human visual system to discount the effect of the illumination and to assign approximately constant color descriptions to objects. This ability has long been studied and is widely applied in areas such as color reproduction and machine vision, especially with the development of digital color processing. This thesis makes several improvements to illuminant estimation and computational color constancy based on the study and testing of existing algorithms. In recent years it has been noted that illuminant estimation based on gamut comparison is efficient and simple to implement, but although numerous investigations have been carried out in this field, some deficiencies remain. A large part of this thesis is therefore devoted to illuminant estimation through gamut comparison. Noting the importance of color lightness in gamut comparison, and in order to simplify three-dimensional gamut calculation, a new illuminant estimation method is proposed that compares gamuts at separate lightness levels. Maximum color separation is a color constancy method based on the assumption that the colors in a scene attain their largest gamut area under white illumination; this method is further derived and improved in the thesis to make it applicable and efficient. In addition, some intrinsic questions about gamut comparison methods are investigated, for example the relationship between the choice of color space and the use of gamuts or probability distributions. Color constancy methods based on spectral recovery are limited by the lack of an effective way to confine the range of object spectral reflectances. A new constraint on spectral reflectance is therefore proposed, based on the relative ratios of the parameters obtained from principal component analysis (PCA) decomposition, and is applied to illuminant detection methods as a metric on the recovered spectral reflectance. Because sensor sensitivities are important and vary widely, their influence on different kinds of illuminant estimation methods is also studied; the stability of each estimation method to incorrect sensor information is tested, suggesting a possible approach to illuminant estimation for images from unknown sources. Finally, with the development of multi-channel imaging, illuminant estimation for multi-channel images is investigated, covering both correlated color temperature (CCT) estimation and illuminant spectral recovery. All the improvements and newly proposed methods are tested and compared with the best-performing existing methods, on both synthetic data and real images; the comparison verifies the high efficiency and implementation simplicity of the proposed methods.
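    As a rough illustration of the "maximum colour separation" assumption described above (scene colours span their largest gamut under white light), the toy estimator below corrects the image with each candidate illuminant and keeps the one that maximises the chromaticity gamut area. Function and parameter names are placeholders; the thesis's actual derivation, its lightness-separated gamut comparison, and its PCA-based reflectance constraint are not reproduced here.

    ```python
    import numpy as np
    from scipy.spatial import ConvexHull

    def estimate_illuminant_max_separation(rgb, candidate_illuminants):
        """Toy 'maximum colour separation' estimator: keep the candidate
        illuminant whose diagonal correction maximises the chromaticity
        gamut area of the corrected image.

        rgb                  : (N, 3) array of linear RGB pixels.
        candidate_illuminants: iterable of length-3 RGB vectors of plausible
                               lights (placeholder set; a real system would
                               use a calibrated illuminant database).
        """
        rgb = np.asarray(rgb, dtype=float)
        best, best_area = None, -np.inf
        for illum in candidate_illuminants:
            corrected = rgb / np.asarray(illum, dtype=float)      # diagonal (von Kries) correction
            chroma = corrected / (corrected.sum(axis=1, keepdims=True) + 1e-9)
            try:
                area = ConvexHull(chroma[:, :2]).volume           # 2-D hull 'volume' is its area
            except Exception:                                     # degenerate gamut
                continue
            if area > best_area:
                best, best_area = illum, area
        return best
    ```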

    Outdoor computer vision and weed control

    Get PDF

    Image Processing Using Sensor Noise and Human Visual System Models

    Full text link
    Because digital images are subject to noise both in the device that captures them and in the human visual system (HVS) that observes them, it is important to use accurate models of noise and the HVS when designing image processing methods. In this thesis, CMOS image sensor noise is characterized, chromatic adaptation theories are reviewed, and new image processing algorithms that exploit these noise and HVS models are presented. First, a method for removing additive, multiplicative, and mixed noise from an image is developed. An image patch from an ideal image is modeled as a linear combination of image patches from the noisy image, and this model is fit to the image data in the total least squares (TLS) sense because TLS allows uncertainties in the measured data. The quality of the output images demonstrates the effectiveness of the TLS algorithms and their improvement over existing methods. Second, we develop a novel technique that systematically combines demosaicing and denoising into a single operation. We first design a filter that optimally estimates a pixel value from a noisy single-color image, and with additional constraints we show that the same filter coefficients are appropriate for demosaicing noisy sensor data. The proposed technique can combine many existing denoising algorithms with the demosaicing operation. The algorithm is tested with pseudo-random noise and with noisy raw sensor data from a real digital camera, and it suppresses CMOS image sensor noise while interpolating the missing pixel components more effectively than treating demosaicing and denoising independently. Third, white balance is the problem of adjusting color so that the digital camera output matches the scene observed by the photographer's eye. Most existing white-balance algorithms combine the von Kries coefficient law with an illuminant estimation technique, but the coefficient law has been shown to be an inaccurate model. We instead formulate the problem using induced opponent response theory, whose solution reduces to a single matrix multiplication. The experimental results verify that this approach yields more natural images than traditional methods, and the computational cost of the proposed method is virtually zero. Texas Instruments, Agilent Technologies, Center for Electronic Imaging System
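    To make the white-balance contrast concrete, the sketch below shows the structural difference between the von Kries coefficient law (a per-channel diagonal scaling) and a correction expressed as a single matrix multiplication, which is the form the abstract says its approach reduces to. The matrix M is a placeholder; deriving it from induced opponent response theory is not shown here, so this is only a structural sketch, not the thesis's method.

    ```python
    import numpy as np

    def von_kries_balance(rgb, illum_rgb):
        """Classical von Kries white balance: each channel is scaled by the
        inverse of the estimated illuminant (a purely diagonal correction)."""
        gains = 1.0 / np.asarray(illum_rgb, dtype=float)
        return np.asarray(rgb, dtype=float) * gains

    def matrix_balance(rgb, M):
        """A correction expressed as one full 3x3 matrix multiplication, the
        structure the abstract attributes to its approach.  M is a placeholder;
        computing it from induced opponent response theory is not shown here."""
        return np.asarray(rgb, dtype=float) @ np.asarray(M, dtype=float).T
    ```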

    Facial expression analysis by computer vision (Análisis de expresiones faciales mediante visión por computador)

    Full text link
    Facial expressions are an essential component of communication between human beings. It is therefore not surprising that quantifying facial expressions is a topic of interest in computer vision, given its usefulness for building advanced human-computer interfaces; it is also of interest in animation, an area where graphics techniques and vision converge. The face is a difficult object to analyse with computer vision techniques: it contains large regions with little texture, and its most expressive regions (eyes, eyebrows and mouth) undergo non-rigid deformations. Add to this the difficulty of modelling and predicting head motion, together with occlusions and changing illumination conditions, and it becomes clear why facial analysis by computer vision is a difficult problem that remains open today. In this thesis we develop a facial analysis system that finds and tracks the human face and quantifies its facial expressions. For tracking we use an architecture based on "Gradual Focus of Attention" (Enfoque Gradual de la Atención). This architecture consists of a set of trackers with different levels of precision and computational cost. The least precise tracker searches for the face randomly in the image, attending to regions whose colour resembles skin. Once a candidate region is found, the texture of the target person's face is used to locate the head more precisely. An appearance-based tracker, trained on the user's face, allows us to handle the non-rigid motion of the face while simultaneously estimating its overall rigid motion. The resulting system trades off computational cost against tracking precision, switching to a less precise, cheaper tracker when environmental conditions degrade and increasing precision when they improve. Finally, as a practical application, the estimated motion parameters are used to animate a 3D graphical model. To cope with illumination changes, a colour normalisation procedure is proposed based on a well-known colour constancy algorithm for static scenes, the Grey World algorithm, extended to image sequences. With this extension, objects can be tracked by their colour with good robustness to illumination changes. In addition, an efficient tracker based on grey-level differences (SSD) has been developed that can estimate the 3D position and orientation of a plane using a projective motion model; it is based on extending Hager and Belhumeur's Jacobian factorisation idea to the projective case. A procedure for selecting the most informative set of pixels in the tracking template has also been developed to further increase the tracker's performance. Finally, a Jacobian factorisation technique has been developed to solve the problem of tracking deformable objects. Using our algorithm it is possible to track the non-rigid motion of facial regions in real time. The resulting algorithm is interesting not only for its computational efficiency, but also because it is easier to train than the well-known Active Appearance Models (AAM).
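    As a minimal sketch of extending Grey World colour normalisation from static scenes to image sequences, the generator below estimates the illuminant per frame from the mean RGB and smooths it over time before normalising. The exponential smoothing is an illustrative assumption, not necessarily the extension developed in the thesis.

    ```python
    import numpy as np

    def grey_world_sequence(frames, alpha=0.9):
        """Grey World normalisation for an image sequence, with exponential
        smoothing of the per-frame illuminant estimate.  The smoothing is only
        an illustrative way to extend Grey World to video; the extension
        developed in the thesis may differ.

        frames : iterable of (H, W, 3) float image arrays.
        alpha  : smoothing factor in [0, 1); larger means slower adaptation.
        """
        illum = None
        for frame in frames:
            frame = np.asarray(frame, dtype=float)
            mean_rgb = frame.reshape(-1, 3).mean(axis=0)      # Grey World estimate
            illum = mean_rgb if illum is None else alpha * illum + (1.0 - alpha) * mean_rgb
            gains = illum.mean() / (illum + 1e-9)             # map the scene mean to grey
            yield np.clip(frame * gains, 0.0, None)
    ```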

    Investigations into colour constancy by bridging human and computer colour vision

    Get PDF
    PhD Thesis. The mechanism of colour constancy within the human visual system has long been of great interest to researchers in the psychophysical and image processing communities. With the maturation of colour imaging techniques for both scientific and artistic applications, the importance of accurate colour capture has steadily increased. Colour offers far more information to the viewer than grayscale imagery, supporting tasks ranging from object detection to estimating food ripeness and health, among many others. However, these tasks rely upon the colour constancy process to discount scene illumination. Psychophysical studies have attempted to uncover the inner workings of this mechanism, which would allow it to be reproduced algorithmically and would eventually lead to devices that capture and perceive colour in the same manner as a human viewer. The psychophysical and image processing communities have approached this challenge from opposite ends, and as such have developed very different and largely unconnected approaches. This thesis investigates studies and algorithms that bridge the two communities. Findings from psychophysical studies are first used as inspiration to improve an existing image enhancement algorithm, and the results are compared to state-of-the-art methods. Further knowledge of the human visual system is then used to develop a novel colour constancy approach, which attempts to mimic the colour constancy mechanism by using a physiological colour space and specific scene contents to estimate the illumination. The performance of the colour constancy mechanism within the visual system is then investigated across different scenes and across commonly and uncommonly encountered illuminations. Finally, the importance of bridging these two communities with a successful colour constancy method is further illustrated with a case study investigating the human visual perception of tomatoes as agricultural produce. EPSRC DTA: Institute of Neuroscience, Newcastle University
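    As a generic illustration of working in a physiological colour space, the sketch below converts CIE XYZ values to LMS cone responses with a Hunt-Pointer-Estevez-style matrix and applies a von Kries-style per-cone rescaling towards a reference illuminant. This is a standard textbook construction under those assumptions; the colour space and scene-content-based illuminant estimation developed in the thesis are not reproduced here.

    ```python
    import numpy as np

    # Hunt-Pointer-Estevez matrix (normalised to D65) mapping CIE XYZ to LMS cone
    # responses -- one common choice of physiological colour space.
    XYZ_TO_LMS = np.array([[ 0.4002, 0.7076, -0.0808],
                           [-0.2263, 1.1653,  0.0457],
                           [ 0.0000, 0.0000,  0.9182]])

    def cone_space_adaptation(xyz_pixels, xyz_scene_illum, xyz_reference_illum):
        """Von Kries-style chromatic adaptation carried out in LMS cone space:
        each cone signal is scaled by the ratio of the reference and scene
        illuminant cone responses.  A generic construction, not the estimation
        method proposed in the thesis.

        xyz_pixels          : (N, 3) array of CIE XYZ pixel values.
        xyz_scene_illum     : (3,) XYZ of the estimated scene illuminant.
        xyz_reference_illum : (3,) XYZ of the illuminant to adapt towards.
        """
        lms = np.asarray(xyz_pixels, dtype=float) @ XYZ_TO_LMS.T
        lms_src = XYZ_TO_LMS @ np.asarray(xyz_scene_illum, dtype=float)
        lms_dst = XYZ_TO_LMS @ np.asarray(xyz_reference_illum, dtype=float)
        adapted = lms * (lms_dst / (lms_src + 1e-9))
        return adapted @ np.linalg.inv(XYZ_TO_LMS).T          # back to XYZ
    ```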