
    Computational mechanisms for colour and lightness constancy

    Attributes of colour images have been found which allow colour and lightness constancy to be computed without prior knowledge of the illumination, even in complex scenes with three-dimensional objects and multiple light sources of different colours. The ratio of surface reflectance colour can be immediately determined between any two image points, however distant. It is possible to determine the number of spectrally independent light sources, and to isolate the effect of each. Reflectance edges across which the illumination remains constant can be correctly identified. In a scene illuminated by multiple distant point sources of distinguishable colours, the spatial angle between the sources and their brightness ratios can be computed from the image alone. If there are three or more sources then reflectance constancy is immediately possible without use of additional knowledge. The results are an extension of Edwin Land's Retinex algorithm. They account for previously unexplained data such as Gilchrist's veiling luminances and his single-colour rooms. The validity of the algorithms has been demonstrated by implementing them in a series of computer programs. The computational methods do not follow the edge- or region-finding paradigms of previous vision mechanisms. Although the new reflectance constancy cues occur in all normal scenes, it is likely that human vision makes use of only some of them. In a colour image, all the pixels of a single surface colour lie in a single structure in flux space. The dimension of the structure equals the number of illumination colours. The reflectance ratio between two regions is determined by the transformation between their structures. Parallel tracing of edge pairs in their respective structures identifies an edge of constant illumination, and gives the lightness ratio of each such edge. Enhanced noise-reduction techniques for colour pictures follow from the natural constraints on the flux structures.
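    The claim that reflectance ratios can be read off without knowing the illuminant is easy to illustrate in the single-illuminant case. Below is a minimal sketch (not the thesis' flux-space algorithm), assuming linear RGB values and Lambertian surfaces: the per-channel flux ratio between two points cancels the shared illuminant and leaves the surface reflectance ratio.

        import numpy as np

        def reflectance_ratio(flux_a, flux_b):
            # Under a shared illuminant E, flux = reflectance * E per channel,
            # so the ratio cancels E and recovers the reflectance ratio.
            return np.asarray(flux_a, float) / np.asarray(flux_b, float)

        # Example: two surfaces lit by the same (unknown) illuminant.
        illuminant = np.array([0.9, 1.0, 0.7])   # never seen by the algorithm
        surface_a = np.array([0.6, 0.3, 0.2])    # true reflectances
        surface_b = np.array([0.3, 0.3, 0.4])
        print(reflectance_ratio(surface_a * illuminant, surface_b * illuminant))
        # -> [2.  1.  0.5], independent of the illuminant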

    Colour constancy beyond the classical receptive field

    The problem of removing illuminant variations to preserve the colours of objects (colour constancy) has already been solved by the human brain using mechanisms that rely largely on centre-surround computations of local contrast. In this paper we adapt some of these biological solutions, described by long-known physiological findings, into a simple, fully automatic, functional model (termed Adaptive Surround Modulation or ASM). In ASM, the size of a visual neuron's receptive field (RF) as well as its relationship with its surround varies according to the local contrast within the stimulus, which in turn determines the nature of the centre-surround normalisation of cortical neurons higher up in the processing chain. We modelled colour constancy by means of two overlapping asymmetric Gaussian kernels whose sizes are adapted based on the contrast of the surround pixels, resembling the change of RF size. We simulated the contrast-dependent surround modulation by weighting the contribution of each Gaussian according to the centre-surround contrast. In the end, we obtained an estimation of the illuminant from the set of the most activated RFs' outputs. Our results on three single-illuminant and one multi-illuminant benchmark datasets show that ASM is highly competitive against the state of the art and even outperforms learning-based algorithms in one case. Moreover, the robustness of our model is more tangible if we consider that our results were obtained using the same parameters for all datasets, that is, mimicking how the human visual system operates. These results suggest that dynamic adaptation mechanisms contribute to achieving higher accuracy in computational colour constancy.
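    The centre-surround mechanism can be sketched roughly as below. This is a simplified illustration with hypothetical parameter names, assuming an RGB image in an H x W x 3 array; the published ASM model additionally adapts the kernel sizes and their asymmetry to local contrast rather than fixing them.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def asm_like_illuminant(img, sigma_centre=1.0, sigma_surround=5.0,
                                top_frac=0.05):
            img = img.astype(float)
            blur = lambda s: np.stack(
                [gaussian_filter(img[..., c], s) for c in range(3)], axis=-1)
            centre, surround = blur(sigma_centre), blur(sigma_surround)
            # Contrast-dependent weighting of centre vs. surround responses.
            contrast = np.abs(centre - surround).sum(-1)
            w = (contrast / (contrast.max() + 1e-9))[..., None]
            response = w * centre + (1 - w) * surround
            # Pool the most activated receptive-field outputs.
            activation = response.sum(-1).ravel()
            k = max(1, int(top_frac * activation.size))
            est = response.reshape(-1, 3)[activation.argsort()[-k:]].mean(0)
            return est / np.linalg.norm(est)   # illuminant colour direction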

    The Hyper-log-chromaticity space for illuminant invariance

    Variation in illumination conditions through a scene is a common issue for classification, segmentation and recognition applications. Traffic monitoring and driver assistance systems have difficulty with changing illumination conditions at night and throughout the day, with multiple sources (especially at night), and in the presence of shadows. The majority of existing algorithms for color constancy or shadow detection rely on multiple frames for comparison or to build a background model. The proposed approach uses a novel color space inspired by the Log-Chromaticity space and modifies the bilateral filter to equalize illumination across objects using a single frame. Neighboring pixels of the same color, but of different brightness, are assumed to belong to the same object/material. The utility of the algorithm is studied over day and night simulated scenes of varying complexity. The objective is not to provide a product for visual inspection but rather an alternate image with fewer illumination-related issues for other algorithms to process. The usefulness of the filter is demonstrated by applying two simple classifiers and comparing the class statistics. The hyper-log-chromaticity image and the filtered image both improve the quality of the classification relative to the unprocessed image.
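    For orientation, the standard log-chromaticity construction that inspired the proposed space can be sketched as follows (this is the classical space, not the paper's hyper-log-chromaticity variant): dividing each channel by the geometric mean removes overall intensity, so pixels of one material at different brightness map to nearby coordinates.

        import numpy as np

        def log_chromaticity(img, eps=1e-6):
            # Divide each channel by the geometric mean, then take logs;
            # intensity cancels, leaving brightness-insensitive coordinates.
            img = img.astype(float) + eps
            gm = np.cbrt(img[..., 0] * img[..., 1] * img[..., 2])
            return np.log(img / gm[..., None])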

    Camera Sensor Invariant Auto White Balance Algorithm Weighting

    Color constancy is the ability to perceive the colors of objects invariant to the color of the light source. The aim of color constancy algorithms is first to estimate the illuminant of the light source, and then to correct the image so that it appears to have been taken under a canonical light source. The task of automatic white balance (AWB) is to do the same in digital cameras so that the images they produce look as natural as possible. The main challenge arises from the ill-posed nature of the problem: both the spectral distribution of the illuminant and the scene reflectance are unknown. The most common methods for addressing the AWB problem are based on low-level statistics, assuming that illuminant information can be extracted from the image's spatial information. Nevertheless, recent studies have often approached the problem with machine learning techniques, which have proved very useful. In this thesis, we investigate learning color constancy using artificial neural networks (ANNs). Two different artificial neural network approaches are used to generate a new AWB algorithm by weighting some of the existing AWB algorithms. The first approach proves better than the existing approaches in terms of median error. The second method, which is also preferable from a system design point of view, is superior to the others, including the first approach, in terms of mean and median error. Furthermore, we analyze camera sensor invariance by quantifying how much the performance of the ANNs degrades when the test sensor differs from the training sensor.
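    The weighting scheme can be sketched as follows. The two base estimators are standard; the fixed weights here are a placeholder standing in for the trained ANN outputs, and the function names are hypothetical.

        import numpy as np

        def gray_world(img):
            # Assumes the average scene reflectance is achromatic.
            return img.reshape(-1, 3).mean(0)

        def white_patch(img, pct=99):
            # Assumes the brightest pixels reflect the illuminant colour.
            return np.percentile(img.reshape(-1, 3), pct, axis=0)

        def weighted_awb(img, weights):
            # Combine normalised base estimates with (here fixed) weights;
            # in the thesis these weights come from a trained ANN.
            ests = np.stack([gray_world(img), white_patch(img)])
            ests /= np.linalg.norm(ests, axis=1, keepdims=True)
            est = (np.asarray(weights)[:, None] * ests).sum(0)
            return est / np.linalg.norm(est)

        # Usage: divide out the estimate (von Kries correction).
        # corrected = img / weighted_awb(img, [0.6, 0.4])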

    The Computation of Surface Lightness in Simple and Complex Scenes

    The present thesis examined how reflectance properties and the complexity of surface mesostructure (small-scale surface relief) influence perceived lightness in centre-surround displays. Chapters 2 and 3 evaluated the role of surface relief, gloss, and interreflections in lightness constancy, which was examined across changes in background albedo and illumination level. For surfaces with visible mesostructure (“rocky” surfaces), lightness constancy across changes in background albedo was better for targets embedded in glossy versus matte surfaces. However, this improved lightness constancy for gloss was not observed when illumination varied. Control experiments compared the matte and glossy rocky surrounds to two control displays, which matched either pixel histograms or a phase-scrambled power spectrum. Lightness constancy was improved for rocky glossy displays over the histogram-matched displays, but not compared to phase-scrambled variants of these images with equated power spectra. The results were similar for surfaces rendered with 1, 2, 3 and 4 interreflections. These results suggest that lightness perception in complex centre-surround displays can be explained by the distribution of contrast across space and scale, independently of explicit information about surface shading or specularity. The results for surfaces without surface relief (“homogeneous” surfaces) differed qualitatively from rocky surfaces, exhibiting abrupt steps in perceived lightness at points where the targets transitioned from increments to decrements. Chapter 4 examined whether homogeneous displays evoke more complex mid-level representations similar to conditions of transparency. Matching target lightness in a homogeneous display to that in a textured or rocky display required varying both the lightness and the transmittance of the test patch on the textured display to obtain the most satisfactory matches. However, transmittance was varied only to match the contrast of targets against homogeneous surrounds, not to explicitly match the amount of transparency perceived in the displays. The results suggest that perceived target-surround edge contrast differs between homogeneous and textured displays. Varying the mid-level property of transparency in textured displays provides a natural means for equating both target lightness and the unique appearance of the edge contrast in homogeneous displays.
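    The explanatory variable here, the distribution of contrast across space and scale, can be made concrete with a band-limited (Peli-style) contrast measure; a minimal sketch under assumed scales, not the analysis used in the thesis.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def multiscale_contrast(lum, sigmas=(1, 2, 4, 8, 16)):
            # Band-limited contrast: band-pass response divided by the
            # local mean luminance at each spatial scale.
            lum = np.asarray(lum, dtype=float)
            bands = []
            for s in sigmas:
                band = gaussian_filter(lum, s) - gaussian_filter(lum, 2 * s)
                local_mean = gaussian_filter(lum, 2 * s) + 1e-9
                bands.append(band / local_mean)
            return np.stack(bands)   # shape: (num_scales, H, W)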

    Physics-Based and Retina-Inspired Technique for Image Enhancement

    This paper develops a novel image/video enhancement technique that integrates a physics-based image formation model, the dichromatic model, with a retina-inspired computational model, the multiscale model of adaptation. In particular, physics-based features (e.g., the Spectral Power Distribution of the dominant illuminant in the scene and the Surface Spectral Reflectance of the objects contained in the image) are estimated and used as inputs to the multiscale model of adaptation. The results show that our technique can adapt itself to scene variations such as a change in illumination, scene structure, camera position and shadowing, and gives superior performance over the original model.
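    The physics-based component, the dichromatic model, states that a sensor response is the sum of a body term carrying the surface colour and an interface (highlight) term carrying the illuminant colour. A minimal sketch of the forward model, with spectra sampled at discrete wavelengths:

        import numpy as np

        def dichromatic_pixel(S, E, m_d, m_s):
            # S: surface spectral reflectance sampled at k wavelengths
            # E: illuminant spectral power distribution at the same samples
            # m_d, m_s: geometry-dependent diffuse/specular scale factors
            body = m_d * np.asarray(S) * np.asarray(E)   # material colour
            interface = m_s * np.asarray(E)              # highlight = illuminant
            return body + interface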

    The Constructive Nature of Color Vision and Its Neural Basis

    Our visual world is made up of colored surfaces. The color of a surface is physically determined by its reflectance, i.e., how much energy it reflects as a function of wavelength. Reflected light, however, provides only ambiguous information about the color of a surface, as it depends on the spectral properties of both the surface and the illumination. Despite the confounding effects of illumination on the reflected light, the visual system is remarkably good at inferring the reflectance of a surface, enabling observers to perceive surface colors as stable across illumination changes. This capacity of the visual system is called color constancy, and it highlights that color vision is a constructive process. The research presented here investigates the neural basis of some of the most relevant aspects of the constructive nature of human color vision using machine learning algorithms and functional neuroimaging. The experiments demonstrate that color-related prior knowledge influences neural signals as early as the first cortical stage of visual processing, area V1, whereas during object imagery, perceived color shared neural representations with the color of the imagined objects in human V4. A direct test of illumination-invariant surface color representation showed that neural coding in V1, as well as in a region anterior to human V4, was robust against illumination changes. In sum, the present research shows how different aspects of the constructive nature of color vision map to different regions in the ventral visual pathway.

    Retina-Inspired and Physically Based Image Enhancement

    Images and videos with good lightness and contrast are vital in several applications where human experts make important decisions based on the imaging information, such as medical, security, and remote sensing applications. Well-known image enhancement methods include spatial and frequency-domain techniques such as linear transformation, gamma correction, contrast stretching, histogram equalization and homomorphic filtering. These conventional techniques are easy to implement but do not recover the exact colour of the images; hence they have limited application areas. Conventional image/video enhancement methods have been widely used, each with its own advantages and drawbacks; since the last century there has been growing interest in retina-inspired techniques, e.g., Retinex and Cellular Neural Networks (CNNs), which attempt to mimic the human retina. Despite considerable advances in computer vision techniques, the human eye and visual cortex far surpass the performance of state-of-the-art algorithms. This research aims to propose a retinal-network computational model for image enhancement that mimics the retinal layers, targeting the interconnectivity between the bipolar receptive field and the ganglion receptive field. The research started by enhancing two state-of-the-art image enhancement methods through their integration with image formation models. In particular, physics-based features (e.g., the Spectral Power Distribution of the dominant illuminant in the scene and the Surface Spectral Reflectance of the objects contained in the image) are estimated and used as inputs to the enhanced methods. The results show that the proposed technique can adapt to scene variations such as a change in illumination, scene structure, camera position and shadowing, and it gives superior performance over the original model. The research has successfully proposed a novel Ganglion Receptive Field (GRF) computational model for image enhancement. Instead of considering only the interactions between each pixel and its surroundings within a single colour layer, the proposed framework introduces interactions between different colour layers to mimic the retinal neural process; to better mimic the centre-surround retinal receptive field concept, different photoreceptors' outputs are combined. Additionally, this thesis proposes a new contrast enhancement method based on Weber's Law. The objective evaluation shows the superiority of the proposed GRF method over state-of-the-art methods. The contrast-restored image generated by the GRF method achieved the highest performance in contrast enhancement and luminance restoration; however, it performed less well in structure preservation, which is consistent with physiological studies that observe the same behaviour in the human visual system.
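    The Weber's-law component can be sketched as below, with hypothetical parameters and assuming luminance values in [0, 1]: local Weber contrast C = (I - I_b) / I_b is amplified against a smoothed background estimate and recomposed. This illustrates the principle, not the thesis' exact method.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def weber_enhance(lum, sigma=15.0, gain=1.5):
            # Estimate the local background, amplify Weber contrast
            # around it, then recompose and clip to the valid range.
            lum = np.asarray(lum, dtype=float)
            background = gaussian_filter(lum, sigma) + 1e-9
            c = (lum - background) / background
            return np.clip(background * (1.0 + gain * c), 0.0, 1.0)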