30 research outputs found

    Investigations into colour constancy by bridging human and computer colour vision

    PhD Thesis. The mechanism of colour constancy within the human visual system has long been of great interest to researchers in the psychophysical and image-processing communities. With the maturation of colour imaging techniques for both scientific and artistic applications, the importance of colour capture accuracy has consistently increased. Colour offers the viewer far more information than greyscale imagery, supporting tasks ranging from object detection to food-ripeness and health estimation, among many others. These tasks, however, rely on the colour constancy process to discount scene illumination. Psychophysical studies have attempted to uncover the inner workings of this mechanism so that it can be reproduced algorithmically, which would allow the development of devices that eventually capture and perceive colour in the same manner as a human viewer. The two communities have approached this challenge from opposite ends and have consequently developed very different and largely unconnected approaches. This thesis investigates studies and algorithms that bridge the two communities. Findings from psychophysical studies are first used as inspiration to improve an existing image enhancement algorithm, and the results are compared to state-of-the-art methods. Further knowledge of the human visual system is then used to develop a novel colour constancy approach, which attempts to mimic the mechanism of colour constancy by using a physiological colour space and specific scene contents to estimate the illumination. The performance of the colour constancy mechanism within the visual system is also investigated, tested across different scenes and across commonly and uncommonly encountered illuminations. The importance of being able to bridge these two communities with a successful colour constancy method is further illustrated by a case study investigating the human visual perception of the agricultural produce of tomatoes.
    EPSRC DTA: Institute of Neuroscience, Newcastle University

    Evaluating color texture descriptors under large variations of controlled lighting conditions

    The recognition of color texture under varying lighting conditions is still an open issue. Several features have been proposed for this purpose, ranging from traditional statistical descriptors to features extracted with neural networks. Still, it is not completely clear under what circumstances one feature performs better than the others. In this paper we report an extensive comparison of old and new texture features, with and without a color normalization step, with a particular focus on how they are affected by small and large variations in the lighting conditions. The evaluation is performed on a new texture database including 68 samples of raw food acquired under 46 conditions that present single and combined variations of light color, direction and intensity. The database makes it possible to systematically investigate the robustness of texture descriptors across a large range of imaging conditions.
    Comment: Submitted to the Journal of the Optical Society of America
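    A common baseline for the color normalization step evaluated above is grey-world normalization, which rescales each channel so that its mean matches the global mean, discounting a uniform colour cast before features are extracted. A minimal sketch (the function name and the per-channel gain formulation are illustrative, not taken from the paper):

```python
def grey_world_normalize(pixels):
    """Grey-world colour normalization: scale each channel so its mean
    matches the overall mean, discounting a global colour cast.
    `pixels` is a list of (r, g, b) tuples with float values in [0, 1]."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    grey = sum(means) / 3.0
    # one multiplicative gain per channel
    gains = [grey / m if m > 0 else 1.0 for m in means]
    return [tuple(p[c] * gains[c] for c in range(3)) for p in pixels]
```

    After normalization the three channel means coincide, so descriptors computed downstream become less sensitive to the colour of the illuminant.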

    Sabanci-Okan system at ImageClef 2011: plant identification task

    We describe our participation in the plant identification task of ImageClef 2011. Our approach employs a variety of texture, shape and color descriptors. Owing to the morphometric properties of plants, mathematical morphology has been advocated as the main methodology for texture characterization, supported by a multitude of contour-based shape and color features. We submitted a single run, where the focus was almost exclusively on scan and scan-like images, due primarily to lack of time. Moreover, special care was taken to obtain a fully automatic system, operating only on image data. While our photo results are low, we consider our submission successful since, besides being our first attempt, our accuracy is the highest when considering the average of the scan and scan-like results, upon which we had concentrated our efforts.

    3D panoramic imaging for virtual environment construction

    The project is concerned with the development of algorithms for the creation of photo-realistic 3D virtual environments, overcoming problems in mosaicing, colour and lighting changes, correspondence search speed and correspondence errors due to lack of surface texture. A number of related new algorithms have been investigated for image stitching, content based colour correction and efficient 3D surface reconstruction. All of the investigations were undertaken using multiple views from normal digital cameras, web cameras and a "one-shot" panoramic system. In the process of 3D reconstruction a new interest points based mosaicing method, a new interest points based colour correction method, a new hybrid feature and area based correspondence constraint and a new structured light based 3D reconstruction method have been investigated. The major contributions and results can be summarised as follows:
    • A new interest point based image stitching method has been proposed and investigated. The robustness of interest points has been tested and evaluated. Interest points have proved robust to changes in lighting, viewpoint, rotation and scale.
    • A new interest point based method for colour correction has been proposed and investigated. The results of linear and linear plus affine colour transforms have proved more accurate than traditional diagonal transforms in accurately matching colours in panoramic images.
    • A new structured light based method for correspondence point based 3D reconstruction has been proposed and investigated. The method has been proved to increase the accuracy of the correspondence search for areas with low texture. Correspondence speed has also been increased with a new hybrid feature and area based correspondence search constraint.
    • Based on the investigation, a software framework has been developed for image based 3D virtual environment construction. The GUI includes abilities for importing images, colour correction, mosaicing, 3D surface reconstruction, texture recovery and visualisation.
    • 11 research papers have been published.

    Computing von Kries Illuminant Changes by Piecewise Inversion of Cumulative Color Histograms

    We present a linear algorithm for the computation of the illuminant change occurring between two color pictures of a scene. We model the light variations with the von Kries diagonal transform and we estimate it by minimizing a dissimilarity measure between the piecewise inversions of the cumulative color histograms of the considered images. We also propose a method for illuminant-invariant image recognition based on our von Kries transform estimate.
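    The von Kries diagonal transform models an illuminant change as one independent gain per colour channel. A simplified sketch of the estimation idea: align the empirical inverse cumulative histograms (sorted channel values) of the two images and fit one gain per channel by least squares. This is a stand-in for the paper's piecewise inversion and dissimilarity minimization; the function name and the fitting rule are illustrative assumptions:

```python
def estimate_von_kries(img1, img2):
    """Estimate the von Kries diagonal transform mapping img1 to img2.
    Each image is a list of (r, g, b) tuples of the same scene under
    two illuminants. Sorting a channel's values gives its empirical
    inverse cumulative histogram; matching quantiles between the two
    images then yields a least-squares gain per channel."""
    gains = []
    for c in range(3):
        x = sorted(p[c] for p in img1)  # inverse CDF, channel c, image 1
        y = sorted(p[c] for p in img2)  # inverse CDF, channel c, image 2
        num = sum(a * b for a, b in zip(x, y))
        den = sum(a * a for a in x)
        gains.append(num / den if den else 1.0)
    return gains
```

    A diagonal (per-channel) model keeps the estimation linear, which is what makes the algorithm in the abstract cheap to compute.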

    Non-parametric Methods for Automatic Exposure Control, Radiometric Calibration and Dynamic Range Compression

    Imaging systems are essential to a wide range of modern-day applications. With the continuous advancement of imaging systems, there is an ongoing need to adapt and improve the imaging pipeline running inside them. In this thesis, methods are presented to improve the imaging pipeline of digital cameras. We present three methods that improve important phases of the imaging process: (i) automatic exposure adjustment, (ii) radiometric calibration and (iii) high dynamic range compression. These contributions touch the initial, intermediate and final stages of the imaging pipeline of digital cameras. For exposure control, we propose two methods. The first makes use of CCD-based equations to formulate the exposure control problem. To estimate the exposure time, an initial image is acquired for each wavelength channel, to which contrast adjustment techniques are applied. This helps to recover a reference cumulative distribution function of image brightness at each channel. The second method for automatic exposure control is an iterative method applicable to a broad range of imaging systems. It uses spectral sensitivity functions, such as the photopic response function, to generate a spectral power image of the captured scene. A target image is then generated from the spectral power image by applying histogram equalization, and the exposure time is calculated iteratively by minimizing the squared difference between the target and the current spectral power image. We further analyze the method's stability and controllability using a state-space representation from control theory. The applicability of the proposed method for exposure time calculation is demonstrated on real-world scenes using cameras with varying architectures. Radiometric calibration is the estimation of the non-linear mapping from the input radiance map to the output brightness values.
    The radiometric mapping is represented by the camera response function, with which the radiance map of the scene is estimated. Our radiometric calibration method employs an L1 cost function, taking advantage of the Weiszfeld optimization scheme. The proposed calibration works with multiple input images of the scene at varying exposures; it can also calibrate from a single input image under a few constraints. The proposed method outperforms, quantitatively and qualitatively, various alternative methods found in the radiometric calibration literature. Finally, to realistically represent the estimated radiance maps on low dynamic range (LDR) display devices, we propose a method for dynamic range compression. Radiance maps generally have a higher dynamic range (HDR) than widely used display devices, so dynamic range compression is required before HDR images can be displayed. Our proposed method generates a few LDR images from the HDR radiance map by clipping its values at different exposures. Using the contrast information of each generated LDR image, the method applies an energy minimization approach to estimate a probability map for each LDR image. These probability maps are then used as a label set to form the final compressed dynamic range image for the display device. The results of our method were compared qualitatively and quantitatively with those produced by widely cited and professionally used methods.
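    The iterative exposure scheme described in the abstract can be sketched as a multiplicative correction that drives the mean of the captured spectral power image toward the target. The proportional brightness model and the damping exponent below are assumptions for illustration, not the thesis's exact update rule:

```python
def adjust_exposure(capture, target_mean, t0=1.0, iters=20, step=0.5):
    """Iteratively adjust exposure time so the mean of the captured
    spectral power image approaches `target_mean` (e.g. taken from a
    histogram-equalized target image). `capture(t)` returns the image
    (a flat list of floats) taken with exposure time t."""
    t = t0
    for _ in range(iters):
        img = capture(t)
        mean = sum(img) / len(img)
        if mean <= 0:
            t *= 2.0  # image fully dark: open up and retry
            continue
        # brightness is roughly proportional to exposure time, so
        # scale t toward the target; step < 1 damps the update
        t *= (target_mean / mean) ** step
    return t
```

    With a linear sensor response the error shrinks geometrically, which is the kind of behaviour the thesis's stability analysis in state-space form would make precise.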

    Person Re-identification Using Spatial Covariance Regions of Human Body Parts

    In many surveillance systems there is a requirement to determine whether a given person of interest has already been observed over a network of cameras. This is the person re-identification problem. The human appearance obtained in one camera is usually different from that obtained in another camera, so to re-identify people the human signature should handle differences in illumination, pose and camera parameters. We propose a new appearance model based on spatial covariance regions extracted from human body parts. A new spatial pyramid scheme is applied to capture the correlation between human body parts in order to obtain a discriminative human signature. The human body parts are automatically detected using Histograms of Oriented Gradients (HOG). The method is evaluated using benchmark video sequences from the i-LIDS Multiple-Camera Tracking Scenario data set, and re-identification performance is presented using the cumulative matching characteristic (CMC) curve. Finally, we show that the proposed approach outperforms state-of-the-art methods.
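    At its core, a spatial covariance region reduces to the sample covariance matrix of per-pixel feature vectors inside a region; the matrix is compact and partly invariant to illumination, which is why it suits re-identification. A minimal sketch (the feature choice and function name are illustrative, not the paper's exact pipeline):

```python
def region_covariance(features):
    """Covariance descriptor of an image region. `features` is a list
    of per-pixel feature vectors (e.g. [x, y, intensity, |Ix|, |Iy|]).
    Returns the d x d sample covariance matrix used as the region's
    appearance descriptor."""
    n, d = len(features), len(features[0])
    mu = [sum(f[i] for f in features) / n for i in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for f in features:
        for i in range(d):
            for j in range(d):
                cov[i][j] += (f[i] - mu[i]) * (f[j] - mu[j])
    # unbiased sample covariance
    return [[cov[i][j] / (n - 1) for j in range(d)] for i in range(d)]
```

    Comparing two regions then amounts to comparing their covariance matrices, typically with a metric on the manifold of symmetric positive-definite matrices rather than a plain Euclidean distance.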

    Tracking in image sequences through luminance/colour cooperation

    This article proposes a differential point-tracking technique that is robust to illumination changes, based on the cooperation of invariant colour attributes and a context-dependent photometric normalization. Indeed, most colour invariants prove noisy or uninformative at low saturation and/or low intensity, causing tracking to fail. Combining them with luminance information therefore yields more reliable tracking whatever the lighting conditions. Several experiments demonstrate the robustness and accuracy of this approach.

    Retina-Inspired and Physically Based Image Enhancement

    Images and videos with good lightness and contrast are vital in applications where human experts make important decisions based on the imaging information, such as medical, security and remote sensing applications. Well-known image enhancement methods include spatial- and frequency-domain techniques such as linear transformation, gamma correction, contrast stretching, histogram equalization and homomorphic filtering. These conventional techniques are easy to implement but do not recover the exact colours of the image, so they have limited application areas. Conventional image/video enhancement methods have been widely used, each with its own advantages and drawbacks; since the last century there has been increasing interest in retina-inspired techniques, e.g. Retinex and Cellular Neural Networks (CNNs), which attempt to mimic the human retina. Despite considerable advances in computer vision techniques, the human eye and visual cortex by far supersede the performance of state-of-the-art algorithms. This research aims to propose a retinal-network computational model for image enhancement that mimics the retinal layers, targeting the interconnectivity between the bipolar receptive field and the ganglion receptive field. The research started by enhancing two state-of-the-art image enhancement methods through their integration with image formation models. In particular, physics-based features (e.g. the Spectral Power Distribution of the dominant illuminant in the scene and the Surface Spectral Reflectance of the objects contained in the image) are estimated and used as inputs to the enhanced methods. The results show that the proposed technique can adapt to scene variations such as changes in illumination, scene structure, camera position and shadowing, and it gives superior performance over the original model. The research has successfully proposed a novel Ganglion Receptive Field (GRF) computational model for image enhancement.
    Instead of considering only the interactions between each pixel and its surroundings within a single colour layer, the proposed framework introduces interactions between different colour layers to mimic the retinal neural process; to better mimic the centre-surround retinal receptive field concept, the outputs of different photoreceptors are combined. Additionally, this thesis proposes a new contrast enhancement method based on Weber's Law. The objective evaluation shows the superiority of the proposed Ganglion Receptive Field (GRF) method over state-of-the-art methods. The contrast-restored image generated by the GRF method achieved the highest performance in contrast enhancement and luminance restoration; however, it achieved lower performance in structure preservation, which agrees with physiological studies that observe the same behaviour in the human visual system.
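    Weber's Law states that perceived contrast depends on the luminance difference relative to the background luminance, (L - Lb)/Lb, rather than on the absolute difference. A minimal sketch of a Weber-guided contrast boost, where the global-mean background estimate and the gain parameter are illustrative assumptions, not the thesis's exact formulation:

```python
def weber_contrast_enhance(lum, gain=1.5):
    """Contrast enhancement guided by Weber's Law: each pixel's Weber
    fraction (L - Lb) / Lb against the background luminance Lb is
    amplified by `gain`. `lum` is a flat list of luminance values in
    [0, 1]; here Lb is approximated by the global mean."""
    n = len(lum)
    lb = sum(lum) / n  # background luminance estimate
    out = []
    for L in lum:
        weber = (L - lb) / lb if lb > 0 else 0.0  # Weber fraction
        # re-synthesize with an amplified Weber fraction, clamped to [0, 1]
        out.append(max(0.0, min(1.0, lb * (1.0 + gain * weber))))
    return out
```

    Because deviations are scaled relative to the background level, dark regions receive a proportionally gentler boost than a uniform linear stretch would give, which is closer to how the visual system weights contrast.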