
    Investigations into colour constancy by bridging human and computer colour vision

    PhD Thesis. The mechanism of colour constancy within the human visual system has long been of great interest to researchers in the psychophysical and image-processing communities. With the maturation of colour imaging techniques for both scientific and artistic applications, the importance of accurate colour capture has steadily increased. Colour offers the viewer far more information than grayscale imagery, supporting tasks ranging from object detection to the estimation of food ripeness and health, among many others. These tasks, however, rely upon the colour constancy process to discount scene illumination. Psychophysical studies have attempted to uncover the inner workings of this mechanism so that it can be reproduced algorithmically, which would allow the development of devices that capture and perceive colour in the same manner as a human viewer. The two communities have approached this challenge from opposite ends, and as such have developed very different and largely unconnected approaches. This thesis investigates studies and algorithms that bridge the two communities. Findings from psychophysical studies are first used as inspiration to improve an existing image enhancement algorithm, and the results are compared to state-of-the-art methods. Further knowledge of, and inspiration from, the human visual system is then used to develop a novel colour constancy approach, which attempts to replicate the mechanism of colour constancy by using a physiological colour space and specific scene contents to estimate illumination. The performance of the colour constancy mechanism within the visual system is also investigated, tested across different scenes and both commonly and uncommonly encountered illuminations.
    The importance of being able to bridge these two communities with a successful colour constancy method is then further illustrated with a case study investigating human visual perception of an agricultural product, the tomato.
    EPSRC DTA; Institute of Neuroscience, Newcastle University
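The colour constancy process the thesis builds on is commonly modelled computationally as a von Kries diagonal correction driven by an illuminant estimate. Below is a minimal sketch using the classic grey-world assumption in place of the thesis's physiological colour space (which the abstract does not specify):

```python
import numpy as np

def grey_world_von_kries(image):
    """Estimate the scene illuminant with the grey-world assumption and
    discount it with a von Kries-style diagonal transform.

    `image` is an (H, W, 3) float array with values in [0, 1].
    """
    # Grey-world: the average scene reflectance is assumed achromatic,
    # so the per-channel mean serves as the illuminant estimate.
    illuminant = image.reshape(-1, 3).mean(axis=0)
    # Diagonal (von Kries) correction: scale each channel independently
    # so the image appears as if lit by a neutral illuminant.
    gains = illuminant.mean() / np.clip(illuminant, 1e-6, None)
    return np.clip(image * gains, 0.0, 1.0)
```

After correction, the per-channel means of a globally tinted scene become equal, which is exactly the sense in which the illuminant has been "discounted".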

    Evaluating color texture descriptors under large variations of controlled lighting conditions

    The recognition of color texture under varying lighting conditions is still an open issue. Several features have been proposed for this purpose, ranging from traditional statistical descriptors to features extracted with neural networks. Still, it is not completely clear under what circumstances one feature performs better than the others. In this paper we report an extensive comparison of old and new texture features, with and without a color normalization step, with a particular focus on how they are affected by small and large variations in the lighting conditions. The evaluation is performed on a new texture database including 68 samples of raw food acquired under 46 conditions that present single and combined variations of light color, direction and intensity. The database allows the robustness of texture descriptors to be systematically investigated across a large range of imaging conditions.
    Comment: Submitted to the Journal of the Optical Society of America
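A color normalization step of the kind compared in the paper can be as simple as dividing out per-channel means, which cancels any global diagonal illuminant change before texture features are computed. The paper's exact normalization is not specified in the abstract; this is a common grey-world style variant:

```python
import numpy as np

def normalize_colour(image, eps=1e-6):
    """Per-channel normalization applied before texture descriptors.

    Dividing each channel by its mean makes the result invariant to any
    per-channel (diagonal) scaling of the input, i.e. to a global
    illuminant colour change.
    """
    means = image.reshape(-1, 3).mean(axis=0)
    return image / np.clip(means, eps, None)
```

The invariance is easy to verify: tinting an image with arbitrary per-channel gains leaves the normalized output unchanged, so a descriptor computed afterwards sees the same input under both lights.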

    Colour Constancy using K-means Clustering Algorithm

    Colour cast is the presence of unwanted ambient colour in digital images caused by the source illuminant, while colour constancy is the ability to perceive the colours of objects invariant to the colour of the source illuminant. Existing statistics-based colour constancy methods use the pixel values of the whole image for illuminant estimation. However, not every region of an image contains reliable colour information, and the presence of large uniform colour patches considerably deteriorates the performance of colour constancy algorithms. This paper presents an algorithm to alleviate the biasing effect of uniform colour patches on colour constancy compensation techniques. It employs the k-means clustering algorithm to segment image areas according to their colour information. The Average Absolute Difference (AAD) of each colour component of a segment is calculated and used to identify and exclude segments with uniform colour information from the colour constancy adjustment. Experimental results were generated using three benchmark datasets and compared with state-of-the-art techniques. The results show that the proposed technique outperforms existing techniques in the presence of uniform colour patches and performs similarly to the Grey World method in their absence.
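The pipeline described in the abstract can be sketched as follows. The cluster count and AAD threshold are assumed tuning parameters, not values from the paper, and a minimal NumPy k-means stands in for a library implementation:

```python
import numpy as np

def kmeans(pixels, k=4, iters=20, seed=0):
    """Minimal k-means over an (N, 3) array of RGB pixels."""
    rng = np.random.default_rng(seed)
    centres = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((pixels[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            members = pixels[labels == c]
            if len(members):
                centres[c] = members.mean(axis=0)
    return labels

def estimate_illuminant(image, k=4, aad_threshold=0.02):
    """Segment with k-means, drop near-uniform segments by their AAD,
    then run a grey-world style estimate on the remaining pixels."""
    pixels = image.reshape(-1, 3).astype(float)
    labels = kmeans(pixels, k)
    keep = np.zeros(len(pixels), dtype=bool)
    for c in range(k):
        seg = pixels[labels == c]
        if len(seg) == 0:
            continue
        # AAD per colour component: mean |value - segment mean|.
        aad = np.abs(seg - seg.mean(axis=0)).mean(axis=0)
        if aad.mean() > aad_threshold:   # segment carries colour variation
            keep |= (labels == c)
    if not keep.any():                   # fall back to plain grey-world
        keep[:] = True
    return pixels[keep].mean(axis=0)
```

Excluding low-AAD segments is what prevents a large uniform patch (a wall, the sky) from dominating the grey-world average.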

    Deep Convolutional Attention based Bidirectional Recurrent Neural Network for Measuring Correlated Colour Temperature from RGB images

    Information on the correlated colour temperature (CCT), which affects an image through the surrounding illumination, is critical, particularly for natural lighting and image capture. Several methods have been introduced to detect colour temperature precisely; however, the majority of them are difficult to use or may generate internal noise. To address these issues, this research develops a hybrid deep model that accurately measures temperature from RGB images while reducing noise. The proposed study includes image collection, pre-processing, feature extraction and CCT evaluation. The input RGB images are initially represented in the CIE 1931 colour space. The raw input samples are then pre-processed to improve picture quality through image cropping and scaling, denoising by hybrid median-Wiener filtering, and contrast enhancement via Rectified Gamma-based Quadrant Dynamic Clipped Histogram Equalisation (RG_QuaDy_CHE). Colour and texture features are then extracted to obtain the relevant CCT-based information: the Local Intensity Grouping Order Pattern (LIGOP) operator extracts the texture properties, while the colour properties are described by the RGB colour space's mean, standard deviation, skewness, energy, smoothness and variance. Finally, using the collected features, the CCT values of the submitted images are estimated with a novel Deep Convolutional Attention-based Bidirectional Recurrent Neural Network (DCA_BRNNet) model. The Coati Optimisation Algorithm (COA) is used to improve the performance of the classifier by tuning its parameters. In the results, the suggested model is compared to various current techniques, obtaining an MAE of 529 K and an RMSE of 587 K.
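The first-order colour statistics listed in the abstract (mean, standard deviation, skewness, energy, smoothness, variance) can be computed per channel as below. The exact definitions of energy and smoothness used in the paper are not given in the abstract; the forms here are common textbook assumptions:

```python
import numpy as np

def colour_statistics(image):
    """Six first-order statistics per RGB channel, 18 features total.

    `image` is an (H, W, 3) float array with values in [0, 1].
    """
    feats = []
    for c in range(3):
        ch = image[..., c].astype(float).ravel()
        mean, std, var = ch.mean(), ch.std(), ch.var()
        # Third standardized moment; the small constant guards a flat channel.
        skew = ((ch - mean) ** 3).mean() / (std ** 3 + 1e-12)
        hist, _ = np.histogram(ch, bins=256, range=(0, 1))
        p = hist / hist.sum()
        energy = (p ** 2).sum()            # uniformity of the histogram
        smoothness = 1 - 1 / (1 + var)     # relative smoothness measure
        feats.extend([mean, std, skew, energy, smoothness, var])
    return np.array(feats)
```

Such a fixed-length vector is the kind of hand-crafted colour input a downstream regressor (here, the DCA_BRNNet model) would consume alongside the texture features.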

    Algorithms for the enhancement of dynamic range and colour constancy of digital images & video

    One of the main objectives in digital imaging is to mimic the capabilities of the human eye, and perhaps go beyond them in certain aspects. However, the human visual system is so versatile, complex and only partially understood that no current imaging technology has been able to accurately reproduce its capabilities. The extraordinary capabilities of the human eye have become a crucial benchmark for digital imaging, since digital photography, video recording and computer vision applications continue to demand more realistic and accurate image reproduction and analytic capabilities. Over the decades, researchers have tried to solve the colour constancy problem and to extend the dynamic range of digital imaging devices by proposing a number of algorithms and instrumentation approaches. Nevertheless, no unique solution has been identified; this is partially due to the wide range of computer vision applications that require colour constancy and high dynamic range imaging, and to the complexity the human visual system exhibits in achieving effective colour constancy and dynamic range capabilities. The aim of the research presented in this thesis is to enhance overall image quality within the image signal processor of a digital camera by achieving colour constancy and extending dynamic range capabilities. This is achieved by developing a set of advanced image-processing algorithms that are robust to a number of practical challenges and feasible to implement within an image signal processor used in consumer electronics imaging devices. The experiments conducted in this research show that the proposed algorithms surpass state-of-the-art methods in the fields of dynamic range and colour constancy.
    Moreover, this unique set of image-processing algorithms shows that, when used within an image signal processor, they enable digital camera devices to mimic the human visual system's dynamic range and colour constancy capabilities: the ultimate goal of any state-of-the-art technique or commercial imaging device.

    Computing von Kries Illuminant Changes by Piecewise Inversion of Cumulative Color Histograms

    We present a linear algorithm for the computation of the illuminant change occurring between two colour pictures of a scene. We model the light variation with the von Kries diagonal transform and estimate it by minimizing a dissimilarity measure between the piecewise inversions of the cumulative colour histograms of the considered images. We also propose a method for illuminant-invariant image recognition based on our von Kries transform estimate.
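A simplified sketch of the idea: per-channel quantiles play the role of the inverted cumulative histograms, and the von Kries gain for each channel is fitted to the paired quantiles by least squares. The paper's actual piecewise inversion and dissimilarity measure are not specified in the abstract, so this is an illustrative stand-in:

```python
import numpy as np

def estimate_von_kries(image_a, image_b, n_quantiles=64):
    """Estimate the diagonal (von Kries) gains mapping image_a to image_b.

    For each channel, corresponding quantiles of the two images (inverse
    CDF samples) are paired, and the scalar gain k minimizing
    ||k * xa - xb||^2 is obtained in closed form.
    """
    qs = np.linspace(0.05, 0.95, n_quantiles)
    gains = np.empty(3)
    for c in range(3):
        xa = np.quantile(image_a[..., c].ravel(), qs)
        xb = np.quantile(image_b[..., c].ravel(), qs)
        gains[c] = (xa @ xb) / (xa @ xa + 1e-12)
    return gains
```

Because quantiles commute with positive per-channel scaling, a pure von Kries change between the two pictures is recovered exactly, independently of pixel correspondence.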

    3D panoramic imaging for virtual environment construction

    The project is concerned with the development of algorithms for the creation of photo-realistic 3D virtual environments, overcoming problems in mosaicing, colour and lighting changes, correspondence search speed, and correspondence errors due to lack of surface texture. A number of related new algorithms have been investigated for image stitching, content-based colour correction and efficient 3D surface reconstruction. All of the investigations were undertaken using multiple views from normal digital cameras, web cameras and a "one-shot" panoramic system. In the process of 3D reconstruction, a new interest-point based mosaicing method, a new interest-point based colour correction method, a new hybrid feature- and area-based correspondence constraint and a new structured-light based 3D reconstruction method have been investigated. The major contributions and results can be summarised as follows:
    • A new interest-point based image stitching method has been proposed and investigated. The robustness of interest points has been tested and evaluated, and interest points have proved robust to changes in lighting, viewpoint, rotation and scale.
    • A new interest-point based method for colour correction has been proposed and investigated. Linear and linear-plus-affine colour transforms proved more accurate than traditional diagonal transforms in matching colours across panoramic images.
    • A new structured-light based method for correspondence-point based 3D reconstruction has been proposed and investigated. The method has been proved to increase the accuracy of the correspondence search for areas with low texture. Correspondence speed has also been increased with a new hybrid feature- and area-based correspondence search constraint.
    • Based on the investigation, a software framework has been developed for image-based 3D virtual environment construction. The GUI includes facilities for importing images, colour correction, mosaicing, 3D surface reconstruction, texture recovery and visualisation.
    • 11 research papers have been published.
    EThOS - Electronic Theses Online Service; GB; United Kingdom
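The linear-plus-affine colour correction evaluated in the thesis can be sketched as a least-squares fit over the colours of matched interest points. The function names and design-matrix formulation below are illustrative, not taken from the thesis:

```python
import numpy as np

def fit_affine_colour_transform(src, dst):
    """Fit dst ~ src @ M + t by least squares.

    `src` and `dst` are (N, 3) arrays of RGB values at matched interest
    points in two overlapping images. Returns the 3x3 matrix M and the
    offset t; a diagonal M with t = 0 is the classic diagonal transform.
    """
    ones = np.ones((len(src), 1))
    A = np.hstack([src, ones])                    # (N, 4) design matrix
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)   # (4, 3) stacked [M; t]
    return X[:3], X[3]

def apply_affine_colour_transform(image, M, t):
    """Apply the fitted transform to every pixel of an (H, W, 3) image."""
    return (image.reshape(-1, 3) @ M + t).reshape(image.shape)
```

The extra off-diagonal and offset terms are what give the linear and linear-plus-affine models their advantage over diagonal-only correction when the two views differ in more than a simple global tint.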

    Sabanci-Okan system at ImageClef 2011: plant identification task

    We describe our participation in the plant identification task of ImageClef 2011. Our approach employs a variety of texture and shape as well as color descriptors. Due to the morphometric properties of plants, mathematical morphology has been advocated as the main methodology for texture characterization, supported by a multitude of contour-based shape and color features. We submitted a single run, in which the focus was almost exclusively on scan and scan-like images, due primarily to lack of time. Moreover, special care was taken to obtain a fully automatic system operating only on image data. While our photo results are low, we consider our submission successful since, besides being our first attempt, our accuracy is the highest when considering the average of the scan and scan-like results, upon which we had concentrated our efforts.

    Non-parametric Methods for Automatic Exposure Control, Radiometric Calibration and Dynamic Range Compression

    Imaging systems are essential to a wide range of modern-day applications. With the continuous advancement of imaging systems, there is an ongoing need to adapt and improve the imaging pipeline running inside them. In this thesis, methods are presented to improve the imaging pipeline of digital cameras. We present three methods that improve important phases of the imaging process: (i) automatic exposure adjustment, (ii) radiometric calibration and (iii) high dynamic range compression. These contributions touch the initial, intermediate and final stages of the imaging pipeline of digital cameras. For exposure control, we propose two methods. The first makes use of CCD-based equations to formulate the exposure control problem. To estimate the exposure time, an initial image is acquired for each wavelength channel, to which contrast adjustment techniques are applied; this helps to recover a reference cumulative distribution function of image brightness at each channel. The second method proposed for automatic exposure control is an iterative method applicable to a broad range of imaging systems. It uses spectral sensitivity functions, such as the photopic response function, to generate a spectral power image of the captured scene. A target image is then generated from the spectral power image by applying histogram equalization, and the exposure time is calculated iteratively by minimizing the squared difference between the target and the current spectral power image. We further analyse the method by performing stability and controllability analysis using a state-space representation from control theory. The applicability of the proposed method for exposure time calculation was shown on real-world scenes using cameras with varying architectures. Radiometric calibration is the estimation of the non-linear mapping from the input radiance map to the output brightness values.
    The radiometric mapping is represented by the camera response function, with which the radiance map of the scene is estimated. Our radiometric calibration method employs an L1 cost function, taking advantage of the Weiszfeld optimization scheme. The proposed calibration works with multiple input images of the scene at varying exposures; it can also perform calibration from a single input under a few constraints. The proposed method outperforms, quantitatively and qualitatively, various alternative methods found in the radiometric calibration literature. Finally, to realistically represent the estimated radiance maps on low dynamic range (LDR) display devices, we propose a method for dynamic range compression. Radiance maps generally have a higher dynamic range (HDR) than widely used display devices, so dynamic range compression is required before HDR images can be displayed. Our proposed method generates several LDR images from the HDR radiance map by clipping its values at different exposures. Using the contrast information of each generated LDR image, the method applies an energy minimization approach to estimate a probability map for each LDR image. These probability maps are then used as a label set to form the final compressed dynamic range image for the display device. The results of our method were compared qualitatively and quantitatively with those produced by widely cited and professionally used methods.
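The Weiszfeld scheme named in the abstract is the classic iteration for minimizing a sum of Euclidean distances (an L1-type cost). Below is a generic sketch on a plain point set rather than on camera response-curve parameters, which the abstract does not detail:

```python
import numpy as np

def weiszfeld(points, iters=100, eps=1e-9):
    """Weiszfeld's iteration for the geometric median of `points` (N, D):
    the point minimizing the sum of Euclidean distances to the data.

    Each step is a weighted mean with inverse-distance weights, which is
    what makes the estimate robust to outliers compared with the mean.
    """
    x = points.mean(axis=0)                      # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - x, axis=1)
        w = 1.0 / np.clip(d, eps, None)          # inverse-distance weights
        x_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < eps:      # converged
            break
        x = x_new
    return x
```

The robustness is visible directly: a single gross outlier barely moves the geometric median, whereas it drags the centroid far from the data cluster, which is why an L1 cost suits noisy calibration samples.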

    Object recognition through the analysis of colour components adapted to the illumination change between two images

    In the field of image indexing, colour object recognition methods tend to fail when the lighting conditions at acquisition differ from one image to the other. In this article, we propose a new approach to object retrieval in colour image databases that is insensitive to illumination variations. For this, we assume that an illuminant change only very slightly disturbs the rank order of the colour component levels of the pixels within an image. To compare two images, we transform the colour components in a manner specific to each pair formed by a model image and a query image: the colour components of the pixels of each considered image pair are transformed through a specific analysis of the pixels' rank measures. Tests performed on a public image database show the improvement our method achieves in terms of object recognition.
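The rank-order assumption at the heart of the method suggests a monotone, rank-based transfer between the component levels of the two images. A sketch for a single colour component follows; the paper's actual pairwise transform is more specific than this generic monotone matching:

```python
import numpy as np

def rank_match_channel(query, model):
    """Rank-based transfer for one colour component.

    Each query value is replaced by the model value of equal rank,
    i.e. a monotone histogram-matching step. If an illuminant change
    preserves rank order, this cancels it for comparison purposes.
    """
    q = query.ravel()
    m = np.sort(model.ravel())
    ranks = np.argsort(np.argsort(q))            # rank of each query value
    # Map query ranks onto indices into the sorted model values.
    idx = np.round(ranks * (len(m) - 1) / max(len(q) - 1, 1)).astype(int)
    return m[idx].reshape(query.shape)
```

Under the paper's assumption, applying this to a query image taken under a different illuminant recovers component levels directly comparable with the model image.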