
    Investigations into colour constancy by bridging human and computer colour vision

    PhD Thesis. The mechanism of colour constancy within the human visual system has long been of great interest to researchers within the psychophysical and image processing communities. With the maturation of colour imaging techniques for both scientific and artistic applications, the importance of colour capture accuracy has consistently increased. Colour offers a great deal more information for the viewer than grayscale imagery, supporting tasks ranging from object detection to food ripeness and health estimation, amongst many others. However, these tasks rely upon the colour constancy process to discount scene illumination. Psychophysical studies have attempted to uncover the inner workings of this mechanism, which would allow it to be reproduced algorithmically and would enable the development of devices that can eventually capture and perceive colour in the same manner as a human viewer. The two communities have approached this challenge from opposite ends, and as such have developed very different and largely unconnected approaches. This thesis investigates the development of studies and algorithms which bridge the two communities. Findings from psychophysical studies are first used as inspiration to improve an existing image enhancement algorithm, and the results are compared to state-of-the-art methods. Further knowledge of the human visual system is then used to develop a novel colour constancy approach, which attempts to mimic the mechanism of colour constancy by using a physiological colour space and specific scene contents to estimate the illumination. The performance of the colour constancy mechanism within the visual system is then also investigated, tested across different scenes and under commonly and uncommonly encountered illuminations. The importance of being able to bridge these two communities with a successful colour constancy method is further illustrated with a case study investigating human visual perception of the agricultural produce of tomatoes. EPSRC DTA: Institute of Neuroscience, Newcastle University.

    Multispectral photography for earth resources

    A guide for producing accurate multispectral results for earth resource applications is presented, along with theoretical and analytical concepts of color and multispectral photography. Topics discussed include: capabilities and limitations of color and color infrared films; image color measurements; methods of relating ground phenomena to film density and color measurement; sensitometry; considerations in the selection of multispectral cameras and components; and mission planning.

    A Light Source Calibration Technique for Multi-camera Inspection Devices

    Industrial manufacturing processes often involve a visual control system to detect possible product defects during production. Such inspection devices usually include one or more cameras and several light sources designed to highlight surface imperfections (e.g. bumps, scratches, holes) under different illumination conditions. In such scenarios, a preliminary calibration procedure for each component is a mandatory step to recover the system’s geometrical configuration and thus ensure good process accuracy. In this paper we propose a procedure to estimate the position of each light source with respect to a camera network using an inexpensive Lambertian spherical target. For each light source, the target is acquired at different positions from different cameras, and an initial guess of the corresponding light vector is recovered from an analysis of the collected intensity isocurves. Then, an energy minimization process based on the Lambertian shading model refines the result for a precise 3D localization. We tested our approach in an industrial setup, performing extensive experiments on synthetic and real-world data to demonstrate the accuracy of the proposed approach.
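
    A minimal sketch of the refinement step described in this abstract, assuming the sphere's surface points, normals and observed intensities are already available from a prior geometric calibration. The point-light-with-inverse-square-falloff model, the function names and the use of scipy's least_squares are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import least_squares

def lambertian_residuals(params, points, normals, intensities):
    """Residuals between observed intensities and a point-light Lambertian model."""
    light_pos, albedo = params[:3], params[3]
    to_light = light_pos - points                      # vectors from surface points to the light
    dist = np.linalg.norm(to_light, axis=1, keepdims=True)
    light_dirs = to_light / dist                       # unit light directions
    shading = np.clip(np.sum(normals * light_dirs, axis=1), 0.0, None)
    predicted = albedo * shading / dist[:, 0] ** 2     # Lambertian shading with 1/r^2 falloff
    return predicted - intensities

def refine_light_position(points, normals, intensities, init_light, init_albedo=1.0):
    """Refine an initial light-position guess (e.g. obtained from isocurve analysis)."""
    x0 = np.concatenate([np.asarray(init_light, dtype=float), [init_albedo]])
    result = least_squares(lambertian_residuals, x0,
                           args=(points, normals, intensities))
    return result.x[:3], result.x[3]                   # refined 3D position and albedo
```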

    Estimating varying illuminant colours in images

    Colour constancy is the ability to perceive colours independently of varying illumination colour. A human could tell that a white t-shirt was indeed white, even in the presence of blue or red illumination, although these illuminant colours would actually make the reflected colour of the t-shirt bluish or reddish. Humans can, to a good extent, see colours constantly. Getting a computer to achieve the same goal with a high level of accuracy has proven problematic, particularly if we want to use colour as a main cue in object recognition. If we trained a system on object colours under one illuminant and then tried to recognise the objects under another illuminant, the system would likely fail. Early colour constancy algorithms assumed that an image contains a single uniform illuminant. They would then attempt to estimate the colour of the illuminant and apply a single correction to the entire image. It is not hard to imagine a scenario where a scene is lit by more than one illuminant. In an outdoor scene on a typical summer's day, we would see objects brightly lit by sunlight and others that are in shadow. The ambient light in shadow is known to be a different colour to that of direct sunlight (bluish and yellowish respectively), which means that there are at least two illuminant colours to be recovered in this scene. This thesis focuses on the harder case of recovering the illuminant colours when more than one is present in a scene. Early work on this subject made the empirical observation that illuminant colours are actually very predictable compared to surface colours: real-world illuminants tend not to be greens or purples, but rather blues, yellows and reds. We can think of an illuminant mapping as the function which takes a scene from some unknown illuminant to a known illuminant, and we model this mapping as a simple multiplication of the red, green and blue channels of a pixel. It turns out that the set of realistic mappings approximately lies on a line segment in chromaticity space. We propose an algorithm that uses this knowledge and requires only two pixels of the same surface under two illuminants as input; we can then recover an estimate of the surface reflectance colour and, subsequently, the two illuminants. Additionally, we propose a more robust algorithm that can use varying surface reflectance data in a scene. One of the most successful colour constancy algorithms, known as Gamut Mapping, was developed by Forsyth (1990). He argued that the illuminant colour of a scene naturally constrains the surface colours that are possible to perceive; we could not perceive a very chromatic red under a deep blue illuminant. We introduce our multiple-illuminant constraint in a Gamut Mapping context and are able to further improve its performance. The final piece of work proposes a method for detecting shadow edges, so that we can automatically recover estimates of the illuminant colours in and out of shadow. We also formulate our illuminant estimation algorithm in a voting scheme that probabilistically chooses an illuminant estimate on both sides of the shadow edge. We test the performance of all our algorithms experimentally on well-known datasets, as well as on our newly proposed shadow datasets.
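
    A minimal sketch of the two-pixel idea described in this abstract: under the diagonal (per-channel multiplication) model, the ratio of the same surface seen under two illuminants equals the ratio of the illuminants, so candidate illuminant pairs constrained to a line segment of plausible illuminant colours can be scored against that ratio. The segment endpoints, grid resolution and function names below are illustrative assumptions, not the thesis's actual values or method.

```python
import numpy as np

def candidate_illuminants(n=50,
                          bluish=np.array([0.8, 1.0, 1.2]),
                          yellowish=np.array([1.2, 1.0, 0.8])):
    """RGB illuminants sampled along a line segment from a bluish to a yellowish extreme."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) * bluish + t * yellowish

def estimate_illuminants(p1, p2):
    """Given one surface observed under two illuminants (RGB pixels p1, p2), pick the best pair."""
    observed_map = p2 / p1                     # diagonal mapping; the surface colour cancels out
    cands = candidate_illuminants()
    best_pair, best_err = None, np.inf
    for e1 in cands:
        for e2 in cands:
            pred = e2 / e1                     # mapping implied by this candidate pair
            cos = pred @ observed_map / (np.linalg.norm(pred) * np.linalg.norm(observed_map))
            err = np.arccos(np.clip(cos, -1.0, 1.0))   # angular error between mappings
            if err < best_err:
                best_pair, best_err = (e1, e2), err
    e1, e2 = best_pair
    surface = p1 / e1                          # estimated surface reflectance, up to scale
    return e1, e2, surface
```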

    Colour Constancy: Cues, Priors and Development

    Colour is crucial for detecting, recognising, and interacting with objects. However, the reflected wavelength of light ("colour") varies vastly depending on the illumination. Whilst adults can judge colours as relatively invariant under changing illuminations (colour constancy), much remains unknown, which this thesis aims to resolve. Firstly, previous studies have shown that adults can use certain cues to estimate surface colour; however, one proposed cue - specular highlights - has been little researched, so it is explored here. Secondly, the existing data on a daylight prior for colour constancy remain inconclusive, so we aimed to investigate this further. Finally, no studies have investigated the development of colour constancy during childhood, so the third aim is to determine at what age colour constancy becomes adult-like. In the introduction, existing research is discussed, including cues to the illuminant, daylight priors, and the development of perceptual constancies. The second chapter contains three experiments conducted to determine whether adults can use a specular highlight cue and/or a daylight prior to aid colour constancy. Results showed that adults can use specular highlights when other cues are weakened, while evidence for a daylight prior was weak. In the third chapter, the development of colour constancy during childhood was investigated using a novel child-friendly task. Children had higher constancy than adults, and evidence for a daylight prior was mixed. The final experimental chapter used the task developed in Chapter 3 to ask whether children can use specular highlights as a cue for colour constancy. Testing was halted early due to the coronavirus pandemic, yet the data obtained suggest that children are negatively impacted by specular highlights. Finally, in the general discussion, the results of the six experiments are brought together to draw conclusions regarding the use of cues and priors and the development of colour constancy. Implications and future directions for research are discussed.

    Extended Intensity Range Imaging

    A single composite image with an extended intensity range is generated by combining disjoint regions from different images of the same scene. The set of images is obtained with a charge-coupled device (CCD) set for different flux integration times. By limiting differences in the integration times so that the ranges of output pixel values overlap considerably, each pixel is assigned the value measured at that spatial location in the most sensitive range, i.e. the value that is both below saturation and most precisely specified. Integration times are lengthened geometrically from a minimum at which all pixel values are below saturation until all dark regions emerge from the lowest quantization level. The method is applied to an example scene, and the effect of the composite images on traditional low-level imaging methods is also examined.
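
    A minimal sketch of the per-pixel selection rule described in this abstract: for each pixel, take the longest-exposure measurement that is still below saturation, then divide by the integration time so all pixels share a common radiometric scale. The 8-bit saturation threshold and function name are illustrative assumptions.

```python
import numpy as np

def combine_exposures(images, integration_times, saturation=250):
    """images: list of 2-D arrays ordered from shortest to longest integration time."""
    stack = np.stack(images).astype(np.float64)            # shape (n, H, W)
    times = np.asarray(integration_times, dtype=np.float64)
    valid = stack < saturation                              # unsaturated measurements
    # index of the longest unsaturated exposure at each pixel
    # (fall back to the shortest exposure if every exposure is saturated)
    idx = np.where(valid.any(axis=0),
                   valid.shape[0] - 1 - np.argmax(valid[::-1], axis=0),
                   0)
    rows, cols = np.indices(idx.shape)
    composite = stack[idx, rows, cols] / times[idx]         # normalise by integration time
    return composite
```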

    Single view reflectance capture using multiplexed scattering and time-of-flight imaging

    This paper introduces the concept of time-of-flight reflectance estimation, and demonstrates a new technique that allows a camera to rapidly acquire reflectance properties of objects from a single viewpoint, over relatively long distances and without encircling equipment. We measure material properties by indirectly illuminating an object with a laser source and observing its reflected light indirectly using a time-of-flight camera. The configuration collectively acquires dense angular but low spatial sampling within a limited solid angle range, all from a single viewpoint. Our ultra-fast imaging approach captures space-time "streak images" that can separate out different bounces of light based on path length. Entanglements arise in the streak images, mixing signals from multiple paths when they have the same total path length. We show how reflectances can be recovered by solving a linear system of equations and assuming parametric material models; fitting to lower-dimensional reflectance models enables us to disentangle the measurements. We demonstrate proof-of-concept results of parametric reflectance models for homogeneous and discretized heterogeneous patches, both in simulation and on experimental hardware. Compared to lengthy or highly calibrated BRDF acquisition techniques, we demonstrate a device that can rapidly, on the order of seconds, capture meaningful reflectance information. We expect hardware advances to improve the portability and speed of this device. National Science Foundation (U.S.) (Award CCF-0644175); National Science Foundation (U.S.) (Award CCF-0811680); National Science Foundation (U.S.) (Award IIS-1011919); Intel Corporation (PhD Fellowship); Alfred P. Sloan Foundation (Research Fellowship).
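
    A minimal sketch of the disentangling idea in this abstract: time-resolved measurements mix contributions from paths of equal total length, so if the per-path geometric weights (the mixing matrix) are known and each patch follows a low-dimensional parametric model, the material coefficients can be recovered from a linear least-squares solve. The mixing matrix, the two-term diffuse-plus-specular basis and all names below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def recover_reflectance(mixing_matrix, measurements):
    """
    mixing_matrix : (n_measurements, n_params) geometric weights, one column per
                    parametric reflectance term (e.g. diffuse and specular lobes of
                    each patch), precomputed from the known path-length geometry.
    measurements  : (n_measurements,) time-binned streak-image intensities.
    """
    params, _, _, _ = np.linalg.lstsq(mixing_matrix, measurements, rcond=None)
    return params

# Example with two hypothetical patches, each modelled by a diffuse and a specular coefficient.
rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(40, 4)))        # hypothetical geometric mixing weights
true = np.array([0.6, 0.1, 0.3, 0.4])       # [diffuse1, specular1, diffuse2, specular2]
y = A @ true + 0.01 * rng.normal(size=40)   # noisy mixed measurements
print(recover_reflectance(A, y))            # recovers values close to `true`
```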

    Colour constancy in simple and complex scenes

    PhD Thesis. Colour constancy is defined as the ability to perceive the surface colours of objects within scenes as approximately constant through changes in scene illumination. Colour constancy in real life functions so seamlessly that most people do not realise that the colour of the light emanating from an object can change markedly throughout the day. Constancy measurements made in simple scenes constructed from flat coloured patches do not produce constancy of this high degree. The question that must be asked is: what are the features of everyday scenes that improve constancy? A novel technique is presented for testing colour constancy, and results are presented showing measurements of constancy in simple and complex scenes. More specifically, matching experiments are performed for patches against uniform and multi-patch backgrounds, the latter of which provide colour contrast. Objects created by the addition of shape and 3-D shading information are also matched against backgrounds consisting of matte reflecting patches. In the final set of experiments, observers match detailed depictions of objects - rich in chromatic contrast, shading, mutual illumination and other real-life features - within depictions of real-life scenes. The results show similar performance across the conditions that contain chromatic contrast, although some uncertainty remains as to whether the results are indicative of human colour constancy performance or of sensory matching capabilities. An interesting division exists between patch matches performed against uniform and multi-patch backgrounds, manifested as a shift in CIE xy space. A simple model of early chromatic processes is proposed and examined in the context of the results.