Estimating varying illuminant colours in images
Colour constancy is the ability to perceive colours independently of varying illumination colour. A human can tell that a white t-shirt is indeed white, even under blue or red illumination, although these illuminant colours actually make the reflected colour of the t-shirt bluish or reddish. Humans can, to a good extent, see colours as constant. Getting a computer to achieve the same goal with a high level of accuracy has proven problematic, particularly if we want to use colour as a main cue in object recognition: if we trained a system on object colours under one illuminant and then tried to recognise the objects under another illuminant, the system would likely fail. Early colour constancy algorithms assumed that an image contains a single uniform illuminant; they would then attempt to estimate the colour of the illuminant and apply a single correction to the entire image.
It is not hard to imagine a scenario where a scene is lit by more than one illuminant. In an outdoor scene on a typical summer's day, we would see some objects brightly lit by sunlight and others in shadow. The ambient light in shadow is known to differ in colour from direct sunlight (bluish and yellowish respectively), so there are at least two illuminant colours to be recovered in this scene. This thesis focuses on the harder case of recovering the illuminant colours when more than one is present in a scene.
Early work on this subject made the empirical observation that illuminant colours are actually very predictable compared to surface colours. Real-world illuminants
tend not to be greens or purples, but rather blues, yellows and reds. We can think of an illuminant mapping as the function which takes a scene from some unknown
illuminant to a known illuminant. We model this mapping as a simple multiplication of the Red, Green and Blue channels of a pixel. It turns out that the set of realistic
mappings approximately lies on a line segment in chromaticity space. We propose an algorithm that uses this knowledge and only requires two pixels of the same surface
under two illuminants as input. We can then recover an estimate for the surface reflectance colour, and subsequently the two illuminants.
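The diagonal illuminant model described above can be sketched in a few lines. This is an illustrative toy example, not the thesis algorithm: the reflectance and illuminant values below are invented, and the full method additionally uses the chromaticity line-segment constraint to pin down absolute illuminant colours.

```python
import numpy as np

# Diagonal (von Kries) model: a pixel is the per-channel product of
# surface reflectance and illuminant colour. All values are invented
# for illustration only.
surface = np.array([0.6, 0.5, 0.4])   # unknown reflectance
illum_a = np.array([1.0, 0.9, 0.6])   # yellowish illuminant
illum_b = np.array([0.6, 0.8, 1.0])   # bluish illuminant

pixel_a = illum_a * surface           # same surface seen under each light
pixel_b = illum_b * surface

# The per-channel ratio of the two observations cancels the surface
# term, leaving the diagonal mapping between the two illuminants:
mapping = pixel_a / pixel_b
print(mapping)                        # equals illum_a / illum_b
```

Because the surface term cancels, the two pixels determine the mapping between the illuminants; intersecting that mapping with the known locus of realistic illuminants then yields estimates of each illuminant and the reflectance.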
Additionally in this thesis, we propose a more robust algorithm that can use varying surface reflectance data in a scene. One of the most successful colour constancy algorithms, known as Gamut Mapping, was developed by Forsyth (1990). He argued that the illuminant colour of a scene naturally constrains the surface colours that it is possible to perceive: we could not perceive a very chromatic red under a deep blue illuminant. We introduce our multiple-illuminant constraint in a Gamut Mapping context and are able to further improve its performance.
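Forsyth's constraint can be caricatured per channel: a candidate diagonal correction is feasible only if it maps every observed value into the canonical gamut (the values observable under a known reference illuminant). The interval arithmetic below is an assumed one-dimensional simplification for illustration, not the thesis algorithm, and the gamut bounds are invented.

```python
import numpy as np

def feasible_scalings(observed, canon_lo=0.05, canon_hi=1.0):
    """Set of per-channel scalings m such that m * p lies in the
    canonical gamut [canon_lo, canon_hi] for every observed value p.
    The set is an intersection of intervals, hence itself an interval."""
    observed = np.asarray(observed, dtype=float)
    lo = np.max(canon_lo / observed)   # every pixel must reach canon_lo
    hi = np.min(canon_hi / observed)   # and must not exceed canon_hi
    return (lo, hi) if lo <= hi else None  # None: no feasible map

# Red-channel values of a scene under a dim light (invented data):
print(feasible_scalings([0.2, 0.5, 0.8]))
```

The more varied the surface reflectances in the scene, the tighter the surviving interval of illuminant corrections, which is why the method benefits from varying reflectance data.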
The final piece of work proposes a method for detecting shadow edges, so that we can automatically recover estimates for the illuminant colours in and out of shadow. We also formulate our illuminant estimation algorithm as a voting scheme that probabilistically chooses an illuminant estimate on each side of the shadow edge. We test the performance of all our algorithms experimentally on well-known datasets, as well as on our newly proposed shadow datasets.
Colour constancy using von Kries transformations: colour constancy "goes to the Lab"
Colour constancy algorithms aim at correcting colour towards a correct perception within
scenes. To achieve this goal they estimate a white point (the illuminant's colour) and correct
the scene for its influence. In contrast, colour management applies colour transformations
to input images according to a pre-established input profile (ICC profile) for the given
constellation of input device (camera) and conditions (illumination situation). The latter is
a much more analytic approach (it is not based on an estimation), grounded in solid colour
science and current industry best practice, but it is rather inflexible towards cases with
altered conditions or capturing devices. The idea outlined in this paper is to take up the
idea of working in visually linearised, device-independent CIE colour spaces as used in
colour management, and to apply them in the field of colour constancy. For this purpose
two of the best-known colour constancy algorithms, White Patch Retinex and the Grey
World Assumption, have been ported to also work on colours in the CIE LAB colour space.
Barnard's popular benchmarking set of imagery was corrected with the original
implementations as a reference and with the modified algorithms. The results appeared
promising, but they also revealed strengths and weaknesses.
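For reference, the two algorithms named above are simple statistics over the image. The sketch below shows their standard RGB formulations only; the paper's contribution, porting them to CIE LAB, is not reproduced here, and the test image is synthetic.

```python
import numpy as np

def grey_world(img):
    # Assume the average scene reflectance is achromatic: the channel
    # means then estimate the illuminant colour.
    return img.mean(axis=(0, 1))

def white_patch(img):
    # Assume the brightest response in each channel comes from a
    # (near-)white patch, so the channel maxima estimate the illuminant.
    return img.max(axis=(0, 1))

rng = np.random.default_rng(0)
img = rng.random((4, 4, 3)) * np.array([1.0, 0.8, 0.6])  # yellowish cast
print(grey_world(img), white_patch(img))
```

Dividing each channel by the estimate (a von Kries correction) then discounts the cast; the LAB variants apply analogous statistics after converting pixels into the perceptually linearised space.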
Object knowledge modulates colour appearance
We investigated the memory colour effect for colour-diagnostic artificial objects. Since knowledge about these objects and their colours has been learned in everyday life, these stimuli allow the investigation of the influence of acquired object knowledge on colour appearance. These investigations are relevant for questions about how object and colour information interact in high-level vision, as well as for research on the influence of learning and experience on perception in general. In order to identify suitable artificial objects, we developed a reaction-time paradigm that measures (subjective) colour diagnosticity. In the main experiment, participants adjusted sixteen such objects to their typical colour as well as to grey. If the achromatic object appears in its typical colour, then participants should adjust it towards the opponent colour in order to subjectively perceive it as grey. We found that knowledge about the typical colour influences the colour appearance of artificial objects. This effect was particularly strong along the daylight axis.
Extending Minkowski norm illuminant estimation
The ability to obtain colour images invariant to changes of illumination is called colour
constancy. An algorithm for colour constancy takes sensor responses - digital images
- as input, estimates the ambient light and returns a corrected image in which the illuminant
influence over the colours has been removed. In this thesis we investigate the
step of illuminant estimation for colour constancy and aim to extend the state of the art
in this field.
We first revisit the Minkowski family norm framework for illuminant estimation because,
of all the simple statistical approaches, it is the most general formulation and, crucially,
delivers the best results. This thesis makes four technical contributions. First, we
reformulate the Minkowski approach to provide better estimation when a constraint on
illumination is employed. Second, we show how the method can be implemented to run
much faster (by orders of magnitude) than previous algorithms. Third, we show how a
simple edge-based variant delivers improved estimation compared with the state of the
art across many datasets. In contradistinction to the prior state of the art, our definition
of edges is fixed (a simple combination of first and second derivatives), i.e. we do not
tune our algorithm to particular image datasets. This performance is further improved by
incorporating a gamut constraint on surface colour, our fourth contribution.
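The Minkowski family norm generalises the simple statistical estimators into a single formula: the per-channel illuminant estimate is the p-norm mean of the pixel values. The sketch below shows only this baseline family (the "shades of grey" formulation); the thesis's reformulation, speed-up, edge-based variant, and gamut constraint are not reproduced, and the test image is synthetic.

```python
import numpy as np

def minkowski_estimate(img, p=6):
    """Per-channel Minkowski p-norm mean of pixel values.
    p = 1 recovers Grey World (channel means); as p grows the estimate
    approaches White Patch (channel maxima)."""
    return np.mean(img ** p, axis=(0, 1)) ** (1.0 / p)

rng = np.random.default_rng(1)
img = rng.random((8, 8, 3))
e1 = minkowski_estimate(img, p=1)    # channel means (Grey World)
e64 = minkowski_estimate(img, p=64)  # close to channel maxima
print(e1, e64)
```

The single parameter p interpolates between the two classic assumptions, which is why the framework is described as the most general of the simple statistical approaches.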
The thesis finishes by considering our approach in the context of a recent OSA
competition run to benchmark computational algorithms operating on physiologically
relevant cone-based input data. Here we find that constrained Minkowski norms
operating on spectrally sharpened cone sensors (linear combinations of the cones that
behave more like camera sensors) support competition-leading illuminant estimation.
Computer mediated colour fidelity and communication
Developments in technology have meant that computer-controlled
imaging devices are becoming more powerful and more
affordable. Despite their increasing prevalence, computer-aided
design and desktop publishing software has failed to keep pace, leading
to disappointing colour reproduction across different devices.
Although there has been a recent drive to incorporate colour management
functionality into modern computer systems, in general this
is limited in scope and fails to properly consider the way in which
colours are perceived. Furthermore, differences in viewing conditions
or representation severely impede the communication of colour
between groups of users.
The approach proposed here is to provide WYSIWYG colour
across a range of imaging devices through a combination of existing
device characterisation and colour appearance modelling techniques.
In addition, to further facilitate colour communication, various common
colour notation systems are defined by a series of mathematical
mappings. This enables both the implementation of computer-based
colour atlases (which have a number of practical advantages over
physical specifiers) and also the interrelation of colour represented in
hitherto incompatible notations.
Together with the proposed solution, details are given of a computer
system which has been implemented. The system was used by
textile designers for a real task. Prior to undertaking this work,
designers were interviewed in order to ascertain where colour played
an important role in their work and where it was found to be a problem.
A summary of the findings of these interviews, together with a
survey of existing approaches to the problems of colour fidelity and
communication in colour computer systems, is also given. As background
to this work, the topics of colour science and colour imaging
are introduced.
Investigations into colour constancy by bridging human and computer colour vision
PhD Thesis
The mechanism of colour constancy within the human visual system has long been of great interest to researchers within the psychophysical and image processing communities. With the maturation of colour imaging techniques for both scientific and artistic applications, the importance of colour capture accuracy has consistently increased. Colour offers a great deal more information for the viewer than greyscale imagery, ranging from object detection to food ripeness and health estimation, amongst many others.
However, these tasks rely upon the colour constancy process to discount scene illumination. Psychophysical studies have attempted to uncover the inner workings of this mechanism, which would allow it to be reproduced algorithmically and so enable the development of devices that can eventually capture and perceive colour in the same manner as a human viewer.
These two communities have approached this challenge from opposite ends, and as such have developed very different and largely unconnected approaches. This thesis investigates the development of studies and algorithms that bridge the two communities. Findings from psychophysical studies are first used as inspiration to improve an existing image enhancement algorithm, and the results are compared to state-of-the-art methods. Further knowledge of, and inspiration from, the human visual system is then used to develop a novel colour constancy approach, which attempts to mimic the mechanism of colour constancy by investigating the use of a physiological colour space and specific scene contents to estimate illumination. The performance of the colour constancy mechanism within the visual system itself is then also investigated, tested across different scenes and across commonly and uncommonly encountered illuminations.
The importance of being able to bridge these two communities with a successful colour constancy method is then further illustrated with a case study investigating human visual perception of the agricultural produce of tomatoes.
EPSRC DTA:
Institute of Neuroscience, Newcastle University