Color Constancy Using CNNs
In this work we describe a Convolutional Neural Network (CNN) to accurately
predict the scene illumination. Taking image patches as input, the CNN works in
the spatial domain without using hand-crafted features that are employed by
most previous methods. The network consists of one convolutional layer with max
pooling, one fully connected layer and three output nodes. Within the network
structure, feature learning and regression are integrated into one optimization
process, which leads to a more effective model for estimating scene
illumination. This approach achieves state-of-the-art performance on a standard
dataset of RAW images. Preliminary experiments on images with spatially varying
illumination demonstrate the stability of the local illuminant estimation
ability of our CNN.
Comment: Accepted at DeepVision: Deep Learning in Computer Vision 2015 (CVPR 2015 workshop)
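The architecture above (one convolutional layer with max pooling, one fully connected layer, three output nodes) can be sketched in a few lines of numpy. This is a hypothetical toy forward pass, not the authors' implementation: the patch size (8x8), kernel count and size (four 3x3x3 kernels), and random weights are all assumptions made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Single-channel 'valid' 2D cross-correlation."""
    H, W = img.shape
    kH, kW = kernel.shape
    out = np.empty((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kH, j:j + kW] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling over size x size blocks."""
    H2, W2 = x.shape[0] // size, x.shape[1] // size
    return x[:H2 * size, :W2 * size].reshape(H2, size, W2, size).max(axis=(1, 3))

def forward(patch, kernels, W_fc, b_fc):
    """One conv layer (summed over RGB), ReLU, 2x2 max pooling,
    one fully connected layer, three outputs (the RGB illuminant estimate)."""
    feats = []
    for k in kernels:  # each kernel spans all 3 colour channels
        resp = sum(conv2d_valid(patch[:, :, c], k[:, :, c]) for c in range(3))
        feats.append(max_pool(np.maximum(resp, 0)))
    x = np.concatenate([f.ravel() for f in feats])
    return W_fc @ x + b_fc

patch = rng.random((8, 8, 3))                        # toy 8x8 RGB patch
kernels = rng.standard_normal((4, 3, 3, 3)) * 0.1    # 4 assumed 3x3x3 kernels
# per kernel: 6x6 conv output -> 3x3 pooled -> 9 features; 4 kernels -> 36
W_fc = rng.standard_normal((3, 36)) * 0.1
b_fc = np.zeros(3)
est = forward(patch, kernels, W_fc, b_fc)            # 3-vector illuminant estimate
```

Because feature extraction and the regression head sit in one differentiable pipeline, both would be trained jointly against ground-truth illuminants, which is the integration the abstract describes.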
Color Constancy Convolutional Autoencoder
In this paper, we study the importance of pre-training for the generalization
capability in the color constancy problem. We propose two novel approaches
based on convolutional autoencoders: an unsupervised pre-training algorithm
using a fine-tuned encoder and a semi-supervised pre-training algorithm using a
novel composite-loss function. This lets us address the data scarcity
problem and achieve results competitive with the state of the art while
requiring far fewer parameters on the ColorChecker RECommended dataset. We
further study the over-fitting phenomenon on the recently introduced version of
the INTEL-TUT Dataset for Camera Invariant Color Constancy Research, which
contains both field and non-field scenes acquired by three different camera models.
Comment: 6 pages, 1 figure, 3 tables
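The abstract does not spell out the composite loss, but a natural reading is a weighted sum of an unsupervised reconstruction term and a supervised illuminant-regression term. The sketch below is hypothetical: the mean-squared-error form and the `alpha` weighting are assumptions, not the paper's definition.

```python
import numpy as np

def composite_loss(x, x_recon, y_true, y_pred, alpha=0.5):
    """Hypothetical composite loss for semi-supervised pre-training:
    alpha weights the autoencoder reconstruction error (unsupervised)
    against the illuminant regression error (supervised)."""
    recon = np.mean((x - x_recon) ** 2)        # reconstruction term
    regress = np.mean((y_true - y_pred) ** 2)  # illuminant term
    return alpha * recon + (1.0 - alpha) * regress

x = np.ones((4, 4, 3))
x_recon = x * 0.9                              # imperfect reconstruction
y = np.array([1.0, 0.8, 0.6])                  # ground-truth illuminant
y_hat = np.array([0.9, 0.8, 0.7])              # network estimate
loss = composite_loss(x, x_recon, y, y_hat)
```

Labelled illuminant data is scarce, so letting unlabelled images contribute through the reconstruction term is the lever such a loss offers.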
Convolutional Color Constancy
Color constancy is the problem of inferring the color of the light that
illuminated a scene, usually so that the illumination color can be removed.
Because this problem is underconstrained, it is often solved by modeling the
statistical regularities of the colors of natural objects and illumination. In
contrast, in this paper we reformulate the problem of color constancy as a 2D
spatial localization task in a log-chrominance space, thereby allowing us to
apply techniques from object detection and structured prediction to the color
constancy problem. By directly learning how to discriminate between correctly
white-balanced images and poorly white-balanced images, our model is able to
improve performance on standard benchmarks by nearly 40%.
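The key property behind the reformulation is that log-chrominance turns an illuminant tint into a pure 2D translation of the pixel histogram. The sketch below uses one common parameterisation, (u, v) = (log(g/r), log(g/b)); the specific coordinates and the example illuminant gains are assumptions for illustration.

```python
import numpy as np

def log_chroma(rgb):
    """Map RGB pixels to 2D log-chrominance coordinates
    (u, v) = (log(g/r), log(g/b)) -- one common parameterisation."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([np.log(g / r), np.log(g / b)], axis=-1)

pixels = np.array([[0.2, 0.4, 0.1],
                   [0.5, 0.5, 0.25]])
illum = np.array([2.0, 1.0, 0.5])        # hypothetical illuminant gains

uv = log_chroma(pixels)
uv_tinted = log_chroma(pixels * illum)   # tint = per-channel multiplication
shift = uv_tinted - uv                   # identical shift for every pixel
```

Since every pixel shifts by the same (u, v) offset, estimating the illuminant becomes locating that offset, which is exactly a 2D localization task amenable to object-detection machinery.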
Estimating varying illuminant colours in images
Colour constancy is the ability to perceive colours independently of varying illumination colour. A human can tell that a white t-shirt is indeed white, even under blue or red illumination, although these illuminant colours actually make the reflected colour of the t-shirt bluish or reddish. Humans can, to a good extent, see colours as constant. Getting a computer to achieve the same goal with a high level of accuracy has proven problematic, particularly if we want to use colour as a main cue in object recognition: if we trained a system on object colours under one illuminant and then tried to recognise the objects under another, the system would likely fail. Early colour constancy algorithms assumed that an image contains a single uniform illuminant. They would then attempt to estimate the colour of that illuminant and apply a single correction to the entire image.
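That single-illuminant correction is typically a diagonal (von Kries) transform: divide each channel by the estimated illuminant colour so that a white surface maps back to neutral. A minimal sketch, with made-up illuminant and pixel values:

```python
def correct_image(pixels, illuminant):
    """Apply a single diagonal (von Kries) correction: divide each channel
    by the estimated illuminant colour so a white surface becomes neutral."""
    return [[p / l for p, l in zip(px, illuminant)] for px in pixels]

illum = [1.0, 0.8, 0.5]                 # hypothetical reddish-yellow light
image = [[1.0, 0.8, 0.5],               # white patch, tinted by the light
         [0.5, 0.4, 0.25]]              # mid-grey patch, same tint
balanced = correct_image(image, illum)  # white patch maps to [1, 1, 1]
```

The limitation the thesis targets is visible here: one divisor is applied everywhere, which cannot be right when different parts of the scene are lit by different colours.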
It is not hard to imagine a scene lit by more than one illuminant. In an outdoor scene on a typical summer's day, we would see objects brightly lit by sunlight and others in shadow. The ambient light in shadow is known to be a different colour from direct sunlight (bluish and yellowish respectively), so there are at least two illuminant colours to recover in such a scene. This thesis focuses on this harder case: recovering the illuminant colours when more than one is present in a scene.
Early work on this subject made the empirical observation that illuminant colours are actually very predictable compared to surface colours. Real-world illuminants
tend not to be greens or purples, but rather blues, yellows and reds. We can think of an illuminant mapping as the function which takes a scene from some unknown
illuminant to a known illuminant. We model this mapping as a simple multiplication of the Red, Green and Blue channels of a pixel. It turns out that the set of realistic
mappings approximately lies on a line segment in chromaticity space. We propose an algorithm that uses this knowledge and only requires two pixels of the same surface
under two illuminants as input. We can then recover an estimate for the surface reflectance colour, and subsequently the two illuminants.
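Under the channel-wise multiplication model, the core of the two-pixel idea can be made concrete: observing the same surface under two illuminants and taking the per-channel ratio cancels the unknown reflectance, leaving the relative illuminant map. The surface and illuminant values below are invented for illustration; the step from this relative map to absolute illuminant estimates relies on the line-segment prior described above.

```python
def relative_illuminant_map(pixel_a, pixel_b):
    """Per-channel ratio of two observations of the SAME surface under two
    illuminants. With pixel = illuminant * reflectance (channel-wise), the
    unknown reflectance cancels: a/b = L_a/L_b."""
    return [a / b for a, b in zip(pixel_a, pixel_b)]

surface = [0.6, 0.3, 0.2]                      # hypothetical reflectance
L_sun   = [1.0, 0.9, 0.6]                      # yellowish direct light
L_shade = [0.5, 0.6, 0.9]                      # bluish shadow light

p_sun   = [s * l for s, l in zip(surface, L_sun)]
p_shade = [s * l for s, l in zip(surface, L_shade)]
ratio = relative_illuminant_map(p_sun, p_shade)  # equals L_sun/L_shade channel-wise
```

Intersecting this relative map with the constraint that plausible illuminants lie near a line segment in chromaticity space is what pins down the two illuminant colours and, from them, the surface reflectance.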
Additionally in this thesis, we propose a more robust algorithm that can use varying surface reflectance data in a scene. One of the most successful colour constancy algorithms, known as Gamut Mapping, was developed by Forsyth (1990). He argued that the illuminant colour of a scene naturally constrains the surface colours that are possible to perceive: we could not perceive a very chromatic red under a deep blue illuminant. We introduce our multiple-illuminant constraint in a Gamut Mapping context and are able to further improve its performance.
The final piece of work proposes a method for detecting shadow edges, so that we can automatically recover estimates for the illuminant colours inside and outside shadow. We also formulate our illuminant estimation algorithm as a voting scheme that probabilistically chooses an illuminant estimate on each side of the shadow edge.
We test the performance of all our algorithms experimentally on well-known datasets, as well as on our newly proposed shadow datasets.
Colour Constancy: Biologically-inspired Contrast Variant Pooling Mechanism
Pooling is a ubiquitous operation in image processing algorithms that allows
for higher-level processes to collect relevant low-level features from a region
of interest. Currently, max-pooling is one of the most commonly used operators
in the computational literature. However, it can lack robustness to outliers
due to the fact that it relies merely on the peak of a function. Pooling
mechanisms are also present in the primate visual cortex where neurons of
higher cortical areas pool signals from lower ones. The receptive fields of
these neurons have been shown to vary according to the contrast by aggregating
signals over a larger region in the presence of low contrast stimuli. We
hypothesise that this contrast-variant-pooling mechanism can address some of
the shortcomings of max-pooling. We modelled this contrast variation through a
histogram clipping in which the percentage of pooled signal is inversely
proportional to the local contrast of an image. We tested our hypothesis by
applying it to the phenomenon of colour constancy where a number of popular
algorithms utilise a max-pooling step (e.g. White-Patch, Grey-Edge and
Double-Opponency). For each of these methods, we investigated the consequences
of replacing their original max-pooling by the proposed
contrast-variant-pooling. Our experiments on three colour constancy benchmark
datasets suggest that previous results can be significantly improved by
adopting a contrast-variant-pooling mechanism.
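The pooling rule described above can be sketched directly: instead of taking the single maximum, pool the mean of the top-p fraction of signals, with p shrinking as local contrast grows. The bounds on p and the linear contrast mapping below are assumptions of this sketch, not values from the paper.

```python
def contrast_variant_pool(values, contrast, p_min=0.01, p_max=0.5):
    """Pool the mean of the top-p fraction of values, where p is inversely
    related to local contrast: high contrast behaves like max-pooling,
    low contrast aggregates over a larger set of signals. The p_min/p_max
    bounds and the linear mapping are assumptions of this sketch."""
    contrast = min(max(contrast, 0.0), 1.0)        # clamp to [0, 1]
    p = p_max - (p_max - p_min) * contrast         # assumed linear schedule
    k = max(1, int(round(p * len(values))))        # number of pooled signals
    top = sorted(values, reverse=True)[:k]
    return sum(top) / len(top)

signals = [0.1, 0.2, 0.3, 0.9, 0.95, 1.0]
hi = contrast_variant_pool(signals, contrast=1.0)  # reduces to max-pooling
lo = contrast_variant_pool(signals, contrast=0.0)  # averages the top half
```

Averaging over several large responses rather than trusting a single peak is what gives the operator its robustness to outliers, while still converging to plain max-pooling where contrast is high.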