A Hybrid Strategy for Illuminant Estimation Targeting Hard Images
Illumination estimation is a well-studied topic in computer vision. Early work reported performance on benchmark datasets using simple statistical aggregates such as mean or median error. Recently, it has become standard to report a wider range of statistics, e.g. top 25%, mean, and bottom 25% performance. While these additional statistics are more informative, their relationship across different methods is unclear. In this paper, we analyse the results of a number of methods to see whether there exist ‘hard’ images that are challenging for multiple methods. Our findings indicate that certain images are difficult for fast statistical-based methods but can be handled by more complex learning-based approaches, at a significant cost in time complexity. This has led us to design a hybrid method that first classifies an image as ‘hard’ or ‘easy’ and then invokes the slower method only when needed, thus balancing time complexity against performance. In addition, we have identified dataset images that almost no method is able to process. We argue, however, that these images have problems with how their ground truth is established, and recommend their removal from future performance evaluations.
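The hybrid dispatch strategy described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the ‘hard image’ test, its threshold, and the stand-in estimators are all assumptions chosen only to show the control flow.

```python
import numpy as np

def grey_world(img):
    """Fast statistical estimate: mean RGB as the illuminant (Grey-World)."""
    est = img.reshape(-1, 3).mean(axis=0)
    return est / np.linalg.norm(est)

def slow_learned_estimate(img):
    """Placeholder for a slower learning-based method (hypothetical).
    Here a per-channel max (White-Patch) stands in for a trained model."""
    est = img.reshape(-1, 3).max(axis=0)
    return est / np.linalg.norm(est)

def looks_hard(img, chroma_thresh=0.05):
    """Hypothetical 'hard image' classifier: low colour variance suggests
    the assumptions behind simple statistical methods may fail."""
    return img.reshape(-1, 3).std(axis=0).mean() < chroma_thresh

def hybrid_estimate(img):
    """Use the fast method unless the image is classified as hard."""
    return slow_learned_estimate(img) if looks_hard(img) else grey_world(img)
```

The balance the paper describes comes from the fact that the cheap classifier runs on every image, while the expensive estimator runs only on the subset flagged as hard.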
Colour Constancy: Biologically-inspired Contrast Variant Pooling Mechanism
Pooling is a ubiquitous operation in image processing algorithms that allows
for higher-level processes to collect relevant low-level features from a region
of interest. Currently, max-pooling is one of the most commonly used operators
in the computational literature. However, it can lack robustness to outliers
due to the fact that it relies merely on the peak of a function. Pooling
mechanisms are also present in the primate visual cortex where neurons of
higher cortical areas pool signals from lower ones. The receptive fields of
these neurons have been shown to vary according to the contrast by aggregating
signals over a larger region in the presence of low contrast stimuli. We
hypothesise that this contrast-variant-pooling mechanism can address some of
the shortcomings of max-pooling. We modelled this contrast variation through a
histogram clipping in which the percentage of pooled signal is inversely
proportional to the local contrast of an image. We tested our hypothesis by
applying it to the phenomenon of colour constancy where a number of popular
algorithms utilise a max-pooling step (e.g. White-Patch, Grey-Edge and
Double-Opponency). For each of these methods, we investigated the consequences
of replacing their original max-pooling with the proposed
contrast-variant pooling. Our experiments on three colour constancy benchmark
datasets suggest that their results can be significantly improved by adopting a
contrast-variant pooling mechanism.
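The clipping idea in this abstract, pooling a fraction of the signal that grows as local contrast drops, can be sketched as follows. This is a hedged illustration of the mechanism only; the fraction bounds and the linear contrast-to-fraction mapping are assumptions, not the paper's parameters.

```python
import numpy as np

def contrast_variant_pool(values, local_contrast,
                          min_frac=0.01, max_frac=0.25):
    """Pool the top fraction of `values`, with the fraction inversely
    related to `local_contrast` (assumed normalised to [0, 1]).

    At high contrast this approaches max-pooling (top ~1% of values);
    at low contrast it averages over a larger set, which is what gives
    the method its robustness to outliers.
    """
    frac = max_frac - (max_frac - min_frac) * np.clip(local_contrast, 0, 1)
    v = np.sort(np.ravel(values))[::-1]        # descending order
    k = max(1, int(round(frac * v.size)))      # number of values to pool
    return v[:k].mean()
```

For example, over the values 0..99, full contrast pools only the single largest value (pure max-pooling), while zero contrast averages the top 25 values.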
Color Constancy Using CNNs
In this work we describe a Convolutional Neural Network (CNN) to accurately
predict the scene illumination. Taking image patches as input, the CNN works in
the spatial domain without using hand-crafted features that are employed by
most previous methods. The network consists of one convolutional layer with max
pooling, one fully connected layer and three output nodes. Within the network
structure, feature learning and regression are integrated into one optimization
process, which leads to a more effective model for estimating scene
illumination. This approach achieves state-of-the-art performance on a standard
dataset of RAW images. Preliminary experiments on images with spatially varying
illumination demonstrate the stability of the local illuminant estimation
ability of our CNN.
Comment: Accepted at DeepVision: Deep Learning in Computer Vision 2015 (CVPR 2015 workshop).
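The architecture described in this abstract, one convolutional layer with max pooling, one fully connected layer, and three output nodes, can be sketched as a forward pass. The layer sizes, weights, and the ReLU activation below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    """'Valid' 2D convolution. x: (C, H, W), w: (F, C, k, k)
    -> (F, H-k+1, W-k+1)."""
    F, C, k, _ = w.shape
    H, W = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((F, H, W))
    for f in range(F):
        for i in range(H):
            for j in range(W):
                out[f, i, j] = np.sum(x[:, i:i+k, j:j+k] * w[f])
    return out

def max_pool(x, s):
    """Non-overlapping s x s max pooling per feature map."""
    F, H, W = x.shape
    return x[:, :H//s*s, :W//s*s].reshape(F, H//s, s, W//s, s).max(axis=(2, 4))

def cnn_forward(patch, w_conv, w_fc):
    """Conv layer + max pooling + one fully connected layer producing
    three outputs (the RGB illuminant estimate for this patch)."""
    h = np.maximum(conv2d(patch, w_conv), 0)   # ReLU (assumed)
    h = max_pool(h, 2).ravel()
    return w_fc @ h

# Illustrative sizes: a 3x8x8 RGB patch, 4 filters of size 3x3.
patch = rng.random((3, 8, 8))
w_conv = rng.standard_normal((4, 3, 3, 3)) * 0.1
w_fc = rng.standard_normal((3, 4 * 3 * 3)) * 0.1
```

Because the network maps each patch to its own illuminant estimate, per-patch predictions can be aggregated over an image, which is what the abstract's remark on spatially varying illumination relies on.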
Color Homography Color Correction
Homographies -- a mathematical formalism for relating image points across different camera viewpoints -- are foundational to geometric methods in computer vision and are used in geometric camera calibration, image registration, stereo vision, and other tasks. In this paper, we show the surprising result that colors across a change in viewing condition (changing light color, shading, and camera) are also related by a homography. We propose a new color correction method based on color homography. Experiments demonstrate that solving the color homography problem leads to more accurate calibration.
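A simplified version of the correction this abstract describes is a least-squares 3x3 linear map between corresponding colors under two viewing conditions. This is a hedged sketch only: the paper's color homography additionally accounts for a per-pixel scale (shading), typically via alternating least squares, which is omitted here.

```python
import numpy as np

def fit_color_map(src, dst):
    """Least-squares 3x3 map M with dst ≈ src @ M.T.
    src, dst: (N, 3) arrays of corresponding RGB values."""
    X, *_ = np.linalg.lstsq(src, dst, rcond=None)
    return X.T

# Synthetic check: recover a known map from noiseless correspondences.
rng = np.random.default_rng(1)
true_M = np.diag([1.2, 0.9, 0.7])   # a von Kries-style diagonal map
src = rng.random((50, 3))
dst = src @ true_M.T
M = fit_color_map(src, dst)
```

With noiseless, full-rank correspondences the fitted map recovers the generating map exactly, which is a useful sanity check before applying such a solver to real calibration data.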
Convolutional Color Constancy
Color constancy is the problem of inferring the color of the light that
illuminated a scene, usually so that the illumination color can be removed.
Because this problem is underconstrained, it is often solved by modeling the
statistical regularities of the colors of natural objects and illumination. In
contrast, in this paper we reformulate the problem of color constancy as a 2D
spatial localization task in a log-chrominance space, thereby allowing us to
apply techniques from object detection and structured prediction to the color
constancy problem. By directly learning how to discriminate between correctly
white-balanced images and poorly white-balanced images, our model is able to
improve performance on standard benchmarks by nearly 40%.
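The log-chrominance space underlying this reformulation can be sketched directly. The key property, which the 2D localization view depends on, is that per-channel illuminant gains become a pure 2D translation in this space. The coordinate convention below (u = log(g/r), v = log(g/b)) and the epsilon guard are assumptions for illustration.

```python
import numpy as np

def log_chrominance(rgb, eps=1e-8):
    """Map RGB values to 2D log-chrominance coordinates
    u = log(g/r), v = log(g/b). Scaling the illuminant (per-channel
    gains) shifts every point by the same 2D offset, turning illuminant
    estimation into localizing that offset."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    u = np.log((g + eps) / (r + eps))
    v = np.log((g + eps) / (b + eps))
    return np.stack([u, v], axis=-1)

# A per-channel gain (illuminant change) shifts every point equally:
rgb = np.random.rand(100, 3) + 0.1
gain = np.array([1.5, 1.0, 0.8])
shift = log_chrominance(rgb * gain) - log_chrominance(rgb)
```

Since the shift is (nearly) identical for every pixel, a histogram of (u, v) values slides rigidly under an illuminant change, which is what allows object-detection-style techniques to locate the illuminant.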
Immediate colour constancy
Colour constancy is traditionally interpreted as the stable appearance of the colour of a surface despite changes in the spectral composition of the illumination. When colour constancy has been assessed quantitatively, however, by observers making matches between surfaces illuminated by different sources, its completeness has been found to be poor. An alternative operational approach to colour constancy may be taken which concentrates instead on detecting the underlying chromatic relationship between the parts of a surface under changes in the illuminant. Experimentally, the observer's task was to determine whether a change in the appearance of a surface was due to a change in its reflecting properties or to a change in the incident light. Observers viewed computer simulations of a row of three Mondrian patterns of Munsell chips. The centre pattern was a reference pattern illuminated by a simulated, spatially uniform daylight; one of the outer patterns was identical but illuminated by a different daylight; and the other outer pattern was equivalent but not obtainable from the centre pattern by such a change in illuminant. Different patterns and different shifts in daylight were generated in each experimental trial. The task of the observer was to identify which of the outer patterns was the result of an illuminant change. Observers made reliable discriminations of the patterns with displays of durations from several seconds to less than 200 ms, and, for one observer, with displays of 1 ms. By these measures, human observers appear capable of colour constancy that is extremely rapid, and probably preattentive in origin.