
    Color Constancy Using CNNs

    In this work we describe a Convolutional Neural Network (CNN) that accurately predicts the scene illumination. Taking image patches as input, the CNN works in the spatial domain without using the hand-crafted features employed by most previous methods. The network consists of one convolutional layer with max pooling, one fully connected layer, and three output nodes. Within the network structure, feature learning and regression are integrated into one optimization process, which leads to a more effective model for estimating scene illumination. This approach achieves state-of-the-art performance on a standard dataset of RAW images. Preliminary experiments on images with spatially varying illumination demonstrate the stability of the CNN's local illuminant estimates. Comment: Accepted at DeepVision: Deep Learning in Computer Vision 2015 (CVPR 2015 workshop).
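    The pipeline shape described above (image patch in, one convolutional layer with max pooling, one fully connected layer, three output nodes for the illuminant estimate) can be sketched in NumPy. All layer sizes and the random weights below are illustrative assumptions, not the trained model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernels):
    """Valid 2-D convolution: x is (H, W, C_in), kernels is (k, k, C_in, C_out)."""
    k, _, _, c_out = kernels.shape
    h, w = x.shape[0] - k + 1, x.shape[1] - k + 1
    out = np.empty((h, w, c_out))
    for i in range(h):
        for j in range(w):
            # Contract the (k, k, C_in) patch against the filter bank.
            out[i, j] = np.tensordot(x[i:i + k, j:j + k], kernels, axes=3)
    return out

def max_pool(x, s):
    """Non-overlapping s x s max pooling over the spatial dimensions."""
    h, w, c = x.shape
    return x[:h // s * s, :w // s * s].reshape(h // s, s, w // s, s, c).max(axis=(1, 3))

def estimate_illuminant(patch, conv_w, fc_w, out_w):
    feat = np.maximum(conv2d(patch, conv_w), 0.0)   # conv + ReLU
    feat = max_pool(feat, 5).ravel()                # pooled feature vector
    hidden = np.maximum(feat @ fc_w, 0.0)           # fully connected layer
    return hidden @ out_w                           # three output nodes (R, G, B)

patch = rng.uniform(size=(32, 32, 3))        # one image patch
conv_w = rng.normal(0, 0.1, (8, 8, 3, 4))    # 4 filters (random, untrained)
fc_w = rng.normal(0, 0.1, (100, 16))         # 25x25 conv out -> 5x5x4 = 100 features
out_w = rng.normal(0, 0.1, (16, 3))
est = estimate_illuminant(patch, conv_w, fc_w, out_w)
```

    In the paper, feature learning and the regression to these three outputs are trained jointly; here the weights are random purely to show the data flow.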

    Colour constancy using von Kries transformations: colour constancy "goes to the Lab"

    Colour constancy algorithms aim at correcting colour towards a correct perception within scenes. To achieve this goal they estimate a white point (the illuminant's colour) and correct the scene for its influence. In contrast, colour management performs colour transformations on input images according to a pre-established input profile (ICC profile) for the given constellation of input device (camera) and conditions (illumination situation). The latter case is a much more analytic approach (it is not based on an estimation) and rests on solid colour science and current industry best practices, but it is rather inflexible towards cases with altered conditions or capturing devices. The idea outlined in this paper is to take up the practice of working in visually linearised, device-independent CIE colour spaces as used in colour management, and to apply it in the field of colour constancy. For this purpose, two of the best-known colour constancy algorithms, White Patch Retinex and Grey World Assumption, have been ported to also work on colours in the CIE LAB colour space. Barnard's popular benchmarking set of imagery was corrected with the original implementations as a reference and with the modified algorithms. The results appeared promising, but they also revealed strengths and weaknesses.
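    The two classic algorithms the paper ports to CIE LAB are simple in their original RGB form. A minimal NumPy sketch of Grey World, White Patch Retinex, and the von Kries-style diagonal correction they feed (the LAB-space variants from the paper are not reproduced here):

```python
import numpy as np

def grey_world(img):
    """Grey World: assume the average scene reflectance is achromatic,
    so the per-channel mean estimates the illuminant colour."""
    est = img.reshape(-1, 3).mean(axis=0)
    return est / est.mean()

def white_patch(img, pct=99):
    """White Patch Retinex: assume the brightest responses come from a white
    surface; a high percentile is more robust than the raw per-channel max."""
    est = np.percentile(img.reshape(-1, 3), pct, axis=0)
    return est / est.mean()

def von_kries_correct(img, est):
    """Diagonal (von Kries) transformation: scale each channel by 1/estimate."""
    return img / est

# A neutral random scene under a reddish illuminant:
rng = np.random.default_rng(1)
scene = rng.uniform(0.2, 0.8, size=(64, 64, 3))
cast = scene * np.array([1.4, 1.0, 0.8])   # simulated colour cast
est = grey_world(cast)                     # should recover the reddish bias
corrected = von_kries_correct(cast, est)
```

    After correction the per-channel means coincide again, which is exactly the Grey World assumption being enforced.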

    Deep Reflectance Maps

    Undoing the image formation process and thereby decomposing appearance into its intrinsic properties is a challenging task due to the under-constrained nature of this inverse problem. While significant progress has been made on inferring shape, materials and illumination from images alone, progress in an unconstrained setting is still limited. We propose a convolutional neural architecture to estimate reflectance maps of specular materials in natural lighting conditions. We achieve this in an end-to-end learning formulation that directly predicts a reflectance map from the image itself. We show how to improve estimates by facilitating additional supervision in an indirect scheme that first predicts surface orientation and afterwards predicts the reflectance map by a learning-based sparse data interpolation. In order to analyze performance on this difficult task, we propose a new challenge of Specular MAterials on SHapes with complex IllumiNation (SMASHINg) using both synthetic and real images. Furthermore, we show the application of our method to a range of image-based editing tasks on real images. Comment: project page: http://homes.esat.kuleuven.be/~krematas/DRM
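    The indirect scheme above maps predicted surface orientations to appearance via sparse data interpolation. A crude nearest-neighbour stand-in (an assumption for illustration, not the paper's learned interpolator) makes the idea concrete: each query orientation takes the colour observed at the closest known normal.

```python
import numpy as np

def interpolate_reflectance(obs_normals, obs_colours, query_normals):
    """Nearest-neighbour sparse interpolation over surface orientations:
    closeness is cosine similarity between unit normals."""
    n = obs_normals / np.linalg.norm(obs_normals, axis=1, keepdims=True)
    q = query_normals / np.linalg.norm(query_normals, axis=1, keepdims=True)
    return obs_colours[np.argmax(q @ n.T, axis=1)]

# Two observed orientations with known appearance:
normals = np.array([[0.0, 0.0, 1.0],    # facing the camera
                    [1.0, 0.0, 0.0]])   # facing right
colours = np.array([[0.9, 0.2, 0.2],
                    [0.2, 0.9, 0.2]])
query = np.array([[0.1, 0.0, 0.95]])    # close to camera-facing
filled = interpolate_reflectance(normals, colours, query)
```

    The paper replaces this hand-crafted rule with a learned interpolator; the data flow (sparse normal-to-colour samples in, dense reflectance map out) is the same.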

    A Hybrid Strategy for Illuminant Estimation Targeting Hard Images

    Illumination estimation is a well-studied topic in computer vision. Early work reported performance on benchmark datasets using simple statistical aggregates such as mean or median error. Recently, it has become accepted to report a wider range of statistics, e.g. top 25%, mean, and bottom 25% performance. While these additional statistics are more informative, their relationship across different methods is unclear. In this paper, we analyse the results of a number of methods to see if there exist 'hard' images that are challenging for multiple methods. Our findings indicate that there are certain images that are difficult for fast statistical-based methods, but that can be handled by more complex learning-based approaches at a significant cost in time complexity. This has led us to design a hybrid method that first classifies an image as 'hard' or 'easy' and then uses the slower method only when needed, thus balancing time complexity against performance. In addition, we have identified dataset images that almost no method is able to process. We argue, however, that these images have problems with how the ground truth is established and recommend their removal from future performance evaluations.
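    The hybrid dispatch described above reduces to a small amount of control flow. A sketch under stated assumptions: the 'hard' classifier (low chromatic variation), the fast estimator (Grey World), and the slow stand-in below are all illustrative placeholders, not the paper's actual components.

```python
import numpy as np

def fast_grey_world(img):
    """Cheap statistical estimator (Grey World)."""
    est = img.reshape(-1, 3).mean(axis=0)
    return est / est.mean()

def slow_learned(img):
    """Stand-in for an expensive learning-based estimator; here it just
    returns a neutral illuminant (placeholder, not a real model)."""
    return np.array([1.0, 1.0, 1.0])

def is_hard(img, thresh=0.05):
    """Toy 'hard image' classifier: scenes with almost no chromatic
    variation give statistical estimators little to work with."""
    px = img.reshape(-1, 3)
    chroma = px / px.sum(axis=1, keepdims=True)
    return chroma.std(axis=0).mean() < thresh

def hybrid_estimate(img):
    """Dispatch: pay for the slow method only when the image looks hard."""
    return slow_learned(img) if is_hard(img) else fast_grey_world(img)

rng = np.random.default_rng(2)
varied = rng.uniform(0.1, 1.0, size=(32, 32, 3))   # colourful scene -> easy path
flat = np.full((32, 32, 3), 0.4)                   # single flat colour -> hard path
```

    The design point is that the classifier must be much cheaper than the slow estimator, otherwise the hybrid loses its time-complexity advantage on easy images.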