    A Hybrid Strategy for Illuminant Estimation Targeting Hard Images

    Illumination estimation is a well-studied topic in computer vision. Early work reported performance on benchmark datasets using simple statistical aggregates such as mean or median error. Recently, it has become accepted to report a wider range of statistics, e.g. top 25%, mean, and bottom 25% performance. While these additional statistics are more informative, their relationship across different methods is unclear. In this paper, we analyse the results of a number of methods to see if there exist ‘hard’ images that are challenging for multiple methods. Our findings indicate that certain images are difficult for fast statistical-based methods but can be handled by more complex learning-based approaches, at a significant cost in time complexity. This has led us to design a hybrid method that first classifies an image as ‘hard’ or ‘easy’ and then uses the slower method only when needed, thus providing a balance between time complexity and performance. In addition, we have identified dataset images that almost no method is able to process. We argue, however, that these images have problems with how their ground truth is established, and we recommend their removal from future performance evaluations.
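The dispatch logic this abstract describes can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the "hard image" test, the threshold, and the stand-in estimators are all assumptions made up for the example.

```python
# Hypothetical sketch of the hybrid strategy: a cheap test flags an image
# as 'hard'; only then is the expensive learning-based estimator invoked.
# Pixels are (r, g, b) tuples; all names and heuristics are illustrative.

def gray_world(pixels):
    """Fast statistical estimate: per-channel mean of the image."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def slow_learned_estimate(pixels):
    """Stand-in for an expensive learning-based method (placeholder body)."""
    return gray_world(pixels)

def looks_hard(pixels, threshold=0.25):
    """Toy 'hard image' test: a weak colour spread suggests the cheap
    statistical assumption may fail on this image."""
    est = gray_world(pixels)
    spread = max(est) - min(est)
    return spread < threshold * max(est)

def hybrid_estimate(pixels):
    """Classify first, then pick the estimator accordingly."""
    if looks_hard(pixels):
        return slow_learned_estimate(pixels)
    return gray_world(pixels)
```

The point of the pattern is that the classifier runs at statistical-method cost, so the slow path is paid for only on the minority of images that need it.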

    Color Constancy Using CNNs

    In this work we describe a Convolutional Neural Network (CNN) that accurately predicts the scene illumination. Taking image patches as input, the CNN works in the spatial domain without using the hand-crafted features employed by most previous methods. The network consists of one convolutional layer with max pooling, one fully connected layer, and three output nodes. Within the network structure, feature learning and regression are integrated into one optimization process, which leads to a more effective model for estimating scene illumination. This approach achieves state-of-the-art performance on a standard dataset of RAW images. Preliminary experiments on images with spatially varying illumination demonstrate the stability of our CNN's local illuminant estimation.
    Comment: Accepted at DeepVision: Deep Learning in Computer Vision 2015 (CVPR 2015 workshop).
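A patch-based pipeline like the one described produces one illuminant estimate per patch, which must then be pooled into a global estimate. The sketch below shows only that pooling step; the per-patch estimator here is a stand-in gray-world statistic, not the paper's CNN, and the median pooling rule is an assumption chosen for the example.

```python
# Illustrative patch-wise pipeline: estimate an illuminant per patch,
# then pool the local estimates (here: per-channel median) into a
# single global estimate. The patch estimator is NOT the paper's CNN.
from statistics import median

def patch_estimate(patch):
    """Stand-in for a learned per-patch illuminant prediction:
    per-channel mean over the patch's (r, g, b) pixels."""
    n = len(patch)
    return tuple(sum(px[c] for px in patch) / n for c in range(3))

def pooled_illuminant(patches):
    """Pool local estimates into a global one via per-channel median."""
    local = [patch_estimate(p) for p in patches]
    return tuple(median(e[c] for e in local) for c in range(3))
```

Median pooling makes the global estimate robust to a few patches whose local content (e.g. a large uniformly coloured object) misleads the per-patch estimator.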

    Colour constancy using von Kries transformations: colour constancy "goes to the Lab"

    Colour constancy algorithms aim at correcting colour towards a correct perception within scenes. To achieve this goal they estimate a white point (the illuminant's colour) and correct the scene for its influence. In contrast, colour management performs colour transformations on input images according to a pre-established input profile (ICC profile) for the given constellation of input device (camera) and conditions (illumination situation). The latter is a much more analytic approach (it is not based on an estimation) and rests on solid colour science and current industry best practice, but it is rather inflexible towards cases with altered conditions or capturing devices. The idea outlined in this paper is to take up the principle of working in visually linearised and device-independent CIE colour spaces, as used in colour management, and to apply it in the field of colour constancy. For this purpose two of the best-known colour constancy algorithms, White Patch Retinex and Grey World Assumption, have been ported to also work on colours in the CIE LAB colour space. Barnard's popular benchmarking set of imagery was corrected with the original implementations as a reference and with the modified algorithms. The results appeared promising, but they also revealed strengths and weaknesses.
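The two classical algorithms named above, and the von Kries correction applied after them, reduce to a few lines. This minimal sketch works directly in RGB for brevity; the paper's actual contribution is running the same logic in CIE LAB.

```python
# Minimal RGB sketch of the two classical estimators and the diagonal
# (von Kries) correction. Pixels are (r, g, b) tuples of floats.

def grey_world(pixels):
    """Grey World Assumption: the scene average is achromatic,
    so the per-channel mean estimates the illuminant."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def white_patch(pixels):
    """White Patch Retinex: the brightest response per channel
    estimates the illuminant."""
    return tuple(max(p[c] for p in pixels) for c in range(3))

def von_kries_correct(pixels, illuminant):
    """Diagonal transform: scale each channel so the estimated
    illuminant maps to white (1, 1, 1)."""
    return [tuple(p[c] / illuminant[c] for c in range(3)) for p in pixels]
```

After correction with either estimate, the pixel(s) that drove the estimate come out neutral, which is exactly the behaviour both assumptions are built on.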

    Convolutional Color Constancy

    Color constancy is the problem of inferring the color of the light that illuminated a scene, usually so that the illumination color can be removed. Because this problem is underconstrained, it is often solved by modeling the statistical regularities of the colors of natural objects and illumination. In contrast, in this paper we reformulate the problem of color constancy as a 2D spatial localization task in a log-chrominance space, thereby allowing us to apply techniques from object detection and structured prediction to the color constancy problem. By directly learning how to discriminate between correctly white-balanced images and poorly white-balanced images, our model is able to improve performance on standard benchmarks by nearly 40%.
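The key property behind the reformulation is that a log-chrominance representation turns illuminant changes into translations. A small sketch (the exact axis and sign conventions below are an assumption, not necessarily the paper's):

```python
# Log-chrominance sketch: map each RGB pixel to (log(g/r), log(g/b)).
# A global illuminant multiplies each channel, so in this space it
# becomes a constant 2D shift -- which is what lets white balancing be
# posed as a spatial localization problem.
from math import log

def log_chroma(r, g, b):
    """2D log-chrominance coordinates of an RGB value."""
    return (log(g / r), log(g / b))

def apply_illuminant(rgb, illum):
    """Tint a pixel by per-channel multiplication with an illuminant."""
    return tuple(c * l for c, l in zip(rgb, illum))
```

For example, scaling the red channel by 2 shifts the u = log(g/r) coordinate by exactly −log 2 and leaves v untouched, for every pixel in the image.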

    Design of Novel Algorithm and Architecture for Gaussian Based Color Image Enhancement System for Real Time Applications

    This paper presents the development of a new algorithm for a Gaussian-based color image enhancement system. The algorithm has been designed into an architecture suitable for FPGA/ASIC implementation. The color image enhancement is achieved by first convolving the original image with a Gaussian kernel, since the Gaussian distribution is a point spread function that smooths the image. Further, logarithm-domain processing and gain/offset corrections are employed in order to enhance the image and translate pixels into the display range of 0 to 255. The proposed algorithm not only provides better dynamic range compression and color rendition but also achieves color constancy in an image. The design exploits high degrees of pipelining and parallel processing to achieve real-time performance. The design has been realized in RTL-compliant Verilog and fits into a single FPGA with a gate count utilization of 321,804. The proposed method is implemented using a Xilinx Virtex-II Pro XC2VP40-7FF1148 FPGA device and is capable of processing high-resolution color motion pictures at sizes of up to 1600x1200 pixels and a real-time video rate of 116 frames per second. This shows that the proposed design works not only for still images but also for high-resolution video sequences.
    Comment: 15 pages, 15 figures.
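The "logarithm-domain processing plus gain/offset correction into 0–255" stage can be sketched as below. This is a hedged reference model, not the paper's RTL: the Gaussian convolution is omitted, and the gain/offset is implemented as a simple min–max normalisation, one common choice (the paper's exact constants are not given in the abstract).

```python
# Software sketch of the log-domain enhancement stage: compress dynamic
# range with log(1 + v), then apply a gain/offset that maps the result
# into the 0..255 display range. Assumed min-max normalisation.
from math import log

def log_enhance(channel):
    """channel: flat list of non-negative pixel intensities.
    Returns integers in 0..255."""
    logged = [log(1.0 + v) for v in channel]          # log-domain processing
    lo, hi = min(logged), max(logged)
    scale = 255.0 / (hi - lo) if hi > lo else 0.0     # gain
    return [round((v - lo) * scale) for v in logged]  # offset + quantize
```

The log compresses large intensities far more than small ones, which is what gives the dynamic-range-compression effect; in the hardware version each of these steps maps naturally onto a pipeline stage.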