
    Illuminant retrieval for fixed location cameras

    Fixed location cameras, such as panoramic cameras or surveillance cameras, are very common. In images taken with these cameras, there will be changes in lighting and dynamic image content, but there will also be constant objects in the background. We propose to solve for color constancy in this framework. We use a set of images to recover the scene’s illuminants using only a few surfaces present in the scene. Our method retrieves the illuminant in every image by minimizing the difference between the reflectance spectra of the redundant elements’ surfaces or, more precisely, between their corresponding sensor response values. It is assumed that these spectra are constant across images taken under different illuminants. We also recover an estimate of the reflectance spectra of the selected elements. Experiments on synthetic and real images validate our method.
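    The core idea — exploit surfaces whose reflectance is constant across images — can be sketched in a few lines. The sketch below assumes a diagonal (von Kries) illuminant model acting on RGB sensor responses, which is a simplification; the paper itself works with full illuminant and reflectance spectra. It alternates between estimating the shared surface responses and the per-image illuminant gains.

```python
import numpy as np

def estimate_illuminants(patches, iters=50):
    # patches: (n_images, n_patches, 3) sensor responses of the same
    # static background surfaces seen under different illuminants.
    # Diagonal illuminant model is an assumption of this sketch.
    n_img = patches.shape[0]
    gains = np.ones((n_img, 3))
    for _ in range(iters):
        # Current estimate of the shared (illuminant-free) surface responses:
        refl = (patches / gains[:, None, :]).mean(axis=0)
        # Per-image, per-channel least-squares gains mapping refl -> observed:
        for i in range(n_img):
            for c in range(3):
                gains[i, c] = patches[i, :, c] @ refl[:, c] / (refl[:, c] @ refl[:, c])
        gains /= gains.mean(axis=0, keepdims=True)  # fix global scale ambiguity
    return gains, refl
```

    On noiseless synthetic data this alternation recovers the illuminant gains up to the inherent global scale, mirroring the constancy assumption in the abstract.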

    Daylight illuminant retrieval using redundant image elements

    We present a method for retrieving illuminant spectra from a set of images taken with a fixed location camera, such as a surveillance or panoramic camera. In these images, there will be significant changes in lighting conditions and scene content, but there will also be static elements in the background. As color constancy is an under-determined problem, we propose to exploit the redundancy and constancy offered by the static image elements to reduce the dimensionality of the problem. Specifically, we assume that the reflectance properties of these objects remain constant across the images taken with a given fixed camera. We demonstrate that we can retrieve illuminant and reflectance spectra in this framework by modeling the redundant image elements as a set of synthetic RGB patches. We define an error function that takes the RGB patches and a set of test illuminants as input and returns a similarity measure of the redundant surfaces’ reflectances. The test illuminants are then varied until the error function is minimized, returning the illuminants under which each image in the set was captured. This is achieved by gradient descent, providing an optimization method that is robust to shot noise.
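    The error-function-plus-gradient-descent loop the abstract describes can be illustrated as follows. This is a minimal sketch under assumptions the abstract does not pin down: illuminants are parameterized as per-image diagonal log-gains (rather than spectra), and the similarity measure is taken to be the variance of the corrected patch responses across images.

```python
import numpy as np

def reflectance_mismatch(log_gains, patches):
    # Error function: variance, across images, of the illuminant-corrected
    # responses of the static surfaces; it vanishes when the test
    # illuminants make every image's corrected patches agree.
    gains = np.exp(log_gains)
    corrected = patches / gains[:, None, :]
    return corrected.var(axis=0).sum()

def retrieve_illuminants(patches, lr=0.3, steps=1000, eps=1e-5):
    # Finite-difference gradient descent over per-image log-gains
    # (a sketch; the paper's parameterization and optimizer details
    # are not given in the abstract).
    x = np.zeros((patches.shape[0], 3))
    for _ in range(steps):
        f0 = reflectance_mismatch(x, patches)
        grad = np.zeros_like(x)
        for idx in np.ndindex(*x.shape):
            xp = x.copy()
            xp[idx] += eps
            grad[idx] = (reflectance_mismatch(xp, patches) - f0) / eps
        x -= lr * grad
        x -= x.mean(axis=0, keepdims=True)  # remove the global-scale ambiguity
    return np.exp(x)
```

    The re-centering step after each update is needed because uniformly brightening all test illuminants trivially shrinks the variance; fixing the mean log-gain removes that degenerate direction.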

    Color matching functions for a perceptually uniform RGB space

    We present methods to estimate the perceptual uniformity of color spaces and to derive a perceptually uniform RGB space using geometrical criteria defined in a logarithmic opponent color representation.
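    A logarithmic opponent representation of the kind mentioned can be sketched as below. The specific axes (achromatic, red–green, yellow–blue) and weights are an illustrative choice, not the paper's actual construction; one appealing property of any such space is that a uniform intensity change only shifts the achromatic axis.

```python
import numpy as np

def log_opponent(rgb, eps=1e-6):
    # Illustrative log-opponent transform: achromatic axis plus
    # red-green and yellow-blue differences of log channel values.
    log = np.log(np.maximum(rgb, eps))
    r, g, b = log[..., 0], log[..., 1], log[..., 2]
    achromatic = (r + g + b) / 3.0
    red_green = r - g
    yellow_blue = (r + g) / 2.0 - b
    return np.stack([achromatic, red_green, yellow_blue], axis=-1)
```

    Scaling an RGB triplet by a constant k adds log(k) to the achromatic coordinate and leaves both opponent coordinates untouched, which is why logarithmic opponent spaces pair naturally with intensity-invariant uniformity criteria.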

    Influence of Spectral Sensitivity Functions on color demosaicing

    Color images acquired through single chip digital cameras using a color filter array (CFA) contain a mixture of luminance and opponent chromatic information that share their representation in the spatial Fourier spectrum. This mixture can result in aliasing if the bandwidths of these signals are too wide and their spectra overlap. In such a case, reconstructing three-color-per-pixel images without error is impossible. One way to improve the reconstruction is to have sensitivity functions that are highly correlated, reducing the bandwidth of the opponent chromatic components. However, this diminishes the ability to reproduce colors accurately, as noise is amplified when converting an image to the final color encoding. In this paper, we look for an optimum between accurate image reconstruction through demosaicing and accurate color rendering. We design a camera simulation, first using a hyperspectral model of random color images and a demosaicing algorithm based on frequency selection. We find that there is an optimum and confirm our results using a natural hyperspectral image.
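    The frequency-domain mixture of luminance and chrominance that the abstract refers to is easy to see numerically. The sketch below (assuming an RGGB Bayer layout, which the abstract does not specify) samples a flat colored patch through a CFA: the mosaic's spectrum then consists of a luminance term at DC plus chrominance carriers at the highest horizontal and vertical frequencies, which is exactly what frequency-selection demosaicing separates.

```python
import numpy as np

def bayer_mosaic(img):
    # Sample an RGB image through a Bayer CFA (RGGB layout assumed).
    h, w, _ = img.shape
    m = np.zeros((h, w))
    m[0::2, 0::2] = img[0::2, 0::2, 0]  # R at even rows, even columns
    m[0::2, 1::2] = img[0::2, 1::2, 1]  # G
    m[1::2, 0::2] = img[1::2, 0::2, 1]  # G
    m[1::2, 1::2] = img[1::2, 1::2, 2]  # B at odd rows, odd columns
    return m

# A flat colored patch: its mosaic spectrum has a luminance peak at DC
# and chrominance carriers at the half-sampling frequencies.
img = np.ones((64, 64, 3)) * np.array([0.8, 0.5, 0.2])
spec = np.abs(np.fft.fft2(bayer_mosaic(img)))
```

    For a textured image, the luminance and chrominance lobes widen around these same carriers, and demosaicing by frequency selection amounts to isolating each lobe; aliasing appears when the lobes overlap.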

    Color correction of uncalibrated images for the classification of human skin color

    Images of a scene captured with multiple cameras will have different color values due to variations in capture and color rendering across devices. We present a method to accurately retrieve color information from uncalibrated images taken under uncontrolled lighting conditions with an unknown device and no access to raw data, but with a limited number of reference colors in the scene. The method is used to assess skin tones. A subject is imaged with a calibration target in the scene. This target is extracted and its color values are used to compute a color correction transform that is applied to the entire image. We establish that the best mapping is done using a target consisting of skin colored patches representing a range of human skin colors. We show that color information extracted from images is well correlated with color data derived from spectral measurements of skin. We also show that skin color can be consistently measured across cameras with different color rendering and resolutions ranging from 0.1 Mpixels to 4.0 Mpixels.
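    The compute-a-transform-from-target-patches step can be sketched as a least-squares fit. The linear (3×3 matrix plus offset) model below is an illustrative assumption; the paper's actual mapping and target design may differ.

```python
import numpy as np

def fit_color_correction(measured, reference):
    # Fit an affine transform mapping the camera's measured target-patch
    # colors (n, 3) to their reference values (n, 3) by least squares.
    A = np.hstack([measured, np.ones((measured.shape[0], 1))])  # (n, 4)
    M, *_ = np.linalg.lstsq(A, reference, rcond=None)           # (4, 3)
    return M

def apply_correction(img, M):
    # Apply the fitted transform to every pixel of an (h, w, 3) image.
    flat = img.reshape(-1, 3)
    flat = np.hstack([flat, np.ones((flat.shape[0], 1))]) @ M
    return flat.reshape(img.shape)
```

    With at least four well-conditioned patches the affine fit is determined; a target of skin-colored patches, as the abstract argues, conditions the fit best in the region of color space that matters for skin assessment.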

    Cell Phones as Imaging Sensors

    Camera phones are ubiquitous, and consumers have been adopting them faster than any other technology in modern history. When connected to a network, though, they are capable of more than just picture taking: suddenly, they gain access to the power of the cloud. We exploit this capability by providing a series of image-based personal advisory services. These are designed to work with any handset over any cellular carrier using commonly available Multimedia Messaging Service (MMS) and Short Message Service (SMS) features. Targeted at the unsophisticated consumer, these applications must be quick and easy to use, not requiring download capabilities or preplanning. Thus, all application processing occurs in the back-end system (i.e., as a cloud service) and not on the handset itself. Presenting an image to an advisory service in the cloud, a user receives information that can be acted upon immediately. Two of our examples involve color assessment, namely selecting cosmetics and home décor paint palettes; the third provides the ability to extract text from a scene. In the case of the color imaging applications, we have shown that our service rivals the advice quality of experts. The result of this capability is a new paradigm for mobile interactions: image-based information services exploiting the ubiquity of camera phones.