    Performance of a Chromatic Adaptation Transform Based on Spectral Sharpening

    The Bradford chromatic adaptation transform, empirically derived by Lam, models illumination change. Specifically, it provides a means of mapping XYZs under a reference light source to XYZs for a target light source such that the corresponding XYZs produce the same perceived color. One implication of the Bradford chromatic adaptation transform is that color correction for illumination takes place not in cone space but rather in a ‘narrowed’ cone space. The Bradford sensors have their sensitivity more narrowly concentrated than the cones. However, Bradford sensors are not optimally narrow. Indeed, recent work has shown that it is possible to sharpen sensors to a much greater extent than Bradford. The focus of this paper is comparing the perceptual error between actual appearance and predicted appearance of a color under different illuminants, since it is perceptual error that the Bradford transform minimizes. Lam’s original experiments are revisited, and the perceptual performance of the Bradford transform and the linearized Bradford transform is compared with that of a new adaptation transform based on sharp sensors. Perceptual errors in CIELAB ΔE, ΔE CIE94, and ΔE CMC(1:1) are calculated for several corresponding color data sets and analyzed for statistical significance. The results are found to be similar for the two transforms, with Bradford performing slightly better depending on the data set and color difference metric used. The sharp transform performs as well as the linearized Bradford transform: there is no statistically significant difference in performance for most data sets.
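
    As a concrete illustration of the class of transforms discussed here, the following is a minimal NumPy sketch of a von Kries-style adaptation carried out in the linearized Bradford sensor space. The matrix is the standard linearized Bradford matrix; the white points and the sample value in the example are illustrative assumptions, not data from the paper.

        import numpy as np

        # Linearized Bradford matrix: maps XYZ into the 'narrowed' RGB sensor space.
        M_BFD = np.array([[ 0.8951,  0.2664, -0.1614],
                          [-0.7502,  1.7135,  0.0367],
                          [ 0.0389, -0.0685,  1.0296]])

        def bradford_adapt(xyz, white_src, white_dst):
            """von Kries-style adaptation: scale each sharpened channel by the
            ratio of destination to source white point in that channel."""
            gains = (M_BFD @ white_dst) / (M_BFD @ white_src)
            return np.linalg.inv(M_BFD) @ (gains * (M_BFD @ xyz))

        # Example (assumed inputs): adapt a sample from illuminant A to D65,
        # using the usual 2-degree white points normalized to Y = 1.
        white_A   = np.array([1.09850, 1.00000, 0.35585])
        white_D65 = np.array([0.95047, 1.00000, 1.08883])
        print(bradford_adapt(np.array([0.45, 0.40, 0.20]), white_A, white_D65))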

    The CIECAM02 color appearance model

    The CIE Technical Committee 8-01, Color Appearance Models for Color Management Applications, has recently proposed a single set of revisions to the CIECAM97s color appearance model. This new model, called CIECAM02, is based on CIECAM97s but includes many revisions [1–4] and some simplifications. A partial list of revisions includes a linear chromatic adaptation transform, a new non-linear response compression function, and modifications to the calculations for the perceptual attribute correlates. The format of this paper is an annotated description of the forward equations for the model.
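
    For orientation, the sketch below implements only the linear chromatic adaptation step of CIECAM02 (the CAT02 matrix together with the degree-of-adaptation factor D); the non-linear response compression and the perceptual attribute correlates are omitted. The matrix and the D formula are the standard published ones, while the example inputs are assumptions for illustration.

        import numpy as np

        # CAT02 matrix used by CIECAM02 for its linear chromatic adaptation step.
        M_CAT02 = np.array([[ 0.7328, 0.4296, -0.1624],
                            [-0.7036, 1.6975,  0.0061],
                            [ 0.0030, 0.0136,  0.9834]])

        def cat02_adapt(xyz, xyz_w, L_A, F=1.0):
            """CAT02 adaptation toward the equal-energy reference white.
            D is the degree of adaptation; F = 1.0 corresponds to an
            'average' surround, L_A is the adapting luminance in cd/m^2."""
            D = F * (1.0 - (1.0 / 3.6) * np.exp((-L_A - 42.0) / 92.0))
            D = np.clip(D, 0.0, 1.0)
            rgb, rgb_w = M_CAT02 @ xyz, M_CAT02 @ xyz_w
            Y_w = xyz_w[1]
            return (Y_w * D / rgb_w + 1.0 - D) * rgb

        # Example with assumed values: a sample under D65, L_A = 64 cd/m^2.
        xyz_w = np.array([95.047, 100.0, 108.883])
        print(cat02_adapt(np.array([19.01, 20.00, 21.78]), xyz_w, L_A=64.0))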

    Spectral Visualization Sharpening

    In this paper, we propose a perceptually-guided visualization sharpening technique. We analyze the spectral behavior of an established comprehensive perceptual model to arrive at our approximated model based on an adapted weighting of the bandpass images from a Gaussian pyramid. The main benefit of this approximated model is its controllability and predictability for sharpening color-mapped visualizations. Our method can be integrated into any visualization tool as it adopts generic image-based post-processing, and it is intuitive and easy to use, as viewing distance is the only parameter. Using highly diverse datasets, we show the usefulness of our method across a wide range of typical visualizations.
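
    As a rough sketch of the mechanism described (not the paper's perceptually derived weighting), the code below builds a difference-of-Gaussians stand-in for the bandpass images of a Gaussian pyramid and recombines the bands with per-band weights; the weights and blur schedule are placeholders.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def bandpass_sharpen(image, weights, sigma0=1.0):
            """Split an image into bandpass layers (differences of successively
            blurred copies) and recombine them with per-band weights. The paper
            derives its weights from a perceptual model parameterized by
            viewing distance; here they are free parameters."""
            levels = [image.astype(float)]
            for i in range(len(weights)):
                levels.append(gaussian_filter(levels[-1], sigma0 * 2 ** i))
            bands = [levels[i] - levels[i + 1] for i in range(len(weights))]
            return levels[-1] + sum(w * b for w, b in zip(weights, bands))

        # Example: mildly boost the mid-frequency bands of a synthetic image
        # (for a color-mapped visualization this would run per channel).
        img = np.random.default_rng(0).random((128, 128))
        out = bandpass_sharpen(img, weights=[1.0, 1.6, 1.4, 1.0])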

    Computing Chromatic Adaptation

    Most of today’s chromatic adaptation transforms (CATs) are based on a modified form of the von Kries chromatic adaptation model, which states that chromatic adaptation is an independent gain regulation of the three photoreceptors in the human visual system. However, modern CATs apply the scaling not in cone space, but use “sharper” sensors, i.e. sensors that have a narrower shape than cones. The recommended transforms currently in use are derived by minimizing perceptual error over experimentally obtained corresponding color data sets. We show that these sensors are still not optimally sharp. Using different computational approaches, we obtain sensors that are even more narrowband. In a first experiment, we derive a CAT by using spectral sharpening on Lam’s corresponding color data set. The resulting Sharp CAT, which minimizes XYZ errors, performs as well as the current most popular CATs when tested on several corresponding color data sets and evaluating perceptual error. By designing a spherical sampling technique, we show that these CAT sensors are not unique, and that there exist a large number of sensors that perform just as well as CAT02, the chromatic adaptation transform used in CIECAM02 and the ICC color management framework. We speculate that in order to make a final decision on a single CAT, we should consider secondary factors, such as their applicability in a color imaging workflow. We show that sharp sensors are very appropriate for color encodings, as they provide excellent gamut coverage and hue constancy. Finally, we derive sensors for a CAT that provide stable color ratios over different illuminants, i.e. that only model physical responses, and that can still predict experimentally obtained appearance data. The resulting sensors are sharp.
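
    A compact way to see what sharpening means computationally is the classic data-based sharpening idea sketched below, which is related to, but not identical with, the spectral sharpening and spherical sampling used in this work: fit the best 3x3 linear map between corresponding tristimulus values under two illuminants, then take its eigenvector basis, in which the map becomes a purely diagonal (von Kries) scaling. The data in the example are synthetic stand-ins for a corresponding color set such as Lam's.

        import numpy as np

        def data_based_sharpening(src_xyz, dst_xyz):
            """Illustrative data-based sharpening: columns of src_xyz/dst_xyz are
            corresponding XYZ triples under two illuminants. The returned T maps
            XYZ into a basis where the illuminant change is a diagonal scaling."""
            M = dst_xyz @ np.linalg.pinv(src_xyz)      # best 3x3 map, dst ~ M @ src
            eigvals, eigvecs = np.linalg.eig(M)
            T = np.real(np.linalg.inv(eigvecs))        # XYZ -> sharpened space
            gains = np.real(eigvals)                   # per-channel von Kries gains
            return T, gains

        # Example with synthetic data: a purely diagonal illuminant change.
        rng = np.random.default_rng(1)
        src = rng.random((3, 58))
        dst = np.diag([1.1, 0.9, 0.7]) @ src
        T, gains = data_based_sharpening(src, dst)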

    Spectral Sharpening and the Bradford Transform

    The Bradford chromatic adaptation transform, empirically derived by Lam, models illumination change. Specifically, it provides a means of mapping XYZs under a reference light source to XYZs for a target light source such that the corresponding XYZs produce the same perceived colour. One implication of the Bradford chromatic adaptation transform is that colour correction for illumination takes place not in cone space but rather in a ‘narrowed’ cone space. The Bradford sensors have their sensitivity more narrowly concentrated than the cones. However, Bradford sensors are not optimally narrow. Indeed, recent work has shown that it is possible to sharpen sensors to a much greater extent than Bradford. The focus of this paper is comparing the perceptual error between actual appearance and predicted appearance of a colour under different illuminants, since it is perceptual error that the Bradford transform minimizes. Lam’s original experiments are revisited and the perceptual performance of the Bradford transform is compared with that of a new adaptation transform that is based on sharp sensors. Results were found to be similar for the two transforms. In terms of CIELAB error, Bradford performs slightly better. But in terms of the more accurate CIE94 and CMC colour difference formulae, the sharp transform performs equally well: there is no statistically significant difference in performance.
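
    Because the conclusion hinges on the colour difference metric, here is a minimal sketch of the two simpler metrics mentioned, plain CIELAB ΔE*ab and CIE94 with graphic-arts weights; CMC(l:c) follows the same pattern with different weighting functions. Lab coordinates are assumed to be computed already, and the example numbers are arbitrary.

        import numpy as np

        def delta_e_ab(lab1, lab2):
            """Plain CIELAB Euclidean difference (delta E*ab)."""
            return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

        def delta_e_94(lab1, lab2, K1=0.045, K2=0.015):
            """CIE94 difference (graphic-arts weights): downweights chroma and
            hue differences for high-chroma colours relative to delta E*ab."""
            L1, a1, b1 = lab1
            L2, a2, b2 = lab2
            C1, C2 = np.hypot(a1, b1), np.hypot(a2, b2)
            dL, dC = L1 - L2, C1 - C2
            dH2 = max((a1 - a2) ** 2 + (b1 - b2) ** 2 - dC ** 2, 0.0)
            SC, SH = 1.0 + K1 * C1, 1.0 + K2 * C1
            return float(np.sqrt(dL ** 2 + (dC / SC) ** 2 + dH2 / SH ** 2))

        # Example: predicted vs. observed corresponding colour in Lab.
        print(delta_e_ab((52.0, 41.0, -10.0), (50.0, 44.0, -12.0)))
        print(delta_e_94((52.0, 41.0, -10.0), (50.0, 44.0, -12.0)))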

    Apport du contenu visuel Ă  l'adaptation chromatique

    Image capture systems such as scanners, cameras, and digital still cameras do not have the ability to adapt dynamically to changes in illumination the way the human visual system does. To reproduce the appearance of a color image faithfully, image formation and processing systems therefore need to apply a transformation that converts the colors captured under an input illuminant into corresponding colors under an output illuminant. This transformation is called a chromatic adaptation transform, known in the physical image formation stages as white balancing. Chromatic adaptation is a linear transformation that is simple to implement, an advantage that makes it well suited to low-power devices such as PDAs and the digital cameras embedded in mobile phones. In this thesis, we approach chromatic adaptation from a perspective that includes the visual content of the scene. From this perspective, we begin by examining the influence of chromatic adaptation on image content. We then propose a mathematical reformulation of the Sharp transform based on image content, including constraints tied to the sensor structure, such as the overlap between the spectral responses of the different bands and the preservation of the sensor gamut.
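
    To make the point about a simple linear transformation concrete, the sketch below shows white balancing as an independent per-channel (von Kries-style) gain in camera RGB, with the illuminant estimated under the gray-world assumption. It is an illustrative stand-in, not the thesis's content-based reformulation of the Sharp transform.

        import numpy as np

        def gray_world_white_balance(rgb_image):
            """Estimate the illuminant as the mean image color (gray-world
            assumption) and apply one gain per channel so that this estimate
            maps to a neutral gray; a diagonal, von Kries-style correction."""
            illum = rgb_image.reshape(-1, 3).mean(axis=0)
            gains = illum.mean() / illum
            return np.clip(rgb_image * gains, 0.0, 1.0)

        # Example on a synthetic image with a warm color cast.
        rng = np.random.default_rng(2)
        img = rng.random((64, 64, 3)) * np.array([1.0, 0.8, 0.6])
        balanced = gray_world_white_balance(img)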

    Color Ratios and Chromatic Adaptation

    In this paper, the performance of chromatic adaptation transforms based on stable color ratios is investigated. It was found that for three different sets of reflectance data, their performance was not statistically different from CMCCAT2000 when applying the chromatic adaptation transforms to Lam’s corresponding color data set and using a perceptual error metric of CIE ΔE94. The sensors with the best color ratio stability are much sharper and more de-correlated than the CMCCAT2000 sensors, corresponding better to sensor responses found in other psychovisual studies. The new sensors also closely match those used by the Sharp adaptation transform.
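
    The notion of stable color ratios can be made concrete as follows: for a given set of sensors, the per-channel ratio between the response to a surface and the response to the illuminant itself should change as little as possible when the illuminant changes. The sketch below measures that instability for assumed synthetic sensor, reflectance, and illuminant spectra; it is not the derivation used in the paper.

        import numpy as np

        def ratio_stability(sensors, reflectances, illum_a, illum_b):
            """Mean relative change of the per-channel ratios surface/illuminant
            between two illuminants. Rows of `sensors` are spectral sensitivities
            sampled on the same wavelengths as the reflectances and illuminants."""
            def ratios(illum):
                white = sensors @ illum                      # response to the light itself
                surf = sensors @ (reflectances * illum).T    # responses to each surface
                return surf / white[:, None]
            ra, rb = ratios(illum_a), ratios(illum_b)
            return float(np.mean(np.abs(ra - rb) / rb))

        # Example with smooth synthetic spectra (in practice: measured data).
        wl = np.linspace(400, 700, 31)
        sensors = np.stack([np.exp(-0.5 * ((wl - c) / 25.0) ** 2) for c in (450, 540, 610)])
        refl = np.random.default_rng(3).uniform(0.05, 0.9, size=(20, wl.size))
        illum_a = np.linspace(0.5, 1.5, wl.size)   # reddish light
        illum_b = np.linspace(1.5, 0.5, wl.size)   # bluish light
        print(ratio_stability(sensors, refl, illum_a, illum_b))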

    Color-appearance modeling for cross-media image reproduction

    Five color-appearance transforms were tested under a variety of conditions to determine which is best for producing CRT reproductions of original printed images. The transforms included: von Kries chromatic adaptation, CIELAB color space, the RLAB color appearance model, Hunt's color appearance model, and Nayatani's color appearance model. It was found that RLAB produced the best matches for changes in white point, luminance level, and background, but did not accurately predict the effect of surround. The performance of CIELAB color space was equal to that of RLAB in many cases, and better for changes in surround. Expert observers generated CRT images in one viewing condition that they perceived to match an original image viewed in another condition. This technique produced images that were equal to or better than those of the best color appearance model tested and is a useful way to generate color appearance data for developing new models and testing existing ones.

    Chromatic adaptation performance of different RGB sensors

    Chromatic adaptation transforms are used in imaging systems to map image appearance to colorimetry under different illumination sources. In this paper, the performance of different chromatic adaptation transforms (CATs) is compared with the performance of transforms based on RGB primaries that have been investigated in relation to standard color spaces for digital still camera characterization and image interchange. The chromatic adaptation transforms studied are von Kries, Bradford, Sharp, and CMCCAT2000. The RGB primaries investigated are ROMM, ITU-R BT.709, and 'prime wavelength' RGB. The chromatic adaptation model used is a von Kries model that linearly scales post-adaptation cone responses with illuminant-dependent coefficients. The transforms were evaluated using 16 sets of corresponding color data. The actual and predicted tristimulus values were converted to CIELAB, and three different error prediction metrics, ΔE Lab, ΔE CIE94, and ΔE CMC(1:1), were applied to the results. One-tailed Student t-tests for matched pairs were calculated to determine whether the variations in errors are statistically significant. For the given corresponding color data sets, the traditional chromatic adaptation transforms, Sharp CAT and CMCCAT2000, performed best. However, some transforms based on RGB primaries also exhibit good chromatic adaptation behavior, leading to the conclusion that white-point independent RGB spaces for image encoding can be defined. This conclusion holds only if the linear von Kries model is considered adequate to predict chromatic adaptation behavior.
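
    The evaluation pipeline described here can be outlined in a few lines: apply a von Kries scaling in the sensor space defined by a chosen 3x3 matrix (Bradford, Sharp, CMCCAT2000, or an RGB-primary matrix), compute per-sample ΔE errors against the observed corresponding colors, and compare two transforms with a one-tailed paired t-test. The sketch below uses placeholder error arrays and is an outline of the procedure, not the authors' code.

        import numpy as np
        from scipy.stats import ttest_rel   # SciPy >= 1.6 for the `alternative` argument

        def von_kries_cat(M, xyz, white_src, white_dst):
            """Generic von Kries scaling in the sensor space defined by M."""
            gains = (M @ white_dst) / (M @ white_src)
            return np.linalg.inv(M) @ (np.diag(gains) @ (M @ xyz))

        def compare_transforms(errors_a, errors_b):
            """One-tailed matched-pairs t-test: H1 is that transform A has a
            smaller mean delta E than transform B on the same samples."""
            return ttest_rel(errors_a, errors_b, alternative="less")

        # Example with placeholder per-sample error arrays.
        rng = np.random.default_rng(4)
        e_sharp = rng.gamma(2.0, 1.5, size=100)
        e_rgb = e_sharp + rng.normal(0.3, 0.5, size=100)
        print(compare_transforms(e_sharp, e_rgb))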

    Estimation of illuminants from color signals of illuminated objects

    Color constancy is the ability of the human visual system to discount the effect of the illumination and to assign approximately constant color descriptions to objects. This ability has long been studied and widely applied to many areas such as color reproduction and machine vision, especially with the development of digital color processing. This thesis work makes some improvements in illuminant estimation and computational color constancy based on the study and testing of existing algorithms. During recent years, it has been noticed that illuminant estimation based on gamut comparison is efficient and simple to implement. Although numerous investigations have been done in this field, there are still some deficiencies. A large part of this thesis is work in the area of illuminant estimation through gamut comparison. Noting the importance of color lightness in gamut comparison, and also in order to simplify three-dimensional gamut calculation, a new illuminant estimation method is proposed that compares gamuts at separate lightness levels. Maximum color separation is a color constancy method based on the assumption that the colors in a scene attain the largest gamut area under white illumination. The method was further derived and improved in this thesis to make it applicable and efficient. In addition, some intrinsic questions in gamut comparison methods, for example the relationship between the color space and the application of gamut or probability distributions, were investigated. Color constancy methods based on spectral recovery have the limitation that there is no effective way to confine the range of object spectral reflectances. In this thesis, a new constraint on spectral reflectance, based on the relative ratios of the parameters from a principal component analysis (PCA) decomposition, is proposed. The proposed constraint was applied to illuminant detection methods as a metric on the recovered spectral reflectance. Because of the importance of sensor sensitivities and their wide variation, the influence of sensor sensitivities on different kinds of illuminant estimation methods was also studied. The stability of the estimation methods under incorrect sensor information was tested, suggesting a possible approach to illuminant estimation for images with unknown sources. In addition, with the development of multi-channel imaging, some research on illuminant estimation for multi-channel images, covering both correlated color temperature (CCT) estimation and illuminant spectral recovery, was performed in this thesis. All the improvements and newly proposed methods in this thesis are tested and compared with the best-performing existing methods, both on synthetic data and on real images. The comparison verified the high efficiency and implementation simplicity of the proposed methods.
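
    As one small, self-contained piece of the toolbox described above, the sketch below fits a low-dimensional PCA basis to a set of reflectances and reports the ratios of a candidate spectrum's higher-order coefficients to its first coefficient, an illustrative stand-in for the thesis's constraint that such relative ratios stay within a plausible range for real surfaces. The training spectra in the example are synthetic.

        import numpy as np

        def pca_basis(reflectances, k=3):
            """Fit a k-dimensional PCA basis to measured reflectances (rows are
            spectra); returns the mean spectrum and the top-k components."""
            mean = reflectances.mean(axis=0)
            _, _, Vt = np.linalg.svd(reflectances - mean, full_matrices=False)
            return mean, Vt[:k]

        def coefficient_ratios(spectrum, mean, basis):
            """Project a candidate reflectance onto the basis and return the
            ratios of the higher-order coefficients to the first one."""
            c = basis @ (spectrum - mean)
            return c[1:] / c[0]

        # Example with smooth synthetic reflectances in place of measured data.
        rng = np.random.default_rng(5)
        train = np.clip(rng.normal(0.5, 0.1, (100, 31)).cumsum(axis=1) / 31 + 0.2, 0.0, 1.0)
        mean, basis = pca_basis(train, k=3)
        print(coefficient_ratios(train[0], mean, basis))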