Status of CIE Color Appearance Models
In meetings just prior to the 1997 AIC Congress in Kyoto, CIE TC1-37, chaired by M. Fairchild, established the CIE 1997 Interim Colour Appearance Model (Simple Version), known as CIECAM97s. CIECAM97s was formally published in 1998 in CIE publication 131. CIE TC1-37 was dissolved shortly after publication of CIECAM97s, at which time a reportership, R1-24, held by M. Fairchild, was established to monitor ongoing developments in color appearance modeling and to notify CIE Division 1 if it became necessary to form a new TC to consider revision or replacement of CIECAM97s. In the four years between AIC Congresses, there has been much activity, both by individual researchers and within the CIE, aimed at furthering our understanding of color appearance models and deriving improved models for consideration. The aim of this paper is to summarize these activities, report on the current status of CIE efforts on color appearance models, and suggest what the future might hold for CIE color appearance models.
Meet iCAM: A next-generation color appearance model
For over 20 years, color appearance models have evolved to the point of international standardization. These models are capable of predicting the appearance of spatially simple color stimuli under a wide variety of viewing conditions and have been applied to images by treating each pixel as an independent stimulus. It has more recently been recognized that revolutionary advances in color appearance modeling will require more rigorous treatment of spatial (and perhaps temporal) appearance phenomena. In addition, color appearance models are often more complex than warranted by the available visual data and by limitations in the accuracy and precision of practical viewing conditions. Lastly, issues of color difference measurement are typically treated separately from color appearance. Thus, the stage has been set for a new generation of color appearance models. This paper presents one such model, called iCAM, for image color appearance model. The objectives in formulating iCAM were to simultaneously provide traditional color appearance capabilities, spatial vision attributes, and color difference metrics in a model simple enough for practical applications. The framework and initial implementation of the model are presented, along with examples that illustrate its performance for chromatic adaptation, appearance scales, color difference, crispening, spreading, high-dynamic-range tone mapping, and image quality measurement. It is expected that the implementation of this model framework will be refined in the coming years as new data become available.
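To make the spatial aspect concrete, the sketch below illustrates the general idea behind spatially localized chromatic adaptation in iCAM-style image appearance models: a heavily low-pass-filtered copy of the image serves as the per-pixel adapting white, and each pixel is scaled von Kries-style by that local white. This is only a simplified illustration in RGB, not a faithful iCAM implementation (which operates on cone-like signals with a CAT02-type transform); the function name and the `sigma` value are assumptions made for this example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_von_kries_adapt(img, sigma=64.0, eps=1e-6):
    """Spatially local von Kries-style adaptation (illustrative only).

    img   : float RGB image, shape (H, W, 3), values in [0, 1]
    sigma : width of the Gaussian low-pass filter that defines the
            local adapting white (large sigma approaches global adaptation)
    """
    adapted = np.empty_like(img)
    for c in range(3):
        # A low-pass filtered channel acts as the local adapting signal.
        local_white = gaussian_filter(img[..., c], sigma=sigma)
        # Von Kries-style normalization by the local white.
        adapted[..., c] = img[..., c] / (local_white + eps)
    # Rescale so the result stays in a displayable range.
    return np.clip(adapted / adapted.max(), 0.0, 1.0)
```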
Comparing color appearance models using pictorial images
Eight different color appearance models were tested using pictorial images. A psychophysical paired-comparison experiment was performed in which 30 color-normal observers judged reference and test images via successive-Ganzfeld haploscopic viewing, such that each eye maintained constant chromatic adaptation and inter-ocular interactions were minimized. It was found that models based on a von Kries-type adaptation had the best performance, specifically CIELAB, Hunt, RLAB, and von Kries.
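For reference, a von Kries-type chromatic adaptation transform of the kind underlying the best-performing models can be sketched as follows: tristimulus values are mapped to cone-like responses, each channel is scaled by the ratio of destination to source white point, and the result is mapped back. The Bradford matrix is used here as one common choice of cone-response transform; the white points in the usage example are approximate published values given purely for illustration.

```python
import numpy as np

# Bradford cone-response matrix (one common choice for von Kries-type CATs).
M_BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                       [-0.7502,  1.7135,  0.0367],
                       [ 0.0389, -0.0685,  1.0296]])

def von_kries_adapt(xyz, white_src, white_dst):
    """Adapt an XYZ color from a source white to a destination white."""
    lms       = M_BRADFORD @ np.asarray(xyz, dtype=float)
    lms_src_w = M_BRADFORD @ np.asarray(white_src, dtype=float)
    lms_dst_w = M_BRADFORD @ np.asarray(white_dst, dtype=float)
    # Von Kries scaling: each cone-like channel is scaled by the white ratio.
    lms_adapted = lms * (lms_dst_w / lms_src_w)
    return np.linalg.inv(M_BRADFORD) @ lms_adapted

# Example: adapt a color from illuminant A to D65.
white_A   = np.array([1.0985, 1.0000, 0.3558])
white_D65 = np.array([0.9505, 1.0000, 1.0891])
print(von_kries_adapt([0.5, 0.4, 0.2], white_A, white_D65))
```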
Color-appearance modeling for cross-media image reproduction
Five color-appearance transforms were tested under a variety of conditions to determine which is best for producing CRT reproductions of original printed images. The transforms included: the von Kries chromatic adaptation transform, the CIELAB color space, the RLAB color appearance model, Hunt's color appearance model, and Nayatani's color appearance model. It was found that RLAB produced the best matches for changes in white point, luminance level, and background, but did not accurately predict the effect of surround. The performance of the CIELAB color space equaled that of RLAB in many cases, and it performed better for changes in surround. Expert observers generated CRT images in one viewing condition that they perceived to match an original image viewed in another condition; this technique produced images that were equal to or better than those of the best color appearance model tested, and it is a useful way to generate color appearance data for developing new models and testing existing ones.
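As one concrete point of reference, the CIELAB coordinates that several of the tested transforms build on are computed from tristimulus values normalized to the reference white. The standard cube-root formulae are sketched below; the sample and white-point values in the usage line are illustrative only.

```python
import numpy as np

def xyz_to_lab(xyz, white):
    """Convert CIE XYZ to CIELAB using the standard cube-root formulae."""
    x, y, z = np.asarray(xyz, dtype=float) / np.asarray(white, dtype=float)

    delta = 6.0 / 29.0
    def f(t):
        # Cube root above the linear-segment threshold, linear below it.
        return np.where(t > delta**3, np.cbrt(t), t / (3 * delta**2) + 4.0 / 29.0)

    fx, fy, fz = f(x), f(y), f(z)
    L = 116.0 * fy - 16.0
    a = 500.0 * (fx - fy)
    b = 200.0 * (fy - fz)
    return L, a, b

# Example: a mid-gray sample under D65 (white point given for illustration).
print(xyz_to_lab([0.20, 0.21, 0.23], [0.9505, 1.0000, 1.0891]))
```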
Impact of chromophores on colour appearance in a computational skin model
Early diagnosis of skin cancer offers the patient more favorable treatment options. Color fidelity of skin images is a major concern for dermatologists, as adoption of digital dermatoscopes is increasing rapidly. Accurate color depiction of the lesion and the surrounding skin is vital in the diagnostic evaluation of a lesion.
We previously introduced VCT-Derma, a pipeline for dermatological Virtual Clinical Trials (VCTs) including detailed and flexible models of human skin and lesions, which represent the patient in the entire dermatoscopy-based diagnostic process. However, those initial models of skin and lesions did not properly account for tissue colors.
Our new skin model accounts for tissue color appearance by incorporating chromophores (e.g., melanin, blood) into the tissue model and simulating the optical properties of the various skin layers. The physical properties of the skin and lesion were selected from clinically plausible values. The model and simulated dermatoscope images were created in open modelling software, assuming a linear camera model. We have assumed ambient white lighting, with a 6 mm distance to the camera.
Color appearance in our model was characterised by comparing the brightness of the lesion to its depth; lesion brightness was assessed through the variability of the mean gray values in a cropped region around the lesion. We compared two skin models, one with and one without extensive chromophore content. Our preliminary evaluation of increasing chromophore content shows promise based on the results presented here; further refinement and validation of the model are ongoing.
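The brightness comparison described above can be illustrated with a short sketch: crop a fixed window around the lesion centre in the simulated dermatoscope image and summarize it by the mean and variability of its gray values. The window size, array names, and random stand-in data are assumptions made for illustration, not the authors' actual analysis code.

```python
import numpy as np

def lesion_brightness_stats(gray_img, center, half_size=50):
    """Mean and standard deviation of gray values in a crop around the lesion.

    gray_img  : 2-D array of gray values from a simulated dermatoscope image
    center    : (row, col) of the lesion centre
    half_size : half the side length of the square crop, in pixels (assumed)
    """
    r, c = center
    crop = gray_img[max(r - half_size, 0):r + half_size,
                    max(c - half_size, 0):c + half_size]
    return crop.mean(), crop.std()

# Hypothetical comparison of the two skin models (random data as a stand-in).
rng = np.random.default_rng(0)
plain_skin  = rng.uniform(0.4, 0.6, size=(512, 512))
chromo_skin = rng.uniform(0.2, 0.5, size=(512, 512))
for name, img in [("no chromophores", plain_skin), ("with chromophores", chromo_skin)]:
    mean, std = lesion_brightness_stats(img, center=(256, 256))
    print(f"{name}: mean gray = {mean:.3f}, std = {std:.3f}")
```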
Enhanced appearance models for object tracking
©2007 IEEE. This paper is concerned with improving target appearance models to realize robust object tracking. We explore the use of feature spaces other than the commonly used color space for object tracking. Specifically, we employ gradient information, used separately as well as in conjunction with color information. Our target appearance model is then represented in the form of a histogram over its gradient and color feature spaces, and frame-to-frame tracking is performed using mean shift or local exhaustive search. By combining gradients with color, we build new appearance models with combined feature spaces. Based on our extensive testing of these models, we find that they can be used to track complex objects, such as full 360-degree rotating objects, appearance-changing objects, occluding objects, and zooming objects. An Zhao, M.J. Brooks, and A.R. Dick.
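As a rough sketch of the kind of combined appearance model described, the snippet below builds a joint histogram over quantized color and gradient-magnitude bins for a target region and compares two such histograms with the Bhattacharyya coefficient commonly used in mean-shift tracking. The bin counts and the use of gradient magnitude (rather than orientation) are simplifying assumptions, not the authors' exact formulation.

```python
import numpy as np

def color_gradient_histogram(patch, color_bins=8, grad_bins=8):
    """Joint histogram over quantized color channels and gradient magnitude.

    patch : float RGB image region, shape (H, W, 3), values in [0, 1]
    """
    gray = patch.mean(axis=2)
    gy, gx = np.gradient(gray)
    grad_mag = np.hypot(gx, gy)
    grad_mag = grad_mag / (grad_mag.max() + 1e-12)

    # Quantize each pixel's color channels and gradient magnitude into bins.
    features = np.stack([patch[..., 0].ravel(),
                         patch[..., 1].ravel(),
                         patch[..., 2].ravel(),
                         grad_mag.ravel()], axis=1)
    hist, _ = np.histogramdd(features,
                             bins=(color_bins, color_bins, color_bins, grad_bins),
                             range=[(0.0, 1.0)] * 4)
    return hist / hist.sum()

def bhattacharyya(p, q):
    """Similarity between two normalized histograms (1.0 = identical)."""
    return np.sum(np.sqrt(p * q))

# Hypothetical usage: compare the target model with a candidate region.
rng = np.random.default_rng(1)
target, candidate = rng.random((40, 40, 3)), rng.random((40, 40, 3))
print(bhattacharyya(color_gradient_histogram(target),
                    color_gradient_histogram(candidate)))
```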
Derivation and modelling hue uniformity and development of the IPT color space
Metric color spaces have been determined to be significantly non-uniform in the hue attribute of color appearance, and several independent sources have confirmed this non-uniformity. During the course of this thesis work, a data set was obtained that contains the largest sampling of color space to date and can be used to compare models of color appearance. The data set was compared to existing data sets and found to correspond closely; lookup-table methods were employed to test for significant differences between data sets. A simple modeling approach was taken, based on commonly understood color space models and knowledge of the visual system. Several color spaces can be derived using the simple model, and one was chosen that models hue uniformity very well and has other desirable attributes. This new color space is named IPT. Many visual data sets were plotted in the IPT color space, and all show improved performance over industry-standard color spaces. The IPT color space has applications in color data representation, gamut mapping, and color appearance modeling.
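For reference, the published IPT transform is compact: D65-adapted XYZ values are mapped to cone-like LMS responses, passed through a signed power-function nonlinearity with exponent 0.43, and mapped linearly onto the I (lightness), P (red-green), and T (yellow-blue) axes. The sketch below uses the matrices as commonly published; readers should verify the coefficients against the original thesis before relying on them.

```python
import numpy as np

# D65-adapted XYZ to Hunt-Pointer-Estevez-like LMS, as published for IPT.
M_XYZ_TO_LMS = np.array([[ 0.4002, 0.7075, -0.0807],
                         [-0.2280, 1.1500,  0.0612],
                         [ 0.0000, 0.0000,  0.9184]])

# Nonlinear LMS to IPT.
M_LMS_TO_IPT = np.array([[0.4000,  0.4000,  0.2000],
                         [4.4550, -4.8510,  0.3960],
                         [0.8056,  0.3572, -1.1628]])

def xyz_to_ipt(xyz_d65):
    """Convert D65-adapted XYZ (roughly 0-1 range) to IPT coordinates."""
    lms = M_XYZ_TO_LMS @ np.asarray(xyz_d65, dtype=float)
    # Signed 0.43 power nonlinearity, applied channel-wise.
    lms_p = np.sign(lms) * np.abs(lms) ** 0.43
    return M_LMS_TO_IPT @ lms_p

print(xyz_to_ipt([0.9505, 1.0000, 1.0891]))  # white maps near I = 1, P = T = 0
```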
Image appearance modeling
Traditional color appearance modeling has recently matured to the point that available, internationally recommended models such as CIECAM02 are capable of making a wide range of predictions, to within the observer variability in color matching and color scaling of stimuli, in somewhat simplified viewing conditions. It is proposed that the next significant advances in the field of color appearance modeling will not come from evolutionary revisions of these models. Instead, a more revolutionary approach will be required to make appearance predictions for more complex stimuli in a wider array of viewing conditions. Such an approach can be considered image appearance modeling, since it extends the concepts of color appearance modeling to stimuli and viewing environments that are spatially and temporally at the level of complexity of real natural and man-made scenes. This paper reviews the concepts of image appearance modeling, presents iCAM as one example of such a model, and provides a number of examples of the use of iCAM in still and moving image reproduction.
Visual assessment of object color chroma and colorfulness
A series of visual experiments was designed to determine whether naive observers typically evaluate chroma or colorfulness when judging color appearance. A total of 7 observers were asked to determine a color appearance match between Munsell samples under the same illuminant (C) at different levels of illuminance. Color appearance matches were determined for 12 Munsell samples, under five reference and matching scene illuminance conditions, for four experimental techniques. The four techniques were haploscopic matching, simultaneous inspection, successive inspection, and short-term memory matching. Results suggested that a chroma match was most important when observers were evaluating the color appearance of two scenes at different levels of illuminance. Results were also compared to the predictions of two color appearance models. While similar trends were apparent between the experimental results and the two models' predictions, only the Hunt model's chroma term satisfactorily predicted the experimental observations.
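The distinction the experiment probes can be stated compactly: chroma is colorfulness judged relative to the brightness of a similarly illuminated white, so it should stay roughly constant across illuminance levels, whereas absolute colorfulness grows with adapting luminance. The sketch below illustrates the direction of this effect using CIECAM02's relation M = C · F_L^(1/4), a later model than the two tested here, chosen only because its formula is simple; the chroma value and luminance levels are illustrative.

```python
import numpy as np

def luminance_adaptation_factor(L_A):
    """CIECAM02 luminance-level adaptation factor F_L for adapting luminance L_A (cd/m^2)."""
    k = 1.0 / (5.0 * L_A + 1.0)
    return 0.2 * k**4 * (5.0 * L_A) + 0.1 * (1.0 - k**4)**2 * (5.0 * L_A) ** (1.0 / 3.0)

# A sample with fixed chroma C viewed under increasing adapting luminance:
# chroma stays put while predicted colorfulness M = C * F_L**0.25 increases.
C = 40.0  # illustrative chroma value
for L_A in (20.0, 200.0, 2000.0):
    F_L = luminance_adaptation_factor(L_A)
    print(f"L_A = {L_A:6.0f} cd/m^2: chroma = {C:.1f}, colorfulness = {C * F_L**0.25:.1f}")
```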