369 research outputs found
Colorimetric characterization of the Apple studio display (Flat panel LCD)
The colorimetric characterization of a flat-panel LCD monitor, the Apple Studio Display, was evaluated using traditional CRT characterization techniques. The results showed that the display performed up to the manufacturer's specifications in terms of luminance and contrast. However, the traditional CRT gain-offset-gamma (GOG) model was inadequate for characterization, and a model using one-dimensional lookup tables followed by a 3x3 matrix was developed. The LUT model performed excellently, with average CIE94 color differences between measured and predicted colors of approximately 1.0.
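The LUT-plus-matrix characterization described above can be sketched as follows. The lookup tables and primary tristimulus matrix below are placeholder values; a real characterization would measure both with a colorimeter:

```python
import numpy as np

# Hypothetical per-channel 1D lookup tables mapping digital counts (0-255)
# to linear radiometric scalars; in practice these curves are measured.
lut_size = 256
r_lut = (np.arange(lut_size) / 255.0) ** 2.2  # placeholder tone curve
g_lut = (np.arange(lut_size) / 255.0) ** 2.2
b_lut = (np.arange(lut_size) / 255.0) ** 2.2

# Hypothetical 3x3 matrix whose columns are the XYZ tristimulus values
# of the red, green, and blue primaries at full drive.
M = np.array([[41.2, 35.8, 18.0],
              [21.3, 71.5,  7.2],
              [ 1.9, 11.9, 95.0]])

def rgb_to_xyz(r, g, b):
    """LUT-then-matrix display characterization: digital RGB -> CIE XYZ."""
    scalars = np.array([r_lut[r], g_lut[g], b_lut[b]])
    return M @ scalars

white = rgb_to_xyz(255, 255, 255)  # equals the sum of the primary columns
```

Because the model is additive, the white point prediction is simply the sum of the three primary tristimulus columns.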
Appearance-based image splitting for HDR display systems
High dynamic range displays that incorporate two optically-coupled image planes have recently been developed. This dual image plane design requires that a given HDR input image be split into two complementary standard dynamic range components that drive the coupled systems, which raises the problem of how best to split the image. In this research, two types of HDR display systems (hardcopy and softcopy) were constructed to facilitate the study of HDR image splitting algorithms for building HDR displays. A new HDR image splitting algorithm incorporating the iCAM06 image appearance model is proposed, seeking to create displayed HDR images with better image quality. The new algorithm has the potential to improve the perception of image detail, colorfulness, and gamut utilization. Finally, the performance of the new iCAM06-based HDR image splitting algorithm is evaluated and compared with the widely used luminance square root algorithm through psychophysical studies.
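The luminance square root baseline mentioned above is straightforward to sketch; the layer names and the normalized-luminance assumption are illustrative:

```python
import numpy as np

def split_sqrt(hdr_luminance):
    """Split an HDR luminance image (normalized to 0-1) into two SDR layers
    whose pixel-wise product reconstructs the original. Each layer then
    only needs the square root of the full dynamic range."""
    back = np.sqrt(hdr_luminance)               # drives the back image plane
    front = np.divide(hdr_luminance, back,
                      out=np.zeros_like(back),
                      where=back > 0)           # drives the front plane
    return back, front

hdr = np.array([[0.0, 0.25, 1.0]])
back, front = split_sqrt(hdr)
# back * front reproduces hdr exactly for this symmetric split
```

For this baseline the two layers are identical; appearance-based splitting instead redistributes detail between planes according to a model such as iCAM06.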
The effect of image size on the color appearance of image reproductions
Original and reproduced art are usually viewed under quite different viewing conditions. One of the interesting differences in viewing conditions is size. The main focus of this research was the investigation of the effect of image size on the color perception of rendered images. This research had several goals. The first goal was to develop an experimental paradigm for measuring the effect of image size on color appearance. The second goal was to identify the image attributes most affected by changes of image size. The final goal was to design and evaluate algorithms to compensate for the change of visual angle (size). To achieve the first goal, an exploratory experiment was performed using a colorimetrically characterized digital projector and LCD. The projector and LCD were light-emitting devices and in this sense were similar soft-copy media. The physical sizes of the reproduced images on the LCD and projector screen could be very different. Additionally, one could benefit from the flexibility of soft-copy reproduction devices, such as real-time image rendering, which is essential for adjustment experiments. The capability of the experimental paradigm to reveal the change of appearance for a change of visual angle (size) was demonstrated by conducting a paired-comparison experiment. Through contrast matching experiments, achromatic and chromatic contrast and the mean luminance of an image were identified as the attributes most affected by changes of image size. The extent and trend of change for each attribute were measured using matching experiments. Algorithms to compensate for the image size effect were designed and evaluated. The correction algorithms were tested against traditional colorimetric image rendering using a paired-comparison technique. The paired-comparison results confirmed the superiority of the algorithms over traditional colorimetric image rendering for size effect compensation.
Colorimetric tolerances of various digital image displays
Visual experiments on four displays (two LCDs, one CRT, and one hardcopy) were conducted to determine colorimetric tolerances of images systematically altered via three different transfer curves. The curves used were sigmoidal compression in L*, linear reduction in C*, and additive rotations in hue angle hab. More than 30 observers judged the detectability of these alterations on three pictorial images for each display. Standard probit analysis was then used to determine the detection thresholds for the alterations. It was found that the detection thresholds on LCDs were similar to or lower than those for the CRT in this type of experiment. Summarizing pixel-by-pixel image differences using the 90th percentile color difference in ΔE*ab was shown to be more consistent than similar measures in ΔE94 and a prototype ΔE2000. It was also shown that using the 90th percentile difference was more consistent than the average pixel-wise difference. Furthermore, S-CIELAB pre-filtering was shown to have little to no effect on the results of this experiment, since only global color changes were applied and no spatial alterations were used.
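The 90th percentile ΔE*ab summary statistic described above can be sketched as follows (CIE 1976 formula; the function names are illustrative):

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """Pixel-wise CIE 1976 color difference (Delta E*ab) between two
    CIELAB images of shape (..., 3)."""
    return np.sqrt(np.sum((lab1 - lab2) ** 2, axis=-1))

def percentile_summary(lab_ref, lab_test, q=90):
    """Summarize the image-wide difference map by its q-th percentile,
    the statistic found most consistent in the study above."""
    return np.percentile(delta_e_ab(lab_ref, lab_test), q)
```

Using a high percentile rather than the mean makes the summary track the most visible local alterations instead of averaging them away.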
The Effects of Multi-channel Visible Spectrum Imaging on Perceived Spatial Image Quality and Color Reproduction Accuracy
Two paired-comparison psychophysical experiments were performed. The stimuli consisted of six image types resulting from several multispectral image-capture and reconstruction techniques. A seventh image type, color-managed images from a high-end consumer camera, was also included in the first experiment to compare the accuracy of commercial RGB imaging. The images were evaluated under simulated daylight (6800K) and incandescent (2700K) illumination. The first experiment evaluated color reproduction accuracy. Under simulated daylight, the subjects judged all of the images to have the same color accuracy, except the consumer camera image, which was significantly worse. Under incandescent illumination, all the images, including the consumer camera, had equivalent performance. The second experiment evaluated image quality. The results of this experiment were highly target dependent. A subsequent image registration experiment showed that the results of the image quality experiment were affected by image registration to some degree. A combined analysis of the color reproduction accuracy and image quality experiments showed that the consumer camera image type was the least preferred overall. The most preferred image types were the thirty-one-channel image type and both six-channel image types created using RGB filters along with a Wratten filter, with eigenvector analysis and pseudo-inverse transformations.
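Pseudo-inverse spectral reconstruction of the kind mentioned above can be sketched with simulated data; all matrices below are random stand-ins for measured training spectra and sensor sensitivities:

```python
import numpy as np

# T: training set of reflectance spectra (wavelengths x samples);
# S: multi-channel sensor sensitivities; C: simulated camera responses.
rng = np.random.default_rng(0)
T = rng.random((31, 20))   # hypothetical 31-band training spectra
S = rng.random((6, 31))    # hypothetical 6-channel sensitivities
C = S @ T                  # 6-channel responses to the training set

# Linear transform from camera space back to spectra, built from the
# Moore-Penrose pseudo-inverse of the training responses.
W = T @ np.linalg.pinv(C)

def reconstruct(camera_response):
    """Estimate a 31-band spectrum from a 6-channel camera response."""
    return W @ camera_response
```

A useful property of this least-squares construction is that re-imaging a reconstructed training spectrum reproduces the original camera response exactly, even though the spectrum itself is only approximated.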
Individual Colorimetric Observers for Personalized Color Imaging
Colors are typically described by three values, such as RGB, XYZ, or HSV. This is rooted in the fact that humans possess three types of photoreceptors under photopic conditions, so human color vision can be characterized by a set of three color matching functions (CMFs). CMFs integrate spectra to produce three colorimetric values that are related to visual responses. In reality, large variations in CMFs exist among the color-normal population. Thus, a pair of spectrally different stimuli might be a match for one person but a mismatch for another, a phenomenon known as observer metamerism.
Observer metamerism is a serious issue in color-critical applications such as soft proofing in graphic arts and color grading in digital cinema, where colors are compared on different displays. Due to observer metamerism, calibrated displays might not appear correct, and one person might disagree with color adjustments made by another. The recent advent of wide-color-gamut display technologies (e.g., LEDs, OLEDs, lasers, and quantum dots) has made observer metamerism even more serious due to their spectrally narrow primaries. Variations in normal color vision and observer metamerism have been overlooked for many years. The current typical color imaging workflow uses a single standard observer, assuming all color-normal people possess the same CMFs. This dissertation provides a possible solution to observer metamerism in color-critical applications through personalized color imaging based on individual colorimetric observers.
In this dissertation, color matching data were first collected to derive and validate CMFs for individual colorimetric observers. The data from 151 color-normal observers were obtained at four different locations. Second, two types of individual colorimetric observer functions were derived and validated. One is an individual colorimetric observer model, an extension of the CIE 2006 physiological observer that incorporates eight physiological parameters to model individuals, in addition to the age and field size inputs. The other is a set of categorical observer functions providing a more convenient approach toward personalized color imaging. Third, two workflows were proposed to characterize human color vision: one using an anomaloscope and the other using proposed spectral pseudoisochromatic images. Finally, personalized color imaging was evaluated in a color image matching study on an LCD monitor and a laser projector, and in a perceived color difference study on a SHARP Quattron display. The personalized color imaging was implemented using the newly introduced iccMAX ICC profile.
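The spectral integration performed by CMFs, which underlies the observer metamerism discussed above, can be sketched as follows. The Gaussian CMFs here are toy stand-ins, not the CIE tables:

```python
import numpy as np

# Coarse wavelength sampling and toy Gaussian color matching functions;
# real applications would use tabulated CMFs (e.g., CIE 1931 or CIE 2006).
wavelengths = np.arange(400, 701, 10)  # nm, 31 samples
x_bar = np.exp(-0.5 * ((wavelengths - 595) / 35.0) ** 2)
y_bar = np.exp(-0.5 * ((wavelengths - 555) / 40.0) ** 2)
z_bar = np.exp(-0.5 * ((wavelengths - 450) / 25.0) ** 2)

def tristimulus(spd):
    """Integrate a spectral power distribution against the three CMFs,
    yielding the XYZ-like triplet that characterizes the visual response."""
    d_lambda = 10.0  # sampling interval in nm
    X = np.sum(spd * x_bar) * d_lambda
    Y = np.sum(spd * y_bar) * d_lambda
    Z = np.sum(spd * z_bar) * d_lambda
    return X, Y, Z

# Two spectrally different lights with equal (X, Y, Z) are metamers for
# these CMFs; an observer with shifted CMFs gets different integrals,
# which is exactly the observer metamerism problem.
```

Swapping in an individual observer's CMFs changes only the weighting functions, which is why a match for one observer can break for another.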
Test Targets 7.0: A Collaborative effort exploring the use of scientific methods for color imaging and process control
Test Targets is a culmination of teaching and learning that reflects quality and analytic aspects of printing systems and their optimization. The creation of the Test Targets publication is a total experience that reflects the innovation, problem solving, and teamwork of the diverse team of faculty, staff, students, and professionals responsible for its contents and production
Eye tracking observers during color image evaluation tasks
This thesis investigated eye movement behavior of subjects during image-quality evaluation and chromatic adaptation tasks. Specifically, the objectives focused on learning where people center their attention during color preference judgments, examining the differences between paired comparison, rank order, and graphical rating tasks, and determining what strategies are adopted when selecting or adjusting achromatic regions on a soft-copy display. In judging the most preferred image, measures of fixation duration showed that observers spend about 4 seconds per image in the rank order task, 1.8 seconds per image in the paired comparison task, and 3.5 seconds per image in the graphical rating task. Spatial distributions of fixations across the three tasks were highly correlated in four of the five images. Peak areas of attention gravitated toward faces and semantic features. Introspective report was not always consistent with where people foveated, implying broader regions of importance than eye movement plots. Psychophysical results across these tasks generated similar, but not identical, scale values for three of the five images. The differences in scales are likely related to statistical treatment and image confusability, rather than eye movement behavior. In adjusting patches to appear achromatic, about 95% of the total adjustment time was spent fixating only on the patch. This result shows that even when participants are free to move their eyes in this kind of task, central adjustment patches can discourage normal image viewing behavior. When subjects did look around (less than 5% of the time), they did so early during the trial. Foveations were consistently directed toward semantic features, not shadows or achromatic surfaces. This result shows that viewers do not seek out near-neutral objects to ensure that their patch adjustments appear achromatic in the context of the scene. They also do not scan the image in order to adapt to a gray world average. 
As demonstrated in other studies, the mean chromaticity of the image influenced observers' patch adjustments. Adaptation to the D93 white point was about 65% complete from D65. This result agrees reasonably with the time course of adaptation occurring over a 20 to 30 second exposure to the adapting illuminant. In selecting the most achromatic regions in the image, viewers spent 60% of the time scanning the scene. Unlike the achromatic patch adjustment task, foveations were consistently directed toward achromatic regions and near-neutral objects, as would be expected. Eye movement records show behavior similar to what is expected from a visual search task.
Evaluation of changes in image appearance with changes in displayed image size
This research focused on the quantification of changes in image appearance when images are displayed at different sizes on LCD devices. The final results were provided as calibrated Just Noticeable Differences (JNDs) on relevant perceptual scales, allowing the prediction of sharpness and contrast appearance with changes in the displayed image size.
A series of psychophysical experiments were conducted to enable appearance predictions. First, a rank order experiment was carried out to identify the image attributes most affected by changes in displayed image size. Two digital cameras, exhibiting very different reproduction qualities, were employed to capture the same scenes, for the investigation of the effect of original image quality on image appearance changes. A wide range of scenes with different scene properties was used as a test set for the investigation of image appearance changes with scene type. The outcomes indicated that sharpness and contrast were the most important attributes for the majority of scene types and original image qualities. Appearance matching experiments were further conducted to quantify changes in perceived sharpness and contrast with respect to changes in the displayed image size.
For the creation of sharpness matching stimuli, a set of frequency domain filters was designed to provide equal intervals in image quality, taking into account the system's Spatial Frequency Response (SFR) and the observation distance. For the creation of contrast matching stimuli, a series of spatial domain S-shaped filters was designed to provide equal intervals in image contrast via gamma adjustments. Five displayed image sizes were investigated. Observers were always asked to match the appearance of the smaller version of each stimulus to its larger reference. Lastly, rating experiments were conducted to validate the derived JNDs in perceptual quality for both sharpness and contrast stimuli. Data obtained from these experiments were finally converted into JND scales for each individual image attribute.
Linear functions were fitted to the final data, allowing the prediction of the appearance of images viewed at larger sizes than those investigated in this research.
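A mid-tone-pivoted gamma adjustment is one way to build the kind of S-shaped contrast curve described above; the exact curve used in the research is not specified here, so this form is an assumption:

```python
import numpy as np

def s_curve(image, gamma):
    """Mid-tone-pivoted S-shaped tone curve built from a gamma adjustment
    applied symmetrically about 0.5: gamma > 1 raises contrast, gamma < 1
    lowers it. A sketch of a spatial-domain contrast filter (assumed form)."""
    x = np.clip(image, 0.0, 1.0)
    lower = 0.5 * (2.0 * x) ** gamma              # shadows branch
    upper = 1.0 - 0.5 * (2.0 * (1.0 - x)) ** gamma  # highlights branch
    return np.where(x < 0.5, lower, upper)

# Sweeping gamma yields a family of contrast-varied stimuli from one image.
ramp = np.linspace(0.0, 1.0, 5)
high_contrast = s_curve(ramp, 2.0)  # darks pushed darker, lights lighter
```

Because the curve fixes 0, 0.5, and 1, black, mid-gray, and white are preserved while only the contrast around the mid-tone changes.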