
    Appearance-based image splitting for HDR display systems

    High dynamic range (HDR) displays that incorporate two optically coupled image planes have recently been developed. This dual image plane design requires that a given HDR input image be split into two complementary standard dynamic range components that drive the coupled systems, which raises the question of how best to perform the split. In this research, two types of HDR display system (hardcopy and softcopy) are constructed to facilitate the study of HDR image splitting algorithms for building HDR displays. A new HDR image splitting algorithm incorporating the iCAM06 image appearance model is proposed, seeking to produce displayed HDR images with better image quality. The new algorithm has the potential to improve perceived image detail and colorfulness and to make better use of the display gamut. Finally, the performance of the new iCAM06-based HDR image splitting algorithm is evaluated against the widely used luminance square-root algorithm through psychophysical studies.
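
    The luminance square-root baseline referenced above has a simple closed form: because the two optically coupled planes multiply, driving each plane with the square root of the normalized target luminance reconstructs that luminance on screen. A minimal sketch in Python/NumPy, assuming linear-luminance input and ignoring the color handling and point-spread compensation a real system needs:

        import numpy as np

        def sqrt_split(hdr_luminance, peak_luminance):
            """Split a linear HDR luminance map into two SDR drive planes.

            The dual-plane display forms its output as the optical product
            of the planes, so sqrt(L) * sqrt(L) = L recovers the target.
            """
            target = np.clip(hdr_luminance / peak_luminance, 0.0, 1.0)
            plane = np.sqrt(target)
            return plane, plane  # back plane, front plane

    In real dual-modulation systems the front (color) plane is then typically compensated by dividing the target image by the blurred back-plane output, so the optical product still lands on the intended image.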

    Expanding Dimensionality in Cinema Color: Impacting Observer Metamerism through Multiprimary Display

    Television and cinema displays are both trending towards greater ranges and saturation of reproduced colors, made possible by near-monochromatic RGB illumination technologies. Through current broadcast and digital cinema standards work, system designs employing laser light sources, narrow-band LEDs, quantum dots and other technologies are being actively endorsed in promotion of Wide Color Gamut (WCG). Despite the artistic benefits for creative content producers, such spectrally selective excitation of naturally differing human color response functions exacerbates variability in observer experience. An exaggerated variation in color-sensing is explicitly counter to the exhaustive controls and calibrations employed in modern motion picture pipelines. Further, single standard-observer summaries of human color vision, such as the CIE's 1931 and 1964 color matching functions used extensively in motion picture color management, are deficient in recognizing expected human vision variability. Many researchers have confirmed the magnitude of observer metamerism in color matching for both uniform colors and imagery, but few have demonstrated explicit color management aimed at minimizing variability in observer perception. This research shows not only that observer metamerism influences can be quantitatively predicted and confirmed psychophysically, but also that intentionally engineered multiprimary displays employing more than three primaries can offer an increased color gamut with drastically improved consistency of experience. To this end, a seven-channel prototype display has been constructed based on observer metamerism models and color difference indices derived from the latest color vision demographic research. This display has further been shown in forced-choice paired-comparison tests to deliver superior color matching to reference stimuli, versus both contemporary standard RGB cinema projection and the recently ratified standard laser projection, across a large population of color-normal observers.
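
    To make the engineering idea concrete: with more than three primaries, the drive weights have spare degrees of freedom that can be spent minimizing match error across several observers' color matching functions at once. A minimal sketch of that joint least-squares formulation, with hypothetical placeholder spectra (the thesis' actual observer models, demographic data and difference indices are not reproduced here):

        import numpy as np

        # Hypothetical data: 7 primary spectra sampled at 401 wavelengths,
        # plus color matching functions (CMFs) for several observers.
        n_wl, n_primaries, n_observers = 401, 7, 5
        rng = np.random.default_rng(0)
        P = rng.random((n_wl, n_primaries))        # columns = primary channels
        cmfs = rng.random((n_observers, 3, n_wl))  # per-observer 3xN CMFs
        reference = rng.random(n_wl)               # reference stimulus spectrum

        # Minimize sum_k ||O_k P w - O_k s||^2 over the drive weights w by
        # stacking one tristimulus-match constraint per observer.
        A = np.vstack([O @ P for O in cmfs])             # (3*n_observers, 7)
        b = np.concatenate([O @ reference for O in cmfs])
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        # A physical display also needs w >= 0; scipy.optimize.nnls covers that.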

    Lightness, Brightness, and Transparency in Optical See-Through Augmented Reality

    Augmented reality (AR), as a key component of the future metaverse, has leaped from research labs to the consumer and enterprise markets. AR optical see-through (OST) devices utilize transparent optical combiners to provide visibility of the real environment and to superimpose virtual content on top of it. OST displays are distinct from existing media because of their optical additivity: the light reaching the eyes is composed of both the virtual content and the real background. This composition results in the intended virtual colors being distorted and perceived as transparent. When the luminance of the virtual content decreases, the perceived lightness and brightness decrease and the perceived transparency increases. Lightness, brightness, and transparency are thus modulated by one physical dimension (luminance), and all interact with the background and with each other. In this research, we aim to identify and quantify the three perceptual dimensions, and to build mathematical models that predict them. The first part of the study focused on perceived brightness and lightness, with two experiments: a brightness partition scaling experiment to build brightness scales, and a diffuse white adjustment experiment to determine the absolute luminance level required for a diffuse white appearance on 2D and 3D AR stimuli. The second part of the research targeted perceived transparency in the AR environment, with three experiments. Transparency was modulated by reducing the background's Michelson contrast through either the average luminance or the peak-to-peak luminance difference, in order to investigate, and later illustrate, the fundamental mechanism evoking transparency perception. The first experiment measured transparency detection thresholds and confirmed that contrast sensitivity functions with contrast adaptation could model them. Subsequently, transparency perception was investigated through a direct anchored scaling experiment, building perceived transparency scales from the contrast ratio of the virtual content to the background. A contrast-ratio-based model was proposed to predict the perceived transparency scales. Finally, a transparency equivalency experiment between the two types of contrast modulation confirmed the mechanism difference and validated the proposed model.
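
    The two contrast manipulations named above are easy to state precisely. A small sketch of the Michelson contrast and the two ways of reducing it (illustrative only; the model coefficients from the thesis are not reproduced):

        import numpy as np

        def michelson_contrast(l_max, l_min):
            """Michelson contrast C = (Lmax - Lmin) / (Lmax + Lmin)."""
            return (l_max - l_min) / (l_max + l_min)

        def reduce_via_mean(lum, amount):
            """Raise the average luminance; the peak-to-peak swing is unchanged,
            but the larger denominator lowers the Michelson contrast."""
            return lum + lum.mean() * amount

        def reduce_via_swing(lum, factor):
            """Shrink the peak-to-peak swing around the mean (0 < factor < 1);
            the smaller numerator lowers the Michelson contrast."""
            mean = lum.mean()
            return mean + (lum - mean) * factor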

    Camera based Display Image Quality Assessment

    This thesis presents the outcomes of research carried out by the PhD candidate Ping Zhao from 2012 to 2015 at Gjøvik University College. The underlying research was part of the HyPerCept project, in the Strategic Projects for University Colleges program funded by The Research Council of Norway. The research was conducted under the supervision of Professor Jon Yngve Hardeberg and the co-supervision of Associate Professor Marius Pedersen, from The Norwegian Colour and Visual Computing Laboratory, Faculty of Computer Science and Media Technology, Gjøvik University College, as well as the co-supervision of Associate Professor Jean-Baptiste Thomas, from the Laboratoire Electronique, Informatique et Image, Faculty of Computer Science, Université de Bourgogne. The main goal of this research was to develop a fast and inexpensive camera-based display image quality assessment framework. Due to the limited time frame, we decided to focus only on projection displays with static images displayed on them. However, the proposed methods are not limited to projection displays, and they are expected to work with other types of displays, such as desktop monitors, laptop screens, smartphone screens, etc., with limited modifications. The primary contributions of this research can be summarized as follows:
    1. We proposed a camera-based display image quality assessment framework, originally designed for projection displays but usable for other types of displays with limited modifications.
    2. We proposed a method to calibrate the camera in order to eliminate the unwanted vignetting artifact, which is mainly introduced by the camera lens.
    3. We proposed a method to optimize the camera's exposure with respect to the measured luminance of incident light, so that after calibration all camera sensors share a common linear response region.
    4. We proposed a marker-less and view-independent method to register a captured image with its original at a sub-pixel level, so that existing full-reference image quality metrics can be incorporated without modification.
    5. We identified spatial uniformity, contrast and sharpness as the most important image quality attributes for projection displays, and we used the proposed framework to evaluate the prediction performance of state-of-the-art image quality metrics for these attributes.
    The proposed image quality assessment framework is the core contribution of this research. Compared to conventional image quality assessment approaches, which are largely based on colorimeter or spectroradiometer measurements, using a camera as the acquisition device has the advantages of quickly recording all displayed pixels in one shot with a relatively inexpensive instrument. The consumption of time and resources for image quality assessment can therefore be largely reduced. We proposed a method to calibrate the camera in order to eliminate the unwanted vignetting artifact primarily introduced by the camera lens. We used a hazy sky as a closely uniform light source, and the vignetting mask was generated from the median sensor responses over only a few rotated shots of the same spot on the sky. We also proposed a method to quickly determine whether all camera sensors share a common linear response region.
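
    The vignetting correction described above reduces to a per-pixel flat-field mask. A minimal sketch under the stated setup, assuming a stack of captures of the same patch of hazy sky (names are illustrative, not from the thesis):

        import numpy as np

        def vignetting_mask(flat_field_shots):
            """Build a vignetting mask from N rotated shots of a uniform source.

            flat_field_shots has shape (N, H, W); the median across shots
            suppresses residual non-uniformity in any single capture.
            """
            median = np.median(flat_field_shots, axis=0)
            return median / median.max()  # 1.0 at the least vignetted pixel

        def correct_vignetting(captured, mask):
            """Divide out the lens falloff recorded in the mask."""
            return captured / mask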
    In order to incorporate existing full-reference image quality metrics without modifying them, an accurate registration of pairs of pixels between a captured image and its original is required. We proposed a marker-less and view-independent image registration method to solve this problem. The experimental results showed that the proposed method worked well in viewing conditions with low ambient light. We further identified spatial uniformity, contrast and sharpness as the most important image quality attributes for projection displays. Subsequently, we used the developed framework to objectively evaluate the prediction performance of state-of-the-art image quality metrics for these attributes in a robust manner. In this process, the metrics were benchmarked with respect to the correlations between their predictions and the perceptual ratings collected from subjective experiments. The analysis of the experimental results indicated that our proposed methods were effective and efficient. Subjective experiments are an essential component of image quality assessment; however, they can be time- and resource-consuming, especially when additional image distortion levels are required to extend existing subjective experimental results. For this reason, we investigated the possibility of extending subjective experiments with the baseline adjustment method, and we found that the method could work well if appropriate strategies were applied. These strategies concern which distortion levels to include in the baseline, as well as their number.
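
    The thesis' own marker-less, view-independent registration method is not reproduced in the abstract, but a generic feature-based baseline conveys the idea: estimate a homography between the captured photograph and the original image, then warp the capture onto the original's pixel grid so full-reference metrics can be applied directly. A sketch using OpenCV (a stand-in pipeline, not the proposed sub-pixel method):

        import cv2
        import numpy as np

        def register_to_original(captured_gray, original_gray):
            """Warp a captured photo of the display onto the original's pixel grid."""
            orb = cv2.ORB_create(5000)
            kp1, des1 = orb.detectAndCompute(captured_gray, None)
            kp2, des2 = orb.detectAndCompute(original_gray, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
            src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            h, w = original_gray.shape[:2]
            return cv2.warpPerspective(captured_gray, H, (w, h))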

    Evaluation of changes in image appearance with changes in displayed image size

    This research focused on the quantification of changes in image appearance when images are displayed at different sizes on LCD devices. The final results were provided as calibrated Just Noticeable Differences (JNDs) on relevant perceptual scales, allowing the prediction of sharpness and contrast appearance with changes in the displayed image size. A series of psychophysical experiments was conducted to enable appearance predictions. Firstly, a rank order experiment was carried out to identify the image attributes most affected by changes in displayed image size. Two digital cameras, exhibiting very different reproduction qualities, were employed to capture the same scenes, to investigate the effect of the original image quality on image appearance changes. A wide range of scenes with different scene properties was used as a test set to investigate how image appearance changes vary with scene type. The outcomes indicated that sharpness and contrast were the most important attributes for the majority of scene types and original image qualities. Appearance matching experiments were then conducted to quantify changes in perceived sharpness and contrast with respect to changes in the displayed image size. For the creation of sharpness matching stimuli, a set of frequency domain filters was designed to provide equal intervals in image quality, taking into account the system's Spatial Frequency Response (SFR) and the observation distance. For the creation of contrast matching stimuli, a series of spatial domain S-shaped filters was designed to provide equal intervals in image contrast via gamma adjustments. Five displayed image sizes were investigated. Observers were always asked to match the appearance of the smaller version of each stimulus to its larger reference. Lastly, rating experiments were conducted to validate the derived JNDs in perceptual quality for both sharpness and contrast stimuli. Data obtained from these experiments were finally converted into JND scales for each individual image attribute. Linear functions were fitted to the final data, allowing the prediction of image appearance for images viewed at larger sizes than those investigated in this research.
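
    As one concrete illustration of an S-shaped contrast filter built from gamma adjustments (the thesis' exact filter design and parameters are not reproduced; this is a common construction):

        import numpy as np

        def s_curve(img, gamma):
            """Symmetric S-shaped tone curve made of two gamma segments.

            img is normalized to [0, 1]; gamma > 1 steepens the midtones
            (more contrast), gamma < 1 flattens them (less contrast).
            """
            return np.where(
                img < 0.5,
                0.5 * (2.0 * img) ** gamma,                # shadow segment
                1.0 - 0.5 * (2.0 * (1.0 - img)) ** gamma,  # highlight segment
            )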

    Evaluation of the color image and video processing chain and visual quality management for consumer systems

    With the advent of novel digital display technologies, color processing is increasingly becoming a key aspect of consumer video applications. Today's state-of-the-art displays require sophisticated color and image reproduction techniques in order to achieve larger screen sizes, higher luminance and higher resolution than ever before. However, from a color science perspective, there are clear opportunities for improvement in the color reproduction capabilities of various emerging and conventional display technologies. This research seeks to identify potential areas for improvement in color processing in a video processing chain. As part of this research, the various processes involved in a typical consumer video processing chain were reviewed. Several published color and contrast enhancement algorithms were evaluated, and a novel algorithm was developed to enhance color and contrast in images and videos in an effective and coordinated manner. Further, a psychophysical technique was developed and implemented for visual evaluation of color image and consumer video quality. Based on the performance analysis and visual experiments involving various algorithms, guidelines were proposed for the development of an effective color and contrast enhancement method for image and video applications. It is hoped that the knowledge gained from this research will help build a better understanding of color processing and color quality management methods in consumer video.
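
    The abstract does not describe the proposed algorithm itself, but "coordinated" color and contrast enhancement is commonly illustrated by working in a lightness-chroma decomposition, so the two adjustments do not fight each other. A generic sketch of that idea (explicitly not the thesis method) using OpenCV:

        import cv2
        import numpy as np

        def enhance(bgr, chroma_gain=1.15):
            """Local contrast on lightness plus a chroma gain, applied separately."""
            lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
            L, a, b = cv2.split(lab)
            # Contrast enhancement on the lightness channel only.
            L = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(L)
            # Scale chroma around the neutral axis (128 in 8-bit Lab).
            a = np.clip((a.astype(np.float32) - 128) * chroma_gain + 128, 0, 255).astype(np.uint8)
            b = np.clip((b.astype(np.float32) - 128) * chroma_gain + 128, 0, 255).astype(np.uint8)
            return cv2.cvtColor(cv2.merge([L, a, b]), cv2.COLOR_LAB2BGR)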

    Human-centered display design: balancing technology & perception


    The LLAB model for quantifying colour appearance

    A reliable colour appearance model is desired by industry to achieve high colour fidelity between images produced using a range of different imaging devices. The aim of this study was to derive a reliable colour appearance model capable of predicting the change of perceived colour appearance attributes under a wide range of media and viewing conditions. The research was divided into three parts: characterising imaging devices, conducting a psychophysical experiment, and developing a colour appearance model. Various imaging devices were characterised, including a graphic art scanner, a Cromalin proofing system, an IRIS ink jet printer, and a Barco Calibrator monitor. For the former three devices, each colour is described by four primaries: cyan (C), magenta (M), yellow (Y), and black (K). Three sets of characterisation samples (120- and 31-sample black printer sets, and a cube data set) were produced and measured for deriving and testing the printing characterisation models. Four black printer algorithms (BPAs) were derived, each including both forward and reverse processes. The second BPA model, which accounts for additivity failure via a grey component replacement (GCR) algorithm, predicted the characterisation data set more accurately than the other BPA models. The PLCC (Piecewise Linear interpolation assuming Constant Chromaticity coordinates) model was implemented to characterise the Barco monitor. The psychophysical experiment compared Cromalin hardcopy images viewed in a viewing cabinet against softcopy images presented on a monitor under a wide range of illuminants (white points), including D93, D65, D50 and A. Two scaling methods, category judgement and paired comparison, were employed, with observers viewing pairs of images. Three classes of colour model were evaluated: uniform colour spaces, colour appearance models and chromatic adaptation transforms. Six images were selected and processed via each colour model. The results indicated that the BFD chromatic adaptation transform gave the most accurate predictions of the visual results. Finally, a colour appearance model, LLAB, was developed. It combines the BFD chromatic adaptation transform with a modified version of the CIELAB uniform colour space, fitted to the previously accumulated LUTCHI Colour Appearance Data. The LLAB model is much simpler in form, and fits the experimental data more precisely, than the other models.
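
    The BFD chromatic adaptation transform at the core of LLAB is, in its widely used linearized form, a von Kries-style scaling in a "sharpened" RGB space. A sketch of that linearized form (the full BFD transform also applies a nonlinear correction to the blue channel, omitted here for brevity):

        import numpy as np

        # Bradford (BFD) matrix mapping XYZ to sharpened cone-like RGB.
        M_BFD = np.array([[ 0.8951,  0.2664, -0.1614],
                          [-0.7502,  1.7135,  0.0367],
                          [ 0.0389, -0.0685,  1.0296]])

        def bfd_adapt(xyz, white_src, white_dst):
            """Adapt XYZ tristimulus values from a source to a destination white."""
            gain = (M_BFD @ white_dst) / (M_BFD @ white_src)  # per-channel scaling
            return np.linalg.inv(M_BFD) @ (gain * (M_BFD @ xyz))

        # Example: adapt a stimulus from illuminant A to D65 (whites at Y = 1).
        white_A = np.array([1.0985, 1.0, 0.3558])
        white_D65 = np.array([0.9505, 1.0, 1.0888])
        print(bfd_adapt(np.array([0.4, 0.4, 0.2]), white_A, white_D65))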