
    Does Dehazing Model Preserve Color Information?

    Image dehazing aims at estimating the image information lost due to the presence of fog, haze, and smoke in the scene during acquisition. This degradation causes a loss of contrast and color information, making enhancement an inevitable task in imaging applications and consumer photography. Color information has mostly been evaluated perceptually along with overall quality, but no work addresses this aspect specifically. We demonstrate how a dehazing model affects color information on simulated and real images. We use a convergence model from the perception of transparency to simulate haze on images. We evaluate color loss in terms of hue angle in the IPT color space, saturation in the CIE LUV color space, and perceived color difference in the CIE LAB color space. Results indicate that saturation is critically changed, and that hue is changed for achromatic colors and blue/yellow colors, where the usual image processing spaces do not exhibit constant hue lines. We suggest that a correction model based on the perception of color transparency could help retrieve color information as an additive layer on top of dehazing algorithms.
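    The hue-angle and color-difference measures the abstract relies on can be sketched in a few lines; the patch values below are illustrative, not the paper's data, and the same arctangent/Euclidean forms apply whether the coordinates come from CIELAB, IPT, or CIELUV:

    ```python
    import numpy as np

    def hue_angle(a, b):
        """Hue angle in degrees from a pair of opponent coordinates
        (e.g. CIELAB a*, b* or the P, T axes of IPT)."""
        return np.degrees(np.arctan2(b, a)) % 360.0

    def delta_e_ab(lab1, lab2):
        """Euclidean CIE 1976 color difference between two Lab triplets."""
        return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

    # A hazy rendering of a patch vs. its dehazed estimate (made-up Lab values)
    hazy = (62.0, 5.0, 12.0)
    dehazed = (58.0, 8.0, 18.0)
    print(hue_angle(hazy[1], hazy[2]))   # hue angle before dehazing, ~67.4 deg
    print(delta_e_ab(hazy, dehazed))     # perceived difference, sqrt(61) ~ 7.81
    ```

    Comparing the hue angle before and after dehazing, patch by patch, is the kind of evaluation the abstract describes for achromatic and blue/yellow colors.
    
    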

    A Design Method of Saturation Test Image Based on CIEDE2000

    To generate color test images consistent with human perception in terms of saturation, lightness, and hue, we propose a saturation test image design method based on the CIEDE2000 color difference formula. The method exploits the subjective saturation parameter C′ of CIEDE2000 to produce a series of test images with different saturation but identical lightness and hue. It is found experimentally that visual perception has a linear relationship with the saturation parameter C′. Such saturation test images have various applications, for example in checking color masking effects in visual experiments and in testing the visual effects of the image similarity component.
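    The core idea of holding lightness and hue fixed while stepping only the chroma/saturation parameter can be sketched as a cylindrical-to-Cartesian conversion; this is a minimal illustration in CIELAB LCh terms, not the paper's CIEDE2000 C′ machinery:

    ```python
    import numpy as np

    def lch_series(L, h_deg, chromas):
        """Return Lab triplets that share lightness L and hue angle h_deg
        while stepping through the given chroma values -- the skeleton of a
        saturation test series with constant lightness and hue."""
        h = np.radians(h_deg)
        return [(L, C * np.cos(h), C * np.sin(h)) for C in chromas]

    # Five patches at L* = 50, hue 30 degrees, chroma from neutral to vivid
    patches = lch_series(L=50.0, h_deg=30.0, chromas=[0, 10, 20, 30, 40])
    ```

    Rendering each triplet as a uniform patch (after conversion to a display RGB space) yields a series that varies only in perceived saturation, which is what the experiment requires.
    
    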

    Color in scientific visualization: Perception and image-based data display

    Visualization is the transformation of information into a visual display that enhances users' understanding and interpretation of the data. This thesis project investigated the use of color and human vision modeling for the visualization of image-based scientific data. Two preliminary psychophysical experiments were first conducted on uniform color patches to analyze the perception and understanding of different color attributes, providing psychophysical evidence and guidance for the choice of color spaces and attributes for color encoding. Perceptual color scales were then designed for univariate and bivariate image data display, and their effectiveness was evaluated through three psychophysical experiments. Some general guidelines were derived for effective color scale design. Extending to high-dimensional data, two visualization techniques were developed for hyperspectral imagery. The first approach takes advantage of the underlying relationships between PCA/ICA of hyperspectral images and the human opponent color model, and maps the first three PCs or ICs to several opponent color spaces, including CIELAB, HSV, YCbCr, and YUV. The gray world assumption was adopted to set the mapping origins automatically. The rendered images are well color balanced and can offer a first-look capability or initial classification for a wide variety of spectral scenes. The second approach combines a true color image and a PCA image based on a biologically inspired visual attention model that simulates the center-surround structure of visual receptive fields as the difference between fine and coarse scales. The model was extended to take human contrast sensitivity into account and to include high-level information, such as second-order statistical structure in the form of a local variance map, in addition to low-level features such as color, luminance, and orientation. It generates a topographic saliency map for both the true color image and the PCA image; a difference map is then derived and used as a mask to select interesting locations where the PCA image has more salient features than are available in the visible bands. The resulting representations preserve the consistent natural appearance of the scene, while the selected attentional locations may be analyzed by more advanced algorithms.
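    The first approach's mapping of principal components onto luminance/opponent-style channels can be sketched as follows; this is a bare-bones illustration (PCA via SVD, simple rescaling, gray-world-style centering of the chromatic channels), not the thesis's full pipeline:

    ```python
    import numpy as np

    def pca_false_color(cube):
        """Map the first three principal components of a hyperspectral cube
        of shape (H, W, bands) onto three display channels: PC1 rescaled to
        [0, 1] as a luma-like channel, PC2 and PC3 to [-0.5, 0.5] as
        chroma-like channels centered on zero (a gray-world-style origin)."""
        H, W, B = cube.shape
        X = cube.reshape(-1, B).astype(float)
        X -= X.mean(axis=0)                      # center the spectra
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        pcs = X @ Vt[:3].T                       # project onto first three PCs
        out = np.empty_like(pcs)
        for i in range(3):
            c = pcs[:, i]
            span = c.max() - c.min() or 1.0
            out[:, i] = (c - c.min()) / span     # rescale each PC to [0, 1]
        out[:, 1:] -= 0.5                        # chroma channels around zero
        return out.reshape(H, W, 3)

    rng = np.random.default_rng(0)
    img = pca_false_color(rng.random((8, 8, 31)))  # toy 31-band cube
    ```

    Feeding the three output channels into YCbCr or a Lab-style space, as the thesis does, then gives a color-balanced false-color rendering of the scene.
    
    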

    Understanding perceived quality through visual representations

    The formatting of images can be considered an optimization problem whose cost function is a quality assessment algorithm. There is a trade-off between bit budget per pixel and quality. To maximize quality while minimizing the bit budget, we need to measure perceived quality. In this thesis, we focus on understanding perceived quality through visual representations that are based on visual system characteristics and color perception mechanisms. Specifically, we use the contrast sensitivity mechanisms in retinal ganglion cells and the suppression mechanisms in cortical neurons. We utilize color difference equations and color name distances to mimic pixel-wise color perception, and a bio-inspired model to formulate center-surround effects. Based on these formulations, we introduce two novel image quality estimators, PerSIM and CSV, and a new image quality-assistance method, BLeSS. We combine our findings from the visual system and color perception with data-driven methods to generate visual representations and measure their quality. The majority of existing data-driven methods require subjective scores or degraded images. In contrast, we follow an unsupervised approach that only utilizes generic images. We introduce a novel unsupervised image quality estimator, UNIQUE, and extend it with multiple models and layers to obtain MS-UNIQUE and DMS-UNIQUE. In addition to introducing quality estimators, we analyze the role of spatial pooling and boosting in image quality assessment.
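    The spatial pooling question the thesis analyzes can be illustrated with a toy pixel-wise difference map; this sketch is not PerSIM or CSV, just the generic pattern of computing a per-pixel color difference and then pooling it, either by a plain mean or by averaging only the worst pixels:

    ```python
    import numpy as np

    def quality_map(ref, test):
        """Per-pixel Euclidean color difference between two (H, W, 3) images."""
        return np.linalg.norm(ref.astype(float) - test.astype(float), axis=-1)

    def pool(diff_map, percentile=None):
        """Spatial pooling of a difference map: a plain mean, or the mean over
        only the worst `percentile` percent of pixels -- a common way to let a
        few salient degradations dominate the score."""
        if percentile is None:
            return float(diff_map.mean())
        cutoff = np.percentile(diff_map, 100 - percentile)
        return float(diff_map[diff_map >= cutoff].mean())
    ```

    On a map with one severe local artifact, percentile pooling reports a much worse score than the mean, which is exactly the behavior pooling strategies are chosen to control.
    
    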

    On the Salience of Novel Stimuli: Adaptation and Image Noise

    Webster has proposed "that adaptation increases the salience of novel stimuli by partially discounting the ambient background." This is an excellent, concise description of the purpose and function of chromatic adaptation in image reproduction applications. However, Webster was not limiting this proposal to chromatic adaptation alone, but rather using it as a general description for all forms of perceptual adaptation. Demonstrations of adaptation to other properties of image displays, such as motion, blur, and spatial frequency, led the authors to ponder whether observers might adapt to the noise structure in images to enhance the novel stimuli, i.e., the systematic image content. This paper describes psychophysical measurements of noise adaptation in color image perception and explores mathematical prediction of the effect. The results illustrate the hypothesized pattern-dependent adaptation and its prediction through adaptation of a 2-D contrast sensitivity function in an image-appearance-model-based difference metric.
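    The 2-D CSF-weighted difference metric mentioned at the end can be sketched as a frequency-domain weighting; the band-pass CSF shape and its peak frequency below are hypothetical stand-ins, not the paper's fitted model:

    ```python
    import numpy as np

    def csf_weight(shape, peak=4.0):
        """Toy band-pass contrast sensitivity weights over spatial frequency:
        (f/peak) * exp(1 - f/peak) has unit gain at `peak` cycles per image
        and falls off at low and high frequencies (an illustrative shape)."""
        fy = np.fft.fftfreq(shape[0])[:, None] * shape[0]
        fx = np.fft.fftfreq(shape[1])[None, :] * shape[1]
        f = np.hypot(fy, fx)
        return (f / peak) * np.exp(1.0 - f / peak)

    def csf_difference(ref, test, peak=4.0):
        """RMS of the CSF-weighted luminance difference between two images."""
        d = np.fft.fft2(ref.astype(float) - test.astype(float))
        weighted = np.fft.ifft2(d * csf_weight(ref.shape, peak)).real
        return float(np.sqrt(np.mean(weighted ** 2)))
    ```

    Noise adaptation of the kind the paper measures could then be modeled by attenuating these weights near the noise's dominant frequencies before computing the difference, so that adapted-to noise contributes less to the predicted visibility.
    
    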

    Assessment of #TheDress With Traditional Color Vision Tests: Perception Differences Are Associated With Blueness

    Based on known color vision theories, there is no complete explanation for the perceptual dichotomy of #TheDress, in which most people see either white-and-gold (WG) or blue-and-black (BK). We determined whether some standard color vision tests (i.e., color naming, color matching, anomaloscope settings, unique white settings, and color preferences), as well as chronotypes, could provide information on the color perceptions of #TheDress. Fifty-two young observers were tested. Fifteen of the observers (29%) reported the colors as BK, 21 (40%) as WG, and 16 (31%) reported a different combination of colors. Observers who perceived WG required significantly more blue in their unique white settings than those who perceived BK. The BK, blue-and-gold, and WG observer groups had significantly different color preferences for the light cyan chip. Moreland equation anomaloscope matching showed a significant difference between WG and BK observers. In addition, #TheDress color perception categories, color preference outcomes, and unique white settings had a common association. For both the bright and dark regions of #TheDress, the color matching chromaticities formed a continuum, approximately following the daylight chromaticity locus. Color matching to the bright region of #TheDress showed two nearly distinct clusters (WG vs. BK) along the daylight chromaticity locus, and there was a clear cutoff for reporting WG versus BK. All results showing a significant difference involved blue percepts, possibly due to interpretations of the illuminant's interaction with the dress material. This suggests that variations in attributing blueness to the #TheDress image may be significant variables determining its color perception.
    Affiliations: Claudia Feitosa-Santana (Universidade Federal do ABC, Brazil); Margaret Lutze (DePaul University, United States); Pablo Alejandro Barrionuevo (CONICET, Instituto de Investigación en Luz, Ambiente y Visión, Universidad Nacional de Tucumán, Argentina); Dingcai Cao (University of Illinois, United States).

    Scaling lightness perception and differences above and below diffuse white and modifying color spaces for high-dynamic-range scenes and images

    The first purpose of this thesis was to design and complete psychophysical experiments for scaling lightness and lightness differences for achromatic percepts above and below the lightness of diffuse white (L* = 100). The below-diffuse-white experiments were conducted under reference conditions recommended by the CIE for color difference research. Overall, a range of CIELAB lightness values from 7 to 183 was investigated. The psychophysical techniques of partition scaling and constant stimuli were applied for scaling lightness perception and lightness differences, respectively. The results indicate that the existing L* and CIEDE2000 weighting functions approximately predict the trends but do not fit the visual data well. Hence, three optimized functions are proposed: a lightness function, a lightness-difference weighting function for the wide range, and a lightness-difference weighting function for the range below diffuse white. The second purpose of this thesis was to modify color spaces for high-dynamic-range scenes and images. Traditional color spaces have been widely used in a variety of applications, including digital color imaging, color image quality, and color management. These spaces, however, were designed for the domain of color stimuli typically encountered with reflecting objects and image displays of such objects, that is, stimuli with luminance levels from slightly above zero to that of a perfect diffuse white (or the display white point). This limits the applicability of such spaces to color problems in high-dynamic-range (HDR) imaging, because of their hard intercepts at zero luminance/lightness and their uncertain applicability for colors brighter than diffuse white. To address HDR applications, two new color spaces were recently proposed by Fairchild and Wyble: hdr-CIELAB and hdr-IPT. They are based on replacing the power-function nonlinearities in CIELAB and IPT with more physiologically plausible hyperbolic functions, optimized to most closely simulate the original color spaces in the diffuse reflecting color domain. This thesis presents the formulation of the new models, evaluations using Munsell data in comparison with CIELAB, IPT, and CIECAM02, two sets of lightness-scaling data above diffuse white, and various possible formulations of hdr-CIELAB and hdr-IPT to predict the visual results.
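    The key substitution, a power-function lightness replaced by a hyperbolic (Michaelis-Menten) one, can be sketched as follows; the exponent and semi-saturation constant here are illustrative stand-ins chosen only so that diffuse white maps to 100, not the published hdr-CIELAB parameters:

    ```python
    import numpy as np

    def lstar_power(Y):
        """Classic CIELAB-style lightness for relative luminance Y (diffuse
        white at Y = 1). Note the unbounded growth for Y > 1."""
        return 116.0 * np.cbrt(Y) - 16.0

    def lstar_hyperbolic(Y, epsilon=0.58, g=0.25):
        """Michaelis-Menten (hyperbolic) lightness, normalized so that
        diffuse white (Y = 1) maps to exactly 100; the response saturates
        gracefully instead of diverging as Y grows. epsilon and g are
        hypothetical values for illustration."""
        se = g ** epsilon
        Ye = np.power(Y, epsilon)
        return 100.0 * (1.0 + se) * Ye / (Ye + se)
    ```

    For a stimulus at 100 times diffuse white, the power form exceeds 500 while the hyperbolic form stays below its asymptote of 100 * (1 + g^epsilon), which is the behavior that makes such functions usable above diffuse white.
    
    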

    Sketch Plus Colorization Deep Convolutional Neural Networks for Photos Generation from Sketches

    In this paper, we introduce a method to generate photos from sketches using deep convolutional neural networks (DCNNs). This research proposes a method that combines a network that inverts sketches into photos (sketch inversion net) with a network that predicts color given grayscale images (colorization net). With this method, the quality of the generated photos is expected to be closer to that of the actual photos. We first artificially constructed uncontrolled conditions for the dataset. The dataset, which consists of hand-drawn sketches and their corresponding photos, was pre-processed using several data augmentation techniques to train the models to address the issues of rotation, scaling, shape, noise, and positioning. Validation was measured using two types of similarity measurements: pixel-difference based metrics and human visual system (HVS) metrics, which mimic human perception in evaluating image quality. The pixel-difference based metrics consist of Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR), while the HVS metrics consist of the Universal Image Quality Index (UIQI) and Structural Similarity (SSIM). Our method gives the best quality of generated photos across all measures (844.04 for MSE, 19.06 for PSNR, 0.47 for UIQI, and 0.66 for SSIM).
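    The two pixel-difference metrics reported above are standard and compact enough to state directly; this sketch assumes 8-bit images (peak value 255), which is the usual convention when PSNR values in this range are reported:

    ```python
    import numpy as np

    def mse(ref, out):
        """Mean squared error between two images of the same shape."""
        return float(np.mean((ref.astype(float) - out.astype(float)) ** 2))

    def psnr(ref, out, peak=255.0):
        """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
        Identical images yield infinite PSNR."""
        m = mse(ref, out)
        return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)
    ```

    Lower MSE and higher PSNR both indicate generated photos closer to the ground-truth photos, which is how the paper ranks its results.
    
    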