
    Brilliance, contrast, colorfulness, and the perceived volume of device color gamut

    With the advent of digital video and cinema media technologies, much more is possible in achieving brighter and more vibrant colors, colors that transcend our everyday experience. The challenge is in realizing these possibilities in an industry rooted in 1950s technology, where color gamut is represented with little or no insight into the way an observer perceives color as a complex mixture of the observer’s intentions, desires, and interests. By today’s standards, five perceptual attributes – brightness, lightness, colorfulness, chroma, and hue – are believed to be required for a complete specification. As a compelling case for such a representation, a display system is demonstrated that is capable of displaying color beyond the realm of object color, perceptually even beyond the spectrum locus of pure color. All this raises the question: just what is meant by perceptual gamut? To this end, the attributes of perceptual gamut are identified through psychometric testing and the color appearance models CIELAB and CIECAM02. Then, by way of demonstration, these attributes were manipulated to test their application in wide-gamut displays. Drawing on these perceptual attributes and their manipulation, on Ralph M. Evans’ concept of brilliance as an attribute of perception that extends beyond the realm of everyday experience, and on Y. Nayatani’s theoretical studies of brilliance, a method was developed for producing brighter, more colorful colors and deeper, darker colors while preserving object color perception – flesh tones in particular. The method was successfully demonstrated and tested on real images using psychophysical methods in the very practical application of expanding the gamut of sRGB into an emulation of the wide-gamut xvYCC encoding.
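    As a rough illustration of how three of the attributes named above (lightness, chroma, hue) fall out of the CIELAB model, the sketch below converts an 8-bit sRGB pixel to L*, C*ab, and hue angle h. The matrix coefficients and D65 white point are standard published values; the function itself is only an illustrative aid, not part of the work described.

```python
import math

def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB to CIELAB (D65), then derive chroma and hue."""
    # Inverse sRGB companding (gamma decode)
    def lin(u):
        u /= 255.0
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # Linear RGB -> XYZ using the sRGB primaries and D65 white
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # XYZ -> Lab relative to the D65 reference white
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1 / 3) if t > 216 / 24389 else (24389 / 27 * t + 16) / 116
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    L = 116 * fy - 16                  # lightness L*
    a = 500 * (fx - fy)
    bb = 200 * (fy - fz)
    C = math.hypot(a, bb)              # chroma C*ab
    h = math.degrees(math.atan2(bb, a)) % 360  # hue angle h (degrees)
    return L, C, h
```

    Note that CIELAB provides only the relative attributes (lightness, chroma, hue); the absolute attributes brightness and colorfulness require a full appearance model such as CIECAM02, which also accounts for the viewing conditions.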

    Human-centered display design: balancing technology & perception


    Gamut extension algorithm development and evaluation for the mapping of standard image content to wide-gamut displays

    Wide-gamut display technology provides an excellent opportunity to produce visually pleasing images, more so than in the past. However, several studies, including Laird and Heynderick, 2008, have shown that linearly mapping standard sRGB content to the gamut boundary of a given wide-gamut display may not yield optimal results. Therefore, several algorithms, including both linear and sigmoidal expansion algorithms, were developed and evaluated for observer preference, in an effort to define a single, versatile gamut expansion algorithm (GEA) that can be applied to current display technology and produce the most preferable images for observers. Preference results were obtained on two displays, both of which showed large scene dependencies. However, the sigmoidal GEAs (SGEAs) were competitive with the linear GEAs (LGEAs) and in many cases produced more pleasing reproductions. The SGEAs provide an excellent baseline that, with minor improvements, could be key to producing more impressive images on a wide-gamut display.
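    The linear-versus-sigmoidal distinction can be illustrated on a normalized chroma axis, where 1.0 is the wide-gamut boundary. The shape parameters `k` and `c0` below are hypothetical choices for illustration only, not the values used in the evaluated GEAs:

```python
import math

def linear_gea(c, gain=1.2):
    """Linear chroma expansion: scale toward the wider gamut, clip at 1."""
    return min(1.0, gain * c)

def sigmoidal_gea(c, k=6.0, c0=0.55):
    """Sigmoidal expansion: boost mid chroma while protecting near-neutral
    and already-saturated colors. k (steepness) and c0 (inflection point)
    are illustrative shape parameters."""
    s = 1.0 / (1.0 + math.exp(-k * (c - c0)))
    s0 = 1.0 / (1.0 + math.exp(k * c0))          # sigmoid value at c = 0
    s1 = 1.0 / (1.0 + math.exp(-k * (1.0 - c0)))  # sigmoid value at c = 1
    return (s - s0) / (s1 - s0)  # renormalized so 0 -> 0 and 1 -> 1
```

    The sigmoid's gentle slope near zero is what protects neutrals and skin tones: low-chroma inputs are expanded far less than under a uniform linear gain, which is one plausible reason the SGEAs were often preferred.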

    Depth-Assisted Semantic Segmentation, Image Enhancement and Parametric Modeling

    This dissertation addresses the problem of employing 3D depth information to solve a number of traditionally challenging computer vision/graphics problems. Humans can perceive depth in the 3D world, which enables them to reconstruct layouts, recognize objects, and understand the geometric space and semantic meaning of the visual world. It is therefore significant to explore how 3D depth information can be utilized by computer vision systems to mimic these abilities. This dissertation employs 3D depth information in three areas: scene understanding, image enhancement, and 3D reconstruction and modeling. For scene understanding, we present a framework for semantic segmentation and object recognition on urban video sequences using only dense depth maps recovered from the video. Five view-independent 3D features that vary with object class are extracted from the dense depth maps and used to segment and recognize different object classes in street-scene images. We demonstrate a scene parsing algorithm that uses only dense 3D depth information and outperforms approaches based on sparse 3D or 2D appearance features. For image enhancement, we present a framework that overcomes the imperfections of personal photographs of tourist sites using the rich information provided by large-scale internet photo collections (IPCs). By augmenting personal 2D images with 3D information reconstructed from IPCs, we address a number of traditionally challenging image enhancement tasks and achieve high-quality results with simple and robust algorithms. For 3D reconstruction and modeling, we focus on parametric modeling of flower petals, the most distinctive part of a plant. Their complex structure, severe occlusions, and wide variations make reconstructing their 3D models a challenging task.
    We overcome these challenges by combining data-driven modeling techniques with domain knowledge from botany. Taking a 3D point cloud of an input flower scanned from a single view, each segmented petal is fitted with a scale-invariant morphable petal shape model constructed from individually scanned 3D exemplar petals. Novel constraints based on botany studies are incorporated into the fitting process to realistically reconstruct occluded regions and maintain correct 3D spatial relations. The main contribution of the dissertation is the intelligent use of 3D depth information to solve traditionally challenging vision/graphics problems. By developing advanced algorithms that run automatically or with minimal user interaction, this dissertation demonstrates that the 3D depth computed behind multiple images contains rich information about the visual world and can therefore be intelligently utilized to recognize and understand the semantic meaning of scenes, efficiently enhance and augment single 2D images, and reconstruct high-quality 3D models.
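    The abstract does not enumerate the five view-independent features, so the sketch below computes two generic stand-ins (local depth-gradient magnitude and normalized depth) purely to illustrate the idea of deriving per-pixel features from a dense depth map:

```python
import math

def depth_features(depth):
    """Toy per-pixel features from a dense depth map (row-major list of
    lists, in meters). These two features -- local depth-gradient
    magnitude (a planarity cue) and normalized depth -- are illustrative
    stand-ins, not the five features used in the dissertation."""
    h, w = len(depth), len(depth[0])
    zmax = max(max(row) for row in depth)
    feats = []
    for v in range(1, h - 1):          # skip the border for central differences
        row = []
        for u in range(1, w - 1):
            dzdu = (depth[v][u + 1] - depth[v][u - 1]) / 2.0
            dzdv = (depth[v + 1][u] - depth[v - 1][u]) / 2.0
            slant = math.hypot(dzdu, dzdv)   # ~0 on fronto-parallel surfaces
            rel_depth = depth[v][u] / zmax   # depth normalized to the scene
            row.append((slant, rel_depth))
        feats.append(row)
    return feats
```

    Feature maps of this kind would then feed a per-pixel classifier; because they are computed from geometry rather than appearance, they are stable under the lighting and texture variation typical of street scenes.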

    Contours and contrast

    Contrast in photographic and computer-generated imagery communicates colour and lightness differences that would be perceived when viewing the represented scene. Due to depiction constraints, the amount of displayable contrast is limited, reducing the image's ability to accurately represent the scene. A local contrast enhancement technique called unsharp masking can overcome these constraints by adding high-frequency contours to an image that increase its apparent contrast. In three novel algorithms inspired by unsharp masking, specialized local contrast enhancements are shown to overcome the constraints of a limited dynamic range, overcome an achromatic palette, and improve the rendering of 3D shapes and scenes. The Beyond Tone Mapping approach restores original HDR contrast to its tone-mapped LDR counterpart by adding high-frequency colour contours to the LDR image while preserving its luminance. Apparent Greyscale is a multi-scale, two-step technique that first converts colour images and video to greyscale according to their chromatic lightness, then restores diminished colour contrast with high-frequency luminance contours. Finally, 3D Unsharp Masking performs scene-coherent enhancement by introducing 3D high-frequency luminance contours to emphasize the details, shapes, tonal range, and spatial organization of a 3D scene within the rendering pipeline. As a perceptual justification, it is argued that a local contrast enhancement made with unsharp masking is related to the Cornsweet illusion, and that this may explain its effect on apparent contrast.

    For many years, the realistic creation of virtual characters has been a central part of computer graphics research. Nevertheless, several problems have so far remained unsolved. Among them is the creation of character animations, which remains time-consuming under traditional skeleton-based approaches. A further challenge is the passive capture of actors wearing everyday clothing. Moreover, in contrast to the numerous skeleton-based approaches, only a few methods exist for processing and editing mesh animations. In this work we present algorithms that solve each of these tasks. Our first approach consists of two mesh-based methods for simplifying character animation. Although the kinematic skeleton is set aside, both methods can be integrated directly into the traditional pipeline and enable the creation of animations with lifelike body deformations. We then present three passive capture methods for body motion and acting performance that use a deformable 3D model to represent the scene. These methods can be used to jointly reconstruct temporally and spatially coherent geometry, motion, and surface textures, which may also vary over time. Recordings of loose, everyday clothing pose no problem. Furthermore, the high-quality reconstructions enable realistic rendering of 3D video sequences. Finally, two novel algorithms for processing mesh animations are described. While the first algorithm enables the fully automatic conversion of mesh animations into skeleton-based animations, the second allows the automatic conversion of mesh animations into so-called animation collages, a new artistic style of animation rendering. The methods described in this dissertation can be regarded as solutions to specific problems, but also as important building blocks of larger applications. Taken together, they form a powerful system for the accurate capture, manipulation, and realistic rendering of artistic performances, whose capabilities go beyond those of many related capture techniques. In this way, we can capture an actor's motion, time-varying details, and texture information and convert them into a fully annotated character animation that can be reused immediately and is also suitable for realistically rendering the actor from arbitrary viewpoints.
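    The unsharp-masking operation underlying the Contours and contrast work can be sketched in one dimension: subtract a blurred copy from the signal and add the residual back, producing the over- and undershoots at edges (Cornsweet-style countershading) that raise apparent contrast. A box blur stands in here for the Gaussian usually used:

```python
def box_blur(signal, radius=2):
    """Simple edge-clamped box blur -- a stand-in for the Gaussian
    typically used in unsharp masking."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def unsharp_mask(signal, amount=0.8, radius=2):
    """Add the high-frequency residual back to the signal, steepening
    edges while leaving flat regions untouched."""
    blurred = box_blur(signal, radius)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]
```

    Applied to a step edge, the output dips below the dark side and overshoots the bright side, which is exactly the high-frequency contour the abstract describes; flat regions are returned unchanged.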

    Preferred color correction for mixed taking-illuminant placement and cropping

    The growth of automatic layout capabilities for publications such as photo books and image-sharing websites enables consumers to create personalized presentations without much experience or the use of professional page design software. Automated color correction of images has been well studied over the years, but the methodology for determining how to correct images has almost exclusively treated images as independent, indivisible objects. In modern documents, such as photo books or web sharing sites, images are automatically placed on pages in juxtaposition to others, and some images are automatically cropped. Understanding how color correction preferences are affected by such complex arrangements has become important. A small number of photographs taken under a variety of illumination conditions were presented to observers both individually and in combinations. Cropped and uncropped versions of the shots were included. Observers set their preferred color balance and chroma for the images within the experiment. Analyses point toward a preference for higher chroma for most cropped images compared to the settings for the full-extent images. It is also shown that observers make different color balance choices when correcting an image in isolation versus correcting the same image in the presence of a second shot taken under a different illuminant. Across 84 responses, approximately 60% showed a tendency to choose image white points further from the display white point when multiple images from different taking illuminants were presented simultaneously than when the images were adjusted in isolation on the same display. Observers were also shown to preserve the relative white-point bias of the original taking illuminants.
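    A minimal sketch of the kind of white-point adjustment involved, assuming a von Kries-style per-channel scaling applied directly in RGB (a common simplification; production pipelines adapt in a cone space, and the experiment's actual adjustment interface is not described here). The `strength` parameter is a hypothetical knob that leaves a deliberate residual bias toward the taking illuminant, mirroring the partial-preservation preference reported above:

```python
def adapt_white_point(rgb, src_white, dst_white, strength=1.0):
    """Scale each channel so src_white maps toward dst_white.
    strength=1.0 fully adapts; strength=0.0 leaves the image unchanged,
    preserving the bias of the original taking illuminant."""
    gains = [(d / s) ** strength if s else 1.0
             for s, d in zip(src_white, dst_white)]
    # Clamp to the display's [0, 1] range after scaling
    return [min(1.0, c * g) for c, g in zip(rgb, gains)]
```

    With an intermediate `strength`, two images shot under different illuminants keep their relative white-point offset after correction, which is consistent with the observers' tendency to preserve the original illuminant bias.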

    Perceptually optimal boundaries for wide gamut TVs
