
    Perceptually Meaningful Image Editing: Depth

    We introduce the concept of perceptually meaningful image editing and present two techniques for manipulating the apparent depth of objects in an image. The user loads an image, selects an object and specifies whether the object should appear closer or further away. The system automatically determines target values for the object and/or background that achieve the desired depth change. These depth editing operations, based on techniques used by traditional artists, manipulate either the luminance or color temperature of different regions of the image. By performing blending in the gradient domain and reconstruction with a Poisson solver, the appearance of false edges is minimized. The results of a preliminary user study, designed to evaluate the effectiveness of these techniques, are also presented
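
    The gradient-domain step lends itself to a compact illustration. The Python/NumPy sketch below is a minimal, hypothetical version of the idea rather than the paper's actual implementation: the selected object's luminance is shifted by a user-chosen offset, gradients that cross the mask boundary are replaced with the original image's gradients so the seam carries no artificial step, and the image is reconstructed with a screened-Poisson Jacobi solve. The function name, the screening weight lam, and the choice of a screened rather than plain Poisson formulation are assumptions, not details from the paper.

        import numpy as np

        def screened_poisson_blend(base, delta, mask, lam=0.05, iters=500):
            """Shift the luminance of the masked object by `delta`, then hide the
            resulting false edge by blending in the gradient domain (illustrative
            sketch; `base` is a grayscale image in [0, 1], `mask` is boolean)."""
            mask = np.asarray(mask, dtype=bool)
            target = base + delta * mask        # naive composite with a false edge

            # Forward-difference gradients of the naive composite.
            gx = np.diff(target, axis=1, append=target[:, -1:])
            gy = np.diff(target, axis=0, append=target[-1:, :])

            # Where a gradient crosses the mask boundary, keep the original image's
            # gradient so no artificial step is introduced at the seam.
            seam_x = np.diff(mask, axis=1, append=mask[:, -1:])
            seam_y = np.diff(mask, axis=0, append=mask[-1:, :])
            gx[seam_x] = np.diff(base, axis=1, append=base[:, -1:])[seam_x]
            gy[seam_y] = np.diff(base, axis=0, append=base[-1:, :])[seam_y]

            # Divergence of the guidance field.
            div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))

            # Jacobi iterations for the screened Poisson equation
            # (lam*I - Laplacian) f = lam*target - div.
            f = target.astype(np.float64)
            for _ in range(iters):
                neighbors = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                             np.roll(f, 1, 1) + np.roll(f, -1, 1))
                f = (neighbors + lam * target - div) / (4.0 + lam)
            return np.clip(f, 0.0, 1.0)

        # Hypothetical usage: darken the selected object so it appears further away.
        # result = screened_poisson_blend(luminance, delta=-0.15, mask=object_mask)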

    The Real Effect of Warm-Cool Colors

    The phenomenon of warmer colors appearing nearer in depth to viewers than cooler colors has been studied extensively by psychologists and other vision researchers. The vast majority of these studies have asked human observers to view physically equidistant, colored stimuli and compare them for relative depth. However, in most cases, the stimuli presented were rather simple: straight colored lines, uniform color patches, point light sources, or symmetrical objects with uniform shading. Additionally, the colors used were typically highly saturated. Although such stimuli are useful in isolating and studying depth cues in certain contexts, they leave open the question of whether the human visual system operates similarly for realistic objects. This paper presents the results of an experiment designed to explore the color-depth relationship for realistic, colored objects with varying shading and contour

    The Effect of Object Color on Depth Ordering

    The relationship between color and perceived depth for realistic, colored objects with varying shading was explored. Background: Studies have shown that warm-colored stimuli tend to appear nearer in depth than cool-colored stimuli. The majority of these studies asked human observers to view physically equidistant, colored stimuli and compare them for relative depth. However, in most cases, the stimuli presented were rather simple: straight colored lines, uniform color patches, point light sources, or symmetrical objects with uniform shading. Additionally, the colors were typically highly saturated. Although such stimuli are useful for isolating and studying depth cues in certain contexts, they leave open the question of whether the human visual system operates similarly for realistic objects. Method: Participants were presented with all possible pairs from a set of differently colored objects and were asked to select the object in each pair that appears closest to them. The objects were presented on a standard computer screen, against 4 different uniform backgrounds of varying intensity. Results: Our results show that the relative strength of color as a depth cue increases when the colored stimuli are presented against darker backgrounds and decreases when presented against lighter backgrounds. Conclusion: Color does impact our depth perception even though it is a relatively weak indicator and is not necessarily the overriding depth cue for complex, realistic objects. Application: Our observations can be used to guide the selection of color to enhance the perceived depth of objects presented on traditional display devices and newer immersive virtual environments
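
    The paired-comparison design can be summarized with a short analysis sketch. The Python snippet below is illustrative only and assumes hypothetical trial records of the form (background, color_a, color_b, chosen); it tallies, for each background, how often every object color was judged nearer, which is the per-background statistic the conclusions above rest on. The color and background names are placeholders, not the study's actual stimuli.

        from collections import defaultdict
        from itertools import combinations

        def nearer_rates(trials):
            """trials: iterable of (background, color_a, color_b, chosen) tuples.
            Returns {background: {color: fraction of its pairings judged nearer}}."""
            wins = defaultdict(lambda: defaultdict(int))
            shown = defaultdict(lambda: defaultdict(int))
            for background, color_a, color_b, chosen in trials:
                shown[background][color_a] += 1
                shown[background][color_b] += 1
                wins[background][chosen] += 1
            return {bg: {c: wins[bg][c] / shown[bg][c] for c in shown[bg]}
                    for bg in shown}

        # Hypothetical stimulus schedule: every pair of colors against every background.
        colors = ["red", "orange", "yellow", "green", "blue"]
        backgrounds = ["black", "dark gray", "light gray", "white"]
        schedule = [(bg, a, b) for bg in backgrounds for a, b in combinations(colors, 2)]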

    Light field image processing: an overview

    Light field imaging has emerged as a technology that allows capturing richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene by integrating over the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher-dimensional representation of visual data offers powerful capabilities for scene understanding, and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, material classification, etc. On the other hand, the high dimensionality of light fields also brings up new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We focus on all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data
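
    As a concrete instance of the post-capture refocusing mentioned above, synthetic refocusing can be written as a shift-and-sum over the angular dimensions of the 4D light field. The NumPy sketch below is a generic illustration rather than code from any of the surveyed papers; the (u, v, y, x) array layout, the integer-pixel shifts, and the slope parameter that selects the focal plane are assumptions.

        import numpy as np

        def refocus(lightfield, slope):
            """Shift-and-sum synthetic refocusing.

            lightfield: array of shape (U, V, H, W), one grayscale sub-aperture
            image per angular sample (u, v). Each view is shifted in proportion
            to its offset from the central view, scaled by `slope`, and all
            shifted views are averaged to form the refocused image."""
            U, V, H, W = lightfield.shape
            cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
            out = np.zeros((H, W), dtype=np.float64)
            for u in range(U):
                for v in range(V):
                    dy = int(round(slope * (u - cu)))
                    dx = int(round(slope * (v - cv)))
                    out += np.roll(lightfield[u, v], shift=(dy, dx), axis=(0, 1))
            return out / (U * V)

        # Hypothetical usage: sweep `slope` to move the synthetic focal plane.
        # focal_stack = [refocus(lf, s) for s in np.linspace(-2.0, 2.0, 9)]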

    Depicting shape, materials and lighting: observation, formulation and implementation of artistic principles

    The appearance of a scene results from complex interactions between the geometry, materials and lights that compose that scene. While Computer Graphics algorithms are now capable of simulating these interactions, doing so comes at the cost of tedious 3D modeling of a virtual scene, which only well-trained artists can do. In contrast, photographs allow the instantaneous capture of a scene, but shape, materials and lighting are difficult to manipulate directly in the image. Drawings can also suggest real or imaginary scenes with a few lines, but creating convincing illustrations requires significant artistic skills. The goal of my research is to facilitate the creation and manipulation of shape, materials and lighting in drawings and photographs, for laymen and professional artists alike. This document first presents my work on computer-assisted drawing, where I proposed algorithms to automate the depiction of materials in line drawings as well as to estimate a 3D model from design sketches. I also worked on user interfaces to assist beginners in learning traditional drawing techniques. Through the development of these projects I have formalized a general methodology to observe how artists work, deduce artistic principles from these observations, and implement these principles as algorithms. In the second part of this document I present my work on relighting multiple photographs of a scene, for which we first need to estimate the materials and lighting that compose that scene. The main novelty of our approach is to combine image analysis and lighting simulation in order to reason about the scene despite the lack of an accurate 3D model

    Fehlerkaschierte Bildbasierte Darstellungsverfahren

    Creating photo-realistic images has been one of the major goals in computer graphics since its early days. Instead of modeling the complexity of nature with standard modeling tools, image-based approaches aim at exploiting real-world footage directly, as they are photo-realistic by definition. A drawback of these approaches has always been that the composition or combination of different sources is a non-trivial task, often resulting in annoying visible artifacts. In this thesis we focus on different techniques to diminish visible artifacts when combining multiple images in a common image domain. The results are either novel images, when dealing with the composition task of multiple images, or novel video sequences rendered in real-time, when dealing with video footage from multiple cameras.

    Efficient image-based rendering

    Recent advancements in real-time ray tracing and deep learning have significantly enhanced the realism of computer-generated images. However, conventional 3D computer graphics (CG) can still be time-consuming and resource-intensive, particularly when creating photo-realistic simulations of complex or animated scenes. Image-based rendering (IBR) has emerged as an alternative approach that utilizes pre-captured images from the real world to generate realistic images in real time, eliminating the need for extensive modeling. Although IBR has its advantages, it faces challenges in providing the same level of control over scene attributes as traditional CG pipelines and in accurately reproducing complex scenes and objects with different materials, such as transparent objects. This thesis endeavors to address these issues by harnessing the power of deep learning and incorporating the fundamental principles of graphics and physically based rendering. It offers an efficient solution that enables interactive manipulation of real-world dynamic scenes captured from sparse views, lighting positions, and times, as well as a physically based approach that facilitates accurate reproduction of the view-dependent effects resulting from the interaction between transparent objects and their surrounding environment. Additionally, this thesis develops a visibility metric that can identify artifacts in the reconstructed IBR images without observing the reference image, thereby contributing to the design of an effective IBR acquisition pipeline. Lastly, a perception-driven rendering technique is developed to provide high-fidelity visual content in virtual reality displays while retaining computational efficiency.

    An Art Educators' Perception of an Art Professional Development Workshop

    There are no guidelines in South Carolina for developing workshops that reflect the needs of art educators, and there are no tools to evaluate and support their professional development. The problem is a lack of informative, substantive, and academically oriented art inservices that are standards-based and focused on the enhancement of pedagogy, teaching strategies, and content. The purpose of this case study was to explore participants' perceptions of an art professional development workshop as an approach to examining art standards, instructional strategies, and policy changes. Dewey's experiential theory served as the conceptual framework. A purposeful sample of 10 art educators who attended a district-sponsored professional development workshop participated in this study. After the workshop, data about educators' perceptions of the inservice were collected through a beta test and a focus group with 2 participants, 1 open-ended questionnaire with 8 participants, and a workshop observation with 20 participants. Data were analyzed using comparative analysis to identify patterns. Member checking and triangulation were used to verify the data and control bias. Five themes emerged from the data: adult-centered hands-on learning, professional development experiences, grants, collaboration and networking, and best practices. This study contributes to social change by showing the importance of ongoing adult-centered, research-based, hands-on professional development for educators addressing visual art standards, practice, instructional strategies, policy changes, and the facilitation of student-centered activities

    Higher level techniques for the artistic rendering of images and video

    EThOS - Electronic Theses Online Service, United Kingdom