    Analyzing interfaces and workflows for light field editing

    With the increasing number of available consumer light field cameras, such as Lytro, Raytrix, or Pelican Imaging, this new form of photography is progressively becoming more common. However, there are still very few tools for light field editing, and the interfaces to create those edits remain largely unexplored. Given the extended dimensionality of light field data, it is not clear what the most intuitive interfaces and optimal workflows are, in contrast with well-studied two-dimensional (2-D) image manipulation software. In this work, we provide a detailed description of subjects' performance and preferences for a number of simple editing tasks, which form the basis for more complex operations. We perform a detailed state sequence analysis and hidden Markov model analysis based on the sequences of tools and interaction paradigms users employ while editing light fields. These insights can aid researchers and designers in creating new light field editing tools and interfaces, thus helping to close the gap between 4-D and 2-D image editing.
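
    As a concrete illustration of the analysis described above, the following Python sketch fits a discrete-observation hidden Markov model to sequences of editing-tool IDs using the hmmlearn library. The tool vocabulary, session data, and the choice of three latent states are illustrative assumptions, not the study's actual setup.

    # Sketch: fitting an HMM to tool-usage sequences (illustrative data).
    import numpy as np
    from hmmlearn import hmm  # pip install hmmlearn

    TOOLS = {"select": 0, "depth_pick": 1, "paint": 2, "refocus": 3}

    # Each session is the ordered list of tools a subject used while editing.
    sessions = [
        ["select", "depth_pick", "paint", "paint", "refocus"],
        ["select", "paint", "depth_pick", "paint"],
    ]
    X = np.concatenate([[TOOLS[t] for t in s] for s in sessions]).reshape(-1, 1)
    lengths = [len(s) for s in sessions]

    # Three latent "workflow states" is an arbitrary choice for this sketch.
    model = hmm.CategoricalHMM(n_components=3, n_iter=100, random_state=0)
    model.fit(X, lengths)
    print(model.transmat_)                 # transitions between latent states
    print(model.predict(X[:lengths[0]]))   # decoded state path for session 1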

    Transferring Image-based Edits for Multi-Channel Compositing

    A common way to generate high-quality product images is to start with a physically-based render of a 3D scene, apply image-based edits on individual render channels, and then composite the edited channels together (in some cases, on top of a background photograph). This workflow requires users to manually select the right render channels, prescribe channel-specific masks, and set appropriate edit parameters. Unfortunately, such edits cannot be easily reused for global variations of the original scene, such as a rigid-body transformation of the 3D objects or a modified viewpoint, which discourages iterative refinement of both global scene changes and image-based edits. We propose a method to automatically transfer such user edits across variations of object geometry, illumination, and viewpoint. This transfer problem is challenging since many edits may be visually plausible but non-physical, with a successful transfer dependent on an unknown set of scene attributes that may include both photometric and non-photometric features. To address this challenge, we present a transfer algorithm that extends the image analogies formulation to include an augmented set of photometric and non-photometric guidance channels and, more importantly, adaptively estimates weights for the various candidate channels in a way that matches the characteristics of each individual edit. We demonstrate our algorithm on a variety of complex edit-transfer scenarios for creating high-quality product images.
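
    At the core of image-analogies-style matching is a patch distance accumulated over guidance channels. The NumPy sketch below shows such a weighted multi-channel patch cost; the channel names and fixed weights are illustrative, whereas the paper's contribution is to estimate these weights adaptively for each edit.

    # Sketch: weighted multi-channel patch distance (image-analogies style).
    import numpy as np

    def patch_cost(src, dst, p, q, weights, k=5):
        """Weighted SSD between k x k patches centered at p (in src) and q (in dst).
        src, dst: dicts mapping channel name -> H x W x C float arrays.
        p, q must lie at least k//2 pixels away from the image borders."""
        r = k // 2
        cost = 0.0
        for name, w in weights.items():
            a = src[name][p[0]-r:p[0]+r+1, p[1]-r:p[1]+r+1]
            b = dst[name][q[0]-r:q[0]+r+1, q[1]-r:q[1]+r+1]
            cost += w * np.sum((a - b) ** 2)
        return cost

    # Illustrative guidance channels; the paper adapts these weights per edit.
    weights = {"color": 1.0, "normals": 0.5, "object_id": 2.0}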

    Artistic Path Space Editing of Physically Based Light Transport

    The generation of realistic images is an important goal of computer graphics, with applications in the feature film industry, architecture, and medicine, among others. Physically based rendering, which has recently found broad acceptance across applications, relies on the numerical simulation of light transport along propagation paths prescribed by geometric optics, a model that suffices to achieve photorealism for typical scenes. Overall, the computer-assisted authoring of images and animations with well-crafted, theoretically grounded shading has become much simpler today. In practice, however, attention to details such as the structure of the output device also matters, and subproblems such as efficient physically based rendering in participating media are still far from being considered solved. Furthermore, image synthesis should be seen as part of a broader context: the effective communication of ideas and information. Whether it is the form and function of a building, the medical visualization of a computed tomography scan, or the mood of a film sequence, messages in the form of digital images are omnipresent today. Unfortunately, the spread of the simulation-oriented methodology of physically based rendering has generally led to a loss of the intuitive, fine-grained, and local artistic control over the final image content that was available in earlier, less strict paradigms. The contributions of this dissertation cover different aspects of image synthesis: first, fundamental subpixel rendering as well as efficient rendering methods for participating media. At the core of the work, however, are approaches for effectively understanding light propagation visually, enabling local artistic intervention while achieving consistent and plausible results at the global level. The key idea is to perform visualization and editing of light directly in the "path space" that encompasses all possible light paths. This stands in contrast to state-of-the-art methods, which either operate in image space or are tailored to specific, isolated illumination effects such as perfect mirror reflections, shadows, or caustics. Evaluation of the presented methods has shown that they can solve real-world image generation problems in film production.
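
    One concrete way to picture operations in path space, though not necessarily the mechanism proposed in this thesis, is to select light paths with Heckbert-style regular expressions over vertex types, as in the toy Python sketch below.

    # Sketch: selecting light paths by Heckbert-style regular expressions.
    # Vertex types: L (light), S (specular), D (diffuse), E (eye).
    # "LS+DE" matches caustic-like paths, for example.
    import re

    def select_paths(paths, expr):
        """paths: vertex-type strings such as 'LDSE'; expr: regex over L, S, D, E."""
        return [p for p in paths if re.fullmatch(expr, p)]

    paths = ["LDE", "LSDE", "LSSDE", "LDDE"]
    print(select_paths(paths, "LS+DE"))  # -> ['LSDE', 'LSSDE']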

    Imaging light transport at the femtosecond scale

    Paper, milk, clouds, and white paint share a common property: they are opaque disordered media through which light scatters randomly rather than propagating in a straight path. In very thick and turbid media, light eventually propagates in a ‘diffusive’ way, much as tea infuses through hot water. Frequently, though, a material is neither perfectly opaque nor transparent, and the simple diffusion model does not hold. In this work, we developed a novel optical-gating setup that allowed us to observe light transport in scattering media with sub-ps time resolution. An array of unexplored aspects of light propagation emerged from this spatio-temporal description, unveiling transport regimes that were previously inaccessible due to the extreme time scales involved and the lack of analytical models.
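
    For reference, the diffusion-theory prediction that such time-resolved measurements deviate from can be evaluated directly. The Python sketch below implements the standard infinite-medium, time-domain Green's function of the diffusion equation; the optical parameters are illustrative placeholders.

    # Sketch: time-resolved photon fluence predicted by diffusion theory
    # (infinite homogeneous medium; parameter values are illustrative).
    import numpy as np

    def fluence(r_mm, t_ps, mu_a=0.001, mu_s_prime=1.0, n=1.33):
        """Fluence at distance r_mm [mm] and time t_ps [ps].
        mu_a, mu_s_prime: absorption / reduced scattering coefficients [1/mm]."""
        v = 0.2998 / n                         # speed of light in medium [mm/ps]
        D = 1.0 / (3.0 * (mu_a + mu_s_prime))  # diffusion coefficient [mm]
        return (v / (4 * np.pi * D * v * t_ps) ** 1.5
                * np.exp(-r_mm**2 / (4 * D * v * t_ps) - mu_a * v * t_ps))

    t = np.linspace(1.0, 500.0, 500)           # ps
    curve = fluence(r_mm=10.0, t_ps=t)         # compare against measured decay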

    Learning Visual Appearance: Perception, Modeling and Editing

    Visual appearance determines how we understand an object or image and is therefore a fundamental aspect of digital content creation. It is a general term encompassing others such as material appearance, defined as the impression we have of a material, which involves a physical interaction between light and matter and the way our visual system is able to perceive it. However, computationally modeling the behavior of our visual system is a difficult task, among other reasons because no definitive, unified theory of human visual perception exists. Moreover, although we have developed algorithms capable of faithfully modeling the interaction between light and matter, there is a disconnect between the physical parameters these algorithms use and the perceptual parameters the human visual system understands. This makes manipulating these physical representations and their interactions a tedious and costly task, even for expert users. This thesis seeks to improve our understanding of material appearance perception and to use that knowledge to improve existing algorithms for visual content generation. Specifically, the thesis makes contributions in three areas: proposing new computational models for measuring appearance similarity; investigating the interplay between illumination and geometry; and developing intuitive applications for appearance manipulation, specifically for relighting humans and for editing material appearance. The first part of the thesis explores methods for measuring appearance similarity. Measuring how similar two materials, or two images, are is a classic problem in visual computing fields such as computer vision and computer graphics. We first address the problem of material appearance similarity, proposing a deep learning method that combines images with subjective judgments of material similarity collected through user studies. We also explore the problem of similarity between icons; in this second case we use siamese neural networks, and the style and identity conveyed by artists play a key role in the similarity measure. The second part advances our understanding of how confounding factors affect the perception of material appearance. Two key confounding factors are object geometry and scene illumination. We begin by investigating the effect of these factors on material recognition through a series of experiments and statistical studies. We also investigate the effect of object motion on the perception of material appearance. The third part explores intuitive applications for manipulating visual appearance. First, we address the problem of relighting humans: we propose a new formulation of the problem and, based on it, design and train a deep neural network model to relight a scene. Finally, we address the problem of intuitive material editing: we collect human judgments on the perception of different attributes and present a deep neural network model capable of realistically editing materials simply by varying the values of the collected attributes.
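
    As an illustration of the siamese approach mentioned above, the following PyTorch sketch trains an embedding network on pairwise human similarity judgments with a contrastive loss. The architecture, loss, and margin are illustrative choices, not the thesis's exact models.

    # Sketch: siamese embedding trained on human similarity judgments.
    import torch
    import torch.nn as nn

    class Embedder(nn.Module):
        def __init__(self, dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))

        def forward(self, x):  # unit-length embeddings, shared by both branches
            return nn.functional.normalize(self.net(x), dim=1)

    def contrastive_loss(za, zb, same, margin=0.5):
        """same = 1 where humans judged the pair similar, else 0."""
        d = (za - zb).norm(dim=1)
        return (same * d**2 + (1 - same) * (margin - d).clamp(min=0)**2).mean()

    emb = Embedder()
    a, b = torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64)
    same = torch.randint(0, 2, (8,)).float()   # placeholder judgments
    loss = contrastive_loss(emb(a), emb(b), same)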

    Assistive visual content creation tools via multimodal correlation analysis

    Visual imagery is ubiquitous in society and can take various formats: from 2D sketches and photographs to photorealistic 3D renderings and animations. The creation processes for each of these mediums have their own unique challenges and methodologies that artists need to overcome and master. For example, to depict a 3D scene in a 2D drawing, an artist needs to understand foreshortening effects to position and scale objects accurately on the page; or, when modeling 3D scenes, artists need to understand how light interacts with objects and materials to achieve a desired appearance. Many of these tasks can be complex, time-consuming, and repetitive for content creators. The goal of this thesis is to develop tools that relieve artists of some of these issues and assist them in the creation process. The key hypothesis is that understanding the relationships between the multiple signals present in the scene being created enables such assistive tools. This thesis proposes three assistive tools. First, we present an image degradation model for depth-augmented image editing to help evaluate the quality of the image manipulation. Second, we address the problem of teaching novices to draw objects accurately by automatically generating easy-to-follow sketching tutorials for arbitrary 3D objects. Finally, we propose a method to automatically transfer 2D parametric user edits made to rendered 3D scenes to global variations of the original scene.

    Novel Methods and Algorithms for Presenting 3D Scenes

    In recent years, improvements in the acquisition and creation of 3D models gave rise to an increasing availability of 3D content and to a widening of the audience such content is created for, which brought into focus the need for effective ways to visualize and interact with it. Until recently, virtual inspection of a 3D object or navigation inside a 3D scene was carried out using human-machine interaction (HMI) metaphors controlled through mouse and keyboard events. However, this interaction approach may be cumbersome for the general audience. Furthermore, the inception and spread of touch-based mobile devices, such as smartphones and tablets, redefined the interaction problem entirely, since neither mouse nor keyboard is available anymore. The problem is made even worse by the fact that these devices are typically less powerful than desktop machines, while high-quality rendering is a computationally intensive task. In this thesis, we present a series of novel methods for the easy presentation of 3D content, both when it is already available in digitized form and when it must be acquired from the real world by image-based techniques. In the first case, we propose a method that takes as input the 3D scene of interest and an example video, and automatically produces a video of the input scene that resembles the given example. In other words, our algorithm allows the user to replicate an existing video, for example one created by a professional animator, on a different 3D scene. In the context of image-based techniques, exploiting the inherent spatial organization of photographs taken for the 3D reconstruction of a scene, we propose an intuitive interface for smooth stereoscopic navigation of the acquired scene, providing an immersive experience without the need for a complete 3D reconstruction. Finally, we propose an interactive framework for improving low-quality 3D reconstructions obtained through image-based reconstruction algorithms. Using a few strokes on the input images, the user can specify high-level geometric hints to improve incomplete or noisy reconstructions, which commonly arise for objects such as buildings, streets, and other human-made functional elements.

    On the Modelling of Hyperspectral Light and Skin Interactions and the Simulation of Skin Appearance Changes Due to Tanning

    The distinctive visual attributes of human skin are largely determined by its interactions with light across different spectral domains. Accordingly, the modelling of these interactions has been the object of extensive investigations in numerous fields for a diverse range of applications. However, only a relatively small portion of these research efforts has been directed toward the comprehensive simulation of hyperspectral light and skin interactions, as well as the associated temporal changes in skin appearance, which can be caused by a myriad of time-dependent photobiological phenomena. In this thesis, we explore this area of research. Initially, we present the first hyperspectral model designed for the predictive rendering of skin appearance attributes in the ultraviolet, visible and infrared domains. We then describe a novel physiologically-based framework for the simulation and visualization of skin tanning dynamics, arguably the most prominent and persistent of such relevant time-dependent phenomena. The proposed model incorporates the intrinsic bio-optical properties of human skin affecting hyperspectral light transport, including the particle nature and distribution patterns of the main light attenuation agents found within the cutaneous tissues. Accordingly, it accounts for phenomena that significantly affect skin spectral signatures within and outside the visible domain, such as detour and sieve effects, which are overlooked by existing skin appearance models. Using a first-principles approach, this model computes the surface and subsurface scattering components of skin reflectance taking into account not only the wavelength and the illumination geometry, but also the positional dependence of the reflected light. Hence, the spectral and spatial distributions of light interacting with human skin can be comprehensively represented in terms of hyperspectral reflectance and scattering distribution functions, respectively. The proposed tanning simulation framework incorporates algorithms that explicitly account for the connections between spectrally-dependent light stimuli and time-dependent physiological changes occurring within the cutaneous tissues. For example, it utilizes the above hyperspectral model as a modular component to evaluate the wavelength-dependence of the tanning phenomenon. This enables the effective simulation of the skin's main adaptive mechanisms to ultraviolet radiation as well as its responses to distinct light exposure regimes. We demonstrate the predictive capabilities of this framework through quantitative and qualitative comparisons of simulated data with measurements and experimental observations reported in the scientific literature. We also provide image sequences depicting skin appearance changes elicited by time-dependent variations in skin biophysical parameters. The work presented in this thesis is expected to contribute to advances in realistic image synthesis by broadening the spectral and temporal scope of material appearance modelling, and to provide a testbed for interdisciplinary investigations involving the visualization of skin responses to photoinduced processes.
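
    A small worked example of the last step of such a pipeline: collapsing a hyperspectral reflectance curve into a displayable color via standard colorimetric integration. The function below is our own illustration; the CIE color matching functions and illuminant spectrum must be supplied as tabulated data sampled at the same wavelengths.

    # Sketch: spectral reflectance -> CIE XYZ (standard colorimetry; the
    # tabulated cmf and illuminant data are assumed to be provided).
    import numpy as np

    def reflectance_to_xyz(wavelengths, reflectance, illuminant, cmf):
        """wavelengths [nm]; reflectance in [0, 1]; illuminant: SPD samples;
        cmf: N x 3 array of CIE color matching functions at those wavelengths."""
        stimulus = reflectance * illuminant
        k = 100.0 / np.trapz(illuminant * cmf[:, 1], wavelengths)  # normalize Y
        return k * np.trapz(stimulus[:, None] * cmf, wavelengths, axis=0)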
