
    Calipso: Physics-based Image and Video Editing through CAD Model Proxies

    We present Calipso, an interactive method for editing images and videos in a physically coherent manner. Our main idea is to realize physics-based manipulations by running a full physics simulation on proxy geometries given by non-rigidly aligned CAD models. These simulations allow us to apply new, unseen forces to move or deform selected objects, to change physical parameters such as mass or elasticity, or even to add entirely new objects that interact with the rest of the underlying scene. In Calipso, the user makes edits directly in 3D; these edits are processed by the simulation and then transferred to the target 2D content through shape-to-image correspondences in a photo-realistic rendering process. To align the CAD models, we introduce an efficient CAD-to-image alignment procedure that jointly optimizes rigid and non-rigid alignment while preserving the high-level structure of the input shape. Moreover, the user can choose to exploit image flow to estimate scene motion, producing coherent physical behavior with ambient dynamics. We demonstrate Calipso's physics-based editing on a wide range of examples, producing myriad physical behaviors while preserving geometric and visual consistency.
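The core editing operation described above, applying a user-specified force to a proxy geometry and letting a simulation move it, can be sketched with a toy per-vertex integrator. Everything here (function names, parameters, the semi-implicit Euler scheme, the time step) is illustrative and is not the paper's actual deformable-body simulator:

```python
# Hypothetical sketch: push one vertex of a proxy mesh with an external
# force and integrate its motion with semi-implicit Euler.

def step(pos, vel, force, mass=1.0, dt=0.016):
    """Advance one proxy-mesh vertex by one time step."""
    # Acceleration from the user-applied force (Newton's second law).
    ax, ay, az = (f / mass for f in force)
    # Semi-implicit Euler: update velocity first, then position.
    vel = [vel[0] + ax * dt, vel[1] + ay * dt, vel[2] + az * dt]
    pos = [p + v * dt for p, v in zip(pos, vel)]
    return pos, vel

# Push a resting vertex along +x for 100 steps; it drifts in that direction.
p, v = [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]
for _ in range(100):
    p, v = step(p, v, force=(1.0, 0.0, 0.0))
```

In the paper's setting the simulation runs on the full aligned CAD proxy, and the resulting 3D motion is transferred back to the 2D image via the shape-to-image correspondences.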

    Efficient multi-bounce lightmap creation using GPU forward mapping

    Computer graphics can nowadays produce images in real time that are hard to distinguish from photographs of a real scene. One of the most important aspects in achieving this is the interaction of light with the materials in the virtual scene. The lighting computation can be separated into two parts. The first part concerns direct illumination, which is applied to all surfaces lit by a light source; the related algorithms have improved greatly over the last decades and, together with improvements in graphics hardware, can now produce realistic effects. The second part concerns indirect illumination, which describes the multiple reflections of light from each surface. In reality, light that hits a surface is never fully absorbed but is instead reflected back into the scene, and even this reflected light is reflected again and again until its energy is depleted. These multiple reflections make indirect illumination very expensive to compute. The first problem regarding indirect illumination is therefore how to simplify it so that it can be computed faster. Another question is where to compute it: it can either be computed in the fixed image that is created when rendering the scene, or it can be stored in a light map. The drawback of the first approach is that the results need to be recomputed for every frame in which the camera has changed. The second approach, on the other hand, has long been in use: once a static scene has been set up, the lighting situation is computed, regardless of how long it takes, and the result is stored in a light map. This is a texture atlas for the scene in which each surface point in the virtual scene corresponds to exactly one point in the 2D texture atlas. When displaying the scene with this approach, the indirect illumination does not need to be recomputed but is simply sampled from the light map.
The main contribution of this thesis is a technique that computes the indirect illumination of a scene at interactive rates and stores the result in a light atlas for visualization. To achieve this, we overcome two main obstacles. First, we need to be able to quickly project data from any given camera configuration into the parts of the texture that are currently used for visualizing the 3D scene. Since our approach to computing and storing indirect illumination requires a huge number of these projections, each one needs to be as fast as possible. We therefore introduce a technique that performs this projection entirely on the graphics card with a single draw call. Second, the reflections of light into the scene need to be computed quickly. We therefore separate the computation into two steps: one that quickly approximates the spreading of light through the scene, and a second that computes the visually smooth final result using the aforementioned projection technique. The final technique computes indirect illumination at interactive rates even for large scenes. It is furthermore flexible enough to let the user choose between high-quality results and fast computation. This allows the method to be used to quickly edit the lighting situation with high-speed previews and then compute the final result in full quality, still at interactive rates. The projection technique itself is highly flexible and also allows for fast painting onto objects and projecting data onto them, accounting for all perspective distortions and self-occlusions.
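The light-map idea above, precompute indirect illumination once and then merely sample it at display time, can be sketched as a plain texture-atlas lookup. The atlas layout, values, and nearest-neighbour filtering are invented for illustration and are not the thesis' GPU implementation:

```python
# Minimal sketch: indirect illumination stored in a 2D atlas, sampled
# per surface point by its UV coordinate instead of being recomputed.

def sample_lightmap(atlas, u, v):
    """Nearest-neighbour lookup of stored irradiance at UV in [0, 1]."""
    h = len(atlas)
    w = len(atlas[0])
    # Map UV to a texel index, clamping at the atlas border.
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return atlas[y][x]

# A 2x2 atlas: one bright texel (near a light) and three darker ones.
atlas = [[0.9, 0.2],
         [0.1, 0.4]]
indirect = sample_lightmap(atlas, 0.1, 0.1)  # texel (0, 0) -> 0.9
```

At render time each fragment would perform this lookup (with bilinear filtering on real hardware) instead of re-running the indirect-illumination solve.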

    Real-time Cinematic Design Of Visual Aspects In Computer-generated Images

    Creation of visually pleasing images has always been one of the main goals of computer graphics. Two components are necessary to achieve this goal: artists who design the visual aspects of an image (such as materials or lighting), and sophisticated algorithms that render the image. Traditionally, rendering has been of greater interest to researchers, while the design part has been deemed secondary. This has led to many inefficiencies, as artists, in order to create a stunning image, are often forced to resort to traditional, creativity-barring pipelines consisting of repeated rendering and parameter tweaking. Our work shifts the attention away from the rendering problem and focuses on design. We propose to combine non-physical editing with real-time feedback and provide artists with efficient ways of designing complex visual aspects such as global illumination or all-frequency shadows. We conform to existing pipelines by inserting our editing components into existing stages, thereby making the editing of visual aspects an inherent part of the design process. Many of the examples shown in this work have until now been extremely hard to achieve. The non-physical aspect of our work enables artists to express themselves in more creative ways, not limited by the physical parameters of current renderers. Real-time feedback allows artists to immediately see the effects of applied modifications, and compatibility with existing workflows enables easy integration of our algorithms into production pipelines.

    Virtual restoration and visualization changes through light: A review

    This article belongs to the Special Issue Optical Technologies Applied to Cultural Heritage. The virtual modification of the appearance of an object using lighting technologies has become very important in recent years, since the projection of light onto an object allows us to alter its appearance in a virtual and reversible way. Considering the non-contact requirement when analysing a work of art, these optical techniques have been used in the restoration of cultural heritage, allowing us to visualize a work as it was conceived by its author after a process of image acquisition and treatment. Furthermore, the technique of altering the appearance of objects through the projection of light has been used in projects with artistic or even educational purposes. This review covers the main studies of light projection as a technique to alter the appearance of objects, emphasizing the calibration methods used in each study, given the importance of correct calibration between devices for this technology. In addition, since the described technique consists of projecting light, and one of its applications relates to cultural heritage, we also describe studies that design and optimize lighting systems for the correct appreciation of works of art without altering their state of conservation. This work has been funded by project number RTI2018-097633-A-I00 of the Ministry of Science and Innovation of Spain, entitled 'Photonic restoration applied to cultural heritage: Application to Dali's painting Two Figures'.

    Appearance Modelling and Reconstruction for Navigation in Minimally Invasive Surgery

    Minimally invasive surgery is playing an increasingly important role in patient care. Whilst its direct patient benefit in terms of reduced trauma, improved recovery and shortened hospitalisation has been well established, there is a sustained need for improved training in existing procedures and for the development of new smart instruments to tackle the issues of visualisation, ergonomic control, and haptic and tactile feedback. For endoscopic intervention, the small field of view in the presence of complex anatomy can easily disorient the operator, as the tortuous access pathway is not always easy to predict and control with standard endoscopes. Effective training through simulation devices, based on either virtual-reality or mixed-reality simulators, can help to improve the spatial awareness, consistency and safety of these procedures. This thesis examines the use of endoscopic videos for both simulation and navigation purposes. More specifically, it addresses the challenging problem of how to build high-fidelity, subject-specific simulation environments for improved training and skills assessment. Issues related to mesh parameterisation and texture blending are investigated. With the maturity of computer vision in terms of both 3D shape reconstruction and localisation and mapping, vision-based techniques have enjoyed significant interest in recent years for surgical navigation. The thesis also tackles the problem of how to use vision-based techniques to provide a detailed 3D map and a dynamically expanded field of view, improving spatial awareness and avoiding operator disorientation. The key advantage of this approach is that it does not require additional hardware, and thus introduces minimal interference to the existing surgical workflow. The derived 3D map can be effectively integrated with pre-operative data, allowing both global and local 3D navigation by taking into account tissue structural and appearance changes.
Both simulation and laboratory-based experiments are conducted throughout this research to assess the practical value of the proposed methods.

    Space time pixels

    This paper reports the design of a networked system whose aim is to provide an intermediate virtual space that establishes a connection and supports interaction between multiple participants in two distant physical spaces. The project explores the potential of digital space to generate original social relationships between people whose current (spatial or social) position makes it difficult to establish innovative connections, and whether digital space can sustain such low-level connections over time by balancing the two contradictory needs of communication and anonymity. The generated intermediate digital space is a dynamic reactive environment in which time and space information from two physical places is superimposed to create a complex common ground where interaction can take place. The system provides awareness of activity in a distant space through an abstract mutable virtual environment, which can be perceived in several different ways according to the participants' will, varying from a simple dynamic background image, to a common public space at the junction of two private spaces, to a fully opened window onto the other space. The thesis is that the creation of an intermediary environment that operates as an activity abstraction filter between several users, and selectively communicates information, could give significance to the ambient data that people unconsciously transmit to others when co-existing. It can therefore generate a new layer of connections and original interactivity patterns, in contrast to a straightforward direct video and sound system, which, although functionally more feasible, preserves the existing social constraints that limit interaction to predefined patterns.

    Interaction with Media Facades: Design and Implementation of Interactive Systems for Large Urban Displays

    Media facades are a prominent example of the digital augmentation of urban spaces. They denote the concept of turning the surface of a building into a large-scale urban screen. Due to their enormous size, they require interaction at a distance, and they have a high level of visibility. Additionally, they are situated in a highly dynamic urban environment with rapidly changing conditions, which results in settings that are neither comparable nor reproducible. Altogether, this makes the development of interactive media facade installations a challenging task. This thesis investigates the design of interactive installations for media facades holistically. A theoretical analysis of the design space for interactive media facade installations is conducted to derive taxonomies that put such installations into context. Along with this, a set of observations and guidelines is provided to derive properties of the interaction from the technical characteristics of an interactive media facade installation. The thesis further provides three novel interaction techniques addressing the form factor and resolution of the facade, without the need to additionally instrument the space around it. Finally, the thesis contributes to the design of interactive media facade installations by providing a generalized media facade toolkit for rapid prototyping and simulation of interactive installations, independent of the media facade's size, form factor, technology and underlying hardware.

    Cuboid-maps for indoor illumination modeling and augmented reality rendering

    This thesis proposes a novel approach to indoor scene illumination modeling and augmented reality rendering. Our key observation is that an indoor scene is well represented by a set of rectangular spaces, where important illuminants reside on their boundary faces, such as a window on a wall or a ceiling light. Given a perspective image or a panorama and detected rectangular spaces as inputs, we estimate their cuboid shapes and infer illumination components for each face of the cuboids with a simple convolutional neural architecture. The process turns an image into a set of cuboid environment maps, each of which is a simple extension of a traditional cube-map. For augmented reality rendering, we simply take a linear combination of the inferred environment maps and the input image, producing surprisingly realistic illumination effects. This approach is simple and efficient, avoids flickering, and achieves quantitatively more accurate and qualitatively more realistic effects than competing, substantially more complicated systems.
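The rendering step described above, a linear combination of the inferred environment maps and the input image, can be sketched per pixel as follows. The pixel values and weights here are made up for illustration; in the thesis the environment maps themselves are inferred by the convolutional network:

```python
# Hypothetical sketch: blend an input image with per-face environment
# maps via a per-pixel weighted sum (images flattened to 1D for brevity).

def blend(image, env_maps, weights):
    """Return image + sum_i weights[i] * env_maps[i], pixel by pixel."""
    assert len(env_maps) == len(weights)
    out = []
    for i, px in enumerate(image):
        acc = px
        for env, w in zip(env_maps, weights):
            acc += w * env[i]
        out.append(acc)
    return out

image = [0.5, 0.5, 0.5]          # flattened grayscale input pixels
maps  = [[0.2, 0.0, 0.1],        # e.g. contribution of a window face
         [0.0, 0.3, 0.0]]        # e.g. contribution of a ceiling light
result = blend(image, maps, weights=[0.5, 1.0])
```

In a real renderer the same combination would run per color channel on full-resolution maps, which is why the approach avoids the flickering of per-frame illumination re-estimation.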

    Applications of Surface Metrology to Issues in Art

    This work investigates applying surface metrology techniques to issues in art, such as restoration and identification. It seeks possible areas of collaboration between the Worcester Art Museum and Worcester Polytechnic Institute's Surface Metrology Lab. Surface metrology is the study of the measurement and analysis of surface textures, or roughness. We have completed an experiment demonstrating that scanning laser microscopy and scale-sensitive fractal analysis, applied to paintings, can discriminate brush types and paints.
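The scale-sensitive idea can be illustrated on a 1D height profile: measure the profile's length with rulers of different spans, and note that rougher textures grow longer, faster, as the measuring scale shrinks. The profiles and scales below are invented for illustration; the lab's actual method is area-scale fractal analysis applied to microscopy data:

```python
# Illustrative sketch of scale-sensitive length measurement on a
# roughness profile (a 1D analogue of area-scale analysis).
import math

def profile_length(heights, dx, step):
    """Length of the profile measured with a ruler spanning `step` samples."""
    total = 0.0
    for i in range(0, len(heights) - step, step):
        dz = heights[i + step] - heights[i]
        total += math.hypot(step * dx, dz)  # straight-line ruler segment
    return total

rough  = [0.0, 0.4, -0.3, 0.5, -0.2, 0.3, -0.4, 0.2, 0.0]
smooth = [0.0, 0.05, 0.1, 0.05, 0.0, 0.05, 0.1, 0.05, 0.0]
# At a fine scale the rough profile measures much longer than the smooth one;
# at coarser scales the two lengths converge toward the nominal length.
fine_rough  = profile_length(rough, dx=1.0, step=1)
fine_smooth = profile_length(smooth, dx=1.0, step=1)
```

The ratio of measured length to nominal length as a function of scale is the kind of signature that can discriminate textures such as brush types.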

    The Experience of a Lifetime: Interactive Digital Experience Beyond the Screen

    Screen-based digital experience design is blooming among local businesses in Metro Vancouver, along with the increased pervasiveness of information technologies and new digital products in contemporary society. However, there are significantly fewer cases and related businesses around tangible interactive digital experiences, in which tangible objects and physical spaces replace the screen as the site of interaction. This thesis project aims to explore the particular qualities of tangible interactive experiences compared to digital experiences on a screen or in virtual space. Additionally, the author investigates how to leverage user experience design methodologies in the process of designing an experimental interactive experience. In this practice-based exploration, the author prototyped four interactive digital experiences using different interactive technologies and tools tailored to different use-case scenarios: 1. an interactive offline retail experience, 2. a "magical" and playful painting, 3. a room-scale interactive installation, and 4. an immersive meditation activity. These projects illustrate and explore the implementation of tangible interactions in digital experience design. During the development process, the author applied several user experience design methodologies in the projects, including field research, interviews, questionnaires, and design probes, to develop a workable framework for designing tangible interactive experiences throughout the research project. The author aims to outline key implications of applying principles of user experience design to the field of tangible interactive environments. In the process, the author argues that tangible interactive design is indispensable to a successful and engaging digital experience, and thus worth investing in and exploring further in Vancouver's marketplace.