
    Dynamic Illumination for Augmented Reality with Real-Time Interaction

    Current augmented and mixed reality systems suffer from a lack of correct illumination modeling, in which virtual objects are rendered under the same lighting conditions as the real environment. While the entertainment industry achieves astonishing results across multiple media forms, the process is mostly accomplished offline. The illumination information extracted from the physical scene is used to interactively render the virtual objects, which results in more realistic output in real time. In this paper, we present a method that detects the physical illumination in a dynamic scene and then uses the extracted illumination to render the virtual objects added to the scene. The method has three steps that are assumed to run concurrently in real time. The first is the estimation of the direct illumination (incident light) from the physical scene using computer vision techniques applied to a 360° live-feed camera connected to the AR device. The second is the simulation of indirect illumination (light reflected from real-world surfaces onto the virtual objects) using region capture of 2D textures from the AR camera view. The third is rendering the virtual objects with proper lighting and shadowing characteristics using shader language over multiple passes. Finally, we tested our work under multiple lighting conditions, evaluating accuracy by checking that the shadows cast by the virtual objects remain consistent with the shadows cast by the real objects, at a reduced performance cost.
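    As an illustration of the first step (direct light estimation from a panoramic feed), the following minimal sketch locates the brightest region of an equirectangular frame and converts it into a dominant light direction. The function name and the equirectangular assumption are illustrative, not the authors' actual pipeline.

```python
# Minimal sketch of direct light estimation from a 360-degree camera frame,
# assuming an equirectangular projection; names are illustrative, not the paper's API.
import cv2
import numpy as np

def estimate_light_direction(equirect_bgr):
    """Return a unit vector pointing toward the brightest region of the panorama."""
    gray = cv2.cvtColor(equirect_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gray = cv2.GaussianBlur(gray, (31, 31), 0)            # suppress pixel-level noise
    _, _, _, max_loc = cv2.minMaxLoc(gray)                 # brightest blob = dominant light
    u, v = max_loc                                         # (column, row) of the maximum
    h, w = gray.shape
    # Map pixel coordinates to spherical angles (equirectangular projection).
    theta = (u / w) * 2.0 * np.pi - np.pi                  # azimuth in [-pi, pi]
    phi = (0.5 - v / h) * np.pi                            # elevation in [-pi/2, pi/2]
    # Spherical to Cartesian direction (y up).
    return np.array([np.cos(phi) * np.sin(theta),
                     np.sin(phi),
                     np.cos(phi) * np.cos(theta)])

# frame = cv2.imread("panorama.jpg")                       # one live-feed frame
# light_dir = estimate_light_direction(frame)              # feed this to the AR renderer
```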

    Object-based Illumination Estimation with Rendering-aware Neural Networks

    We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas. Conventional inverse rendering is too computationally demanding for real-time applications, and the performance of purely learning-based techniques may be limited by the meager input data available from individual objects. To address these issues, we propose an approach that takes advantage of physical principles from inverse rendering to constrain the solution, while also utilizing neural networks to expedite the more computationally expensive portions of its processing, to increase robustness to noisy input data, and to improve temporal and spatial stability. The result is a rendering-aware system that estimates the local illumination distribution at an object with high accuracy and in real time. With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent with the real scene, leading to improved realism. Comment: ECCV 202
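    To make the "rendering-aware" idea concrete, the sketch below shows one generic way such a system could be set up: a small CNN predicts second-order spherical-harmonic lighting from an RGBD object crop, and a rendering loss re-shades the object from its normals so that the estimate stays physically consistent. The architecture, layer sizes, and names here are assumptions for illustration, not the authors' network.

```python
# A minimal sketch (not the authors' model): CNN -> SH lighting, plus a rendering-aware
# loss that re-renders diffuse shading from the predicted lighting and surface normals.
import torch
import torch.nn as nn

class SHLightNet(nn.Module):
    def __init__(self, sh_bands=9):                        # 9 SH coefficients per RGB channel
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 3 * sh_bands)

    def forward(self, rgbd):                               # rgbd: (B, 4, H, W)
        return self.head(self.features(rgbd)).view(-1, 3, 9)

def sh_basis(normals):                                     # normals: (B, 3, H, W), unit length
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    one = torch.ones_like(x)
    return torch.stack([one, y, z, x, x * y, y * z,
                        3 * z * z - 1, x * z, x * x - y * y], dim=1)   # (B, 9, H, W)

def rendering_loss(pred_sh, normals, albedo, observed):
    # Re-shade the object with the predicted lighting and compare to its observed appearance.
    shading = torch.einsum("bck,bkhw->bchw", pred_sh, sh_basis(normals))
    return nn.functional.l1_loss(albedo * shading, observed)
```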

    The Iray Light Transport Simulation and Rendering System

    While ray tracing has become increasingly common and path tracing is well understood by now, a major challenge lies in crafting an easy-to-use and efficient system that implements these technologies. Following a purely physically based paradigm while still allowing for artistic workflows, the Iray light transport simulation and rendering system renders complex scenes at the push of a button, making accurate light transport simulation widely available. In this document we discuss the challenges and implementation choices that follow from our primary design decisions, demonstrating that such a rendering system can be made a practical, scalable, and efficient real-world application; it has been adopted by various companies across many fields and is in use by many industry professionals today.

    Projector-Based Augmentation

    Projector-based augmentation approaches hold the potential of combining the advantages of well-established spatial virtual reality and spatial augmented reality. Immersive, semi-immersive, and augmented visualizations can be realized in everyday environments – without the need for special projection screens and dedicated display configurations. Limitations of mobile devices, such as low resolution, a small field of view, focus constraints, and ergonomic issues, can be overcome in many cases by the use of projection technology. Thus, applications that do not require mobility can benefit from efficient spatial augmentations. Examples range from edutainment in museums (such as storytelling projections onto natural stone walls in historical buildings) to architectural visualizations (such as augmentations of complex illumination simulations or modified surface materials in real building structures). This chapter describes projector-camera methods and multi-projector techniques that aim at correcting geometric aberrations, compensating for local and global radiometric effects, and improving the focus properties of images projected onto everyday surfaces.
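    The radiometric-compensation part of such projector-camera methods is often described with a simple per-pixel linear model: captured = projected × surface_modulation + environment, solved for the projector input that makes a textured surface show a desired image. The sketch below illustrates that idea; the linear model and the calibration procedure in the comments are common practice in this literature, not necessarily the chapter's exact method.

```python
# Per-pixel radiometric compensation sketch under a simple linear projector-camera model;
# the model and the calibration steps described below are assumptions for illustration.
import numpy as np

def compensate(desired, surface_modulation, environment, eps=1e-3):
    """desired, environment: HxWx3 images in [0,1]; surface_modulation: HxWx3 per-pixel gain."""
    projected = (desired - environment) / np.maximum(surface_modulation, eps)
    return np.clip(projected, 0.0, 1.0)       # values outside [0,1] cannot be fully compensated

# surface_modulation and environment would come from projecting white and black calibration
# frames onto the surface and capturing them with the camera:
#   environment        = capture(black_frame)
#   surface_modulation = capture(white_frame) - environment
```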

    Static scene illumination estimation from video with applications

    We present a system that automatically recovers scene geometry and illumination from a video, providing a basis for various applications. Previous image-based illumination estimation methods require either user interaction or external information in the form of a database. We adopt structure-from-motion and multi-view stereo for initial scene reconstruction, and then estimate an environment map represented by spherical harmonics (as these perform better than other bases). We also demonstrate several video editing applications that exploit the recovered geometry and illumination, including object insertion (e.g., for augmented reality), shadow detection, and video relighting.
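    Once the environment map is recovered as spherical-harmonic coefficients, inserted objects can be relit with the standard nine-coefficient diffuse irradiance formula of Ramamoorthi and Hanrahan. The sketch below shows that evaluation; the coefficient ordering and the shading comment at the end are assumptions for illustration, not the paper's exact code.

```python
# Diffuse irradiance from a second-order SH environment map (Ramamoorthi & Hanrahan);
# L is assumed to be ordered L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22.
import numpy as np

c1, c2, c3, c4, c5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

def irradiance(L, n):
    """L: (9, 3) RGB SH lighting coefficients; n: unit surface normal -> RGB irradiance."""
    x, y, z = n
    return (c4 * L[0]
            + 2.0 * c2 * (L[3] * x + L[1] * y + L[2] * z)
            + c3 * L[6] * z * z - c5 * L[6]
            + c1 * L[8] * (x * x - y * y)
            + 2.0 * c1 * (L[4] * x * y + L[5] * y * z + L[7] * x * z))

# Shading an inserted object (e.g. for the object-insertion application):
#   color = albedo * irradiance(L_recovered, surface_normal)
```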

    TangiPaint: Interactive tangible media

    Currently, there is a wide disconnect between the real and virtual worlds in computer graphics. Art created with textured paints on canvas has visual effects that naturally supplement simple color: real paint exhibits shadows and highlights that change in response to viewing and lighting directions, and the colors interact with the environment to produce very noticeable effects. Additionally, the traditional means of human-computer interaction, the keyboard and mouse, is unnatural and inefficient: gestures and actions are not performed on the objects themselves. These visual effects and natural interactions are missing from digital media in the virtual world, and their absence disconnects users from their content. Our research looks into simulating these missing pieces and reconnecting users. TangiPaint is an interactive, tangible application for creating and exploring digital media. It gives the experience of working with real materials, such as oil paints and textured canvases, on a digital display. TangiPaint implements natural gestures and allows users to directly interact with their work. The Tangible Display technology allows users to tilt and reorient the device and screen to see the subtle gloss, shadow, and impasto lighting effects of the simulated surface. To simulate realistic lighting effects we use a Ward BRDF illumination model, implemented as an OpenGL shader program. Our system tracks the texture and relief of a piece of art by saving topographical information: height fields, normal vectors, and parameter maps store this information, and these textures are submitted to the lighting model that renders the final result. TangiPaint builds on previous work and applications in this area, but is the first to integrate these aspects into a single software application. The system is entirely self-contained and implemented on the Apple iOS platforms: iPhone, iPad, and iPod Touch. No additional hardware is required and the interface is easy to learn and use. TangiPaint is a step in the direction of interactive digital art media that look and behave like real materials.
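    For reference, the specular term of the isotropic Ward BRDF that such a shader would evaluate is sketched below, written in Python rather than GLSL for illustration; the parameter names and the final shading comment are assumptions, not TangiPaint's actual shader code.

```python
# Isotropic Ward BRDF specular term (illustrative Python version of what a Ward shader computes).
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def ward_specular(n, l, v, rho_s, alpha):
    """n: surface normal, l: direction to light, v: direction to viewer (all unit vectors);
    rho_s: specular reflectance, alpha: surface roughness."""
    n_dot_l, n_dot_v = np.dot(n, l), np.dot(n, v)
    if n_dot_l <= 0.0 or n_dot_v <= 0.0:
        return 0.0
    h = normalize(l + v)                                    # half vector
    n_dot_h = np.dot(n, h)
    tan2_delta = (1.0 - n_dot_h ** 2) / (n_dot_h ** 2)      # tan^2 of the half-vector angle
    return (rho_s / (4.0 * np.pi * alpha ** 2 * np.sqrt(n_dot_l * n_dot_v))
            * np.exp(-tan2_delta / alpha ** 2))

# A pixel's colour then follows the usual shading sum, e.g.:
#   color = albedo * n_dot_l + ward_specular(n, l, v, rho_s, alpha) * n_dot_l
```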