Dynamic Illumination for Augmented Reality with Real-Time Interaction
Current augmented and mixed reality systems lack correct illumination modeling, in which virtual objects are rendered under the same lighting conditions as the real environment. While the entertainment industry achieves astonishing results across multiple media forms, the process is mostly accomplished offline. By contrast, illumination information extracted from the physical scene can be used to render virtual objects interactively, yielding more realistic output in real-time. In this paper, we present a method that detects the physical illumination in dynamic scenes and then uses the extracted illumination to render the virtual objects added to the scene. The method has three steps that are assumed to run concurrently in real-time. The first is the estimation of direct illumination (incident light) from the physical scene using computer vision techniques, through a 360° live-feed camera connected to the AR device. The second is the simulation of indirect illumination (reflected light) from real-world surfaces onto the virtual objects, using region capture of a 2D texture from the AR camera view. The third is defining the virtual objects with proper lighting and shadowing characteristics using a shading language over multiple passes. Finally, we tested our work under multiple lighting conditions to evaluate the accuracy of the results, based on whether the shadows cast by the virtual objects are consistent with the shadows cast by the real objects, at a reduced performance cost.
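The second step above — approximating bounced light from a captured region of the camera image — can be illustrated with a minimal sketch. This is an assumption-laden stand-in, not the paper's actual texture-capture pipeline: it simply averages the colors in a circular region of the frame around the virtual object's footprint and uses that as an ambient bounce term.

```python
import numpy as np

def region_bounce_color(frame, center, radius):
    """Average the colors in a circular region of the camera frame.

    A rough proxy for the indirect (bounced) light that nearby real
    surfaces would cast onto a virtual object placed at `center`.
    All names and parameters here are illustrative.
    """
    h, w, _ = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = center
    mask = (ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2
    return frame[mask].mean(axis=0)  # per-channel average, e.g. RGB

# Usage: tint the virtual object's ambient term with the sampled color.
frame = np.full((120, 160, 3), 0.25)   # stand-in camera image, gray
frame[60:, :, 0] = 0.9                 # reddish floor under the object
bounce = region_bounce_color(frame, (80, 80), 20)
```

A reddish floor region then pulls the virtual object's ambient tint toward red, which is the qualitative effect the paper's region-capture step aims for.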
Learning Lightprobes for Mixed Reality Illumination
This paper presents the first photometric registration pipeline for Mixed Reality based on high-quality illumination estimation with convolutional neural network (CNN) methods. For easy adaptation and deployment of the system, we train the CNNs using purely synthetic images and apply them to real image data. To keep the pipeline accurate and efficient, we propose to fuse the light-estimation results from multiple CNN instances, and we show an approach for caching estimates over time. For optimal performance, we furthermore explore multiple strategies for CNN training. Experimental results show that the proposed method yields highly accurate estimates for photo-realistic augmentations.
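The two pipeline ideas — fusing estimates from multiple CNN instances and caching estimates over time — can be sketched as follows. The weighted average and the exponential smoothing are assumptions for illustration; the paper's actual fusion and caching strategies may differ.

```python
import numpy as np

def fuse_estimates(estimates, weights=None):
    """Fuse per-instance light estimates (e.g. vectors of lighting
    coefficients) by a weighted average. Uniform weights by default;
    the weighting scheme is an assumption."""
    est = np.asarray(estimates, dtype=float)
    w = np.ones(len(est)) if weights is None else np.asarray(weights, float)
    return (w[:, None] * est).sum(axis=0) / w.sum()

class TemporalCache:
    """Exponentially smooth estimates across frames to reduce flicker,
    a simple stand-in for caching estimates over time."""
    def __init__(self, alpha=0.5):
        self.alpha, self.state = alpha, None

    def update(self, estimate):
        e = np.asarray(estimate, dtype=float)
        self.state = e if self.state is None else (
            self.alpha * e + (1 - self.alpha) * self.state)
        return self.state

fused = fuse_estimates([[1.0, 0.0], [0.0, 1.0]])  # two CNN instances
cache = TemporalCache(alpha=0.5)
cache.update([0.0, 0.0])
smoothed = cache.update(fused)
```

The smoothing factor trades responsiveness to lighting changes against temporal stability of the augmentation.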
Sequential Monte Carlo Instant Radiosity
Instant Radiosity and its derivatives are interactive methods for efficiently estimating global (indirect) illumination. They represent the last indirect bounce of illumination before the camera as the composite radiance field emitted by a set of virtual point light sources (VPLs). In complex scenes, current algorithms suffer from a difficult combination of two issues: it remains a challenge to distribute VPLs in a manner that simultaneously gives a high-quality indirect illumination solution for each frame, and does so in a temporally coherent manner. We address both issues by building, and maintaining over time, an adaptive and temporally coherent distribution of VPLs in locations where they bring indirect light to the image. We introduce a novel heuristic sampling method that strives to move as few of the VPLs between frames as possible. The result is, to the best of our knowledge, the first interactive global illumination algorithm that works in complex, highly occluded scenes, suffers little from temporal flickering, supports moving cameras and light sources, and is output-sensitive in the sense that it places VPLs in locations that matter most to the final result.
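The core maintenance idea — keep the VPLs that still matter to the image and resample only the rest — can be shown as a toy update step. The contribution scoring, the fixed keep fraction, and the sampler are all placeholders; the paper's heuristic is considerably more involved.

```python
def update_vpls(vpls, contribution, sample_new, keep_fraction=0.8):
    """Per-frame VPL maintenance: retain the VPLs that contribute most
    to the current frame and resample only the remainder, so that few
    VPLs move between frames (which is what keeps the result
    temporally coherent).

    vpls: list of VPL records; contribution(v) scores a VPL's effect
    on the current image; sample_new() draws a fresh VPL. All three
    are stand-ins for the actual renderer's machinery.
    """
    ranked = sorted(vpls, key=contribution, reverse=True)
    n_keep = int(len(ranked) * keep_fraction)
    kept = ranked[:n_keep]                       # unchanged between frames
    fresh = [sample_new() for _ in range(len(vpls) - n_keep)]
    return kept + fresh

# Toy usage: integer "VPLs" scored by their own value, fresh ones are -1.
result = update_vpls(list(range(10)),
                     contribution=lambda v: v,
                     sample_new=lambda: -1)
```

Because 80% of the set survives unchanged, frame-to-frame flicker from wholesale resampling is avoided, at the cost of slower adaptation when lights or camera move quickly.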
Enhanced Shadow Retargeting with Light-Source Estimation Using Flat Fresnel Lenses
Shadow retargeting maps the appearance of real shadows to virtual shadows under corresponding deformations of scene geometry, such that appearance is seamlessly maintained. By performing virtual shadow reconstruction from unoccluded real-shadow samples observed in the camera frame, the method efficiently recovers deformed shadow appearance. In this manuscript, we introduce a light-estimation approach that enables light-source detection using flat Fresnel lenses, allowing the method to work without a set of pre-established conditions. We extend the approach to handle scenarios with multiple receiver surfaces and a non-grounded occluder with high accuracy. Results are presented on a range of objects, deformations, and illumination conditions in real-time Augmented Reality (AR) on a mobile device. We demonstrate a practical application of the method in generating otherwise laborious in-betweening frames for 3D-printed stop-motion animation.
Extraction and Integration of Physical Illumination in Dynamic Augmented Reality Environments
Indiana University-Purdue University Indianapolis (IUPUI)
Although current augmented, virtual, and mixed reality (AR/VR/MR) systems offer advanced and immersive experiences in the entertainment industry across countless media forms, these systems lack correct direct and indirect illumination modeling, in which virtual objects are rendered under the same lighting conditions as the real environment. Some systems use baked global illumination (GI), pre-recorded textures, and light probes, mostly produced offline, to stand in for real-time GI. Instead, illumination information can be extracted from the physical scene to interactively render virtual objects into the real world, producing a more realistic final scene in real-time. This work approaches the problem of visual coherence in AR by proposing a system that detects the real-world lighting conditions in dynamic scenes and then uses the extracted illumination information to render the objects added to the scene. The system comprises several major components to achieve a more realistic augmented reality outcome. First, the detection of incident light (direct illumination) from the physical scene using computer vision techniques based on topological structural analysis of 2D images, with a live-feed 360-degree camera mounted on the AR device that captures the entire radiance map. In addition, physics-based light polarization eliminates or reduces false-positive lights, such as white surfaces, reflections, or glare, which negatively affect the light-detection process. Second, the simulation of reflected light (indirect illumination) that bounces between real-world surfaces, rendered onto the virtual objects to reflect their presence in the virtual world. Third, the definition of the shading characteristics and properties of the virtual objects to depict correct lighting with suitable shadow casting.
Fourth, the geometric properties of the real scene, including plane detection, 3D surface reconstruction, and simple meshing, are incorporated into the virtual scene for more realistic depth interactions between real and virtual objects. These components are developed as methods assumed to run simultaneously in real-time for photo-realistic AR. The system is tested under several lighting conditions to evaluate the accuracy of the results, based on the error between the shadows cast by real and virtual objects and their interactions. For system efficiency, the rendering time is compared against previous work. A further evaluation of human perception is conducted through a user study. The overall performance of the system is investigated to reduce its cost to a minimum.
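The first component — detecting incident light from a 360-degree radiance map — can be sketched minimally. This stand-in replaces the thesis's contour-based (topological structural) analysis with a simple brightness threshold plus centroid, and maps the resulting equirectangular pixel to a unit direction; the threshold fraction and coordinate convention are assumptions.

```python
import numpy as np

def dominant_light_direction(equirect_luma, fraction=0.9):
    """Estimate the direction of the brightest light source from an
    equirectangular luminance image (H rows = latitude, W cols =
    longitude). Threshold near the maximum, take the centroid of the
    bright pixels, and convert that pixel to a unit direction vector
    (y is up). A minimal stand-in for contour-based detection.
    """
    h, w = equirect_luma.shape
    thresh = fraction * equirect_luma.max()
    ys, xs = np.nonzero(equirect_luma >= thresh)
    cy, cx = ys.mean(), xs.mean()
    # Pixel -> spherical angles: theta is the polar angle (0 at the
    # top of the image), phi the azimuth.
    theta = np.pi * (cy + 0.5) / h
    phi = 2.0 * np.pi * (cx + 0.5) / w
    return np.array([np.sin(theta) * np.cos(phi),
                     np.cos(theta),
                     np.sin(theta) * np.sin(phi)])

# Usage: a bright patch near the top of the map reads as overhead light.
luma = np.full((180, 360), 0.1)
luma[0:5, 178:183] = 1.0
direction = dominant_light_direction(luma)
```

The real system additionally uses polarization to suppress false positives (glare, white surfaces) that a plain brightness threshold like this one would happily mistake for lights.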
Intermediated reality
Real-time solutions for reducing the gap between the virtual and physical worlds for photorealistic interactive Augmented Reality (AR) are presented. First, a method of texture deformation with image inpainting provides a proof of concept for convincingly re-animating fixed physical objects through digital displays with seamless visual appearance. This, in combination with novel methods for image-based retargeting of real shadows to deformed virtual poses and environment illumination estimation using inconspicuous flat Fresnel lenses, brings real-world props to life in compelling, practical ways. Live AR animation provides the key basis for deforming real-world physical facial props driven by interactive facial performance capture. This enables Intermediated Reality (IR): a tele-present AR framework that drives mediated communication and collaboration for multiple users through the remote possession of toys brought to life. The IR framework provides the foundation for prototype applications in physical avatar chat communication, stop-motion animation movie production, and immersive video games. Specifically, a new approach is demonstrated that reduces the number of physical configurations needed for a stop-motion animation movie by generating the in-between frames digitally in AR. AR-generated frames preserve the natural appearance and achieve smooth transitions between real-world keyframes and digitally generated in-betweens. Finally, the methods integrate across the entire Reality-Virtuality Continuum to target new game experiences called Multi-Reality games. This gaming experience makes an evolutionary step toward the convergence of real and virtual game characters for visceral digital experiences.
LightSkin: Real-Time Global Illumination for Virtual and Augmented Reality
In nature, each interaction of light is bound to a global context; thus, every observable natural light phenomenon is the result of global illumination. It arises from manifold laws of absorption, reflection, and refraction, which are mostly too complex to simulate under the real-time constraints of interactive applications. Therefore, many interactive applications do not yet support the simulation of these global illumination phenomena, which results in unrealistic, synthetic-looking renderings. This unrealistic rendering becomes a particular problem in virtual reality and augmented reality applications, where the user should experience the simulation as realistically as possible. In this thesis we present a novel approach called LightSkin that calculates global illumination phenomena in real-time. The approach was developed especially for virtual reality and augmented reality applications, satisfying several constraints that come with those applications. As part of the approach we introduce a novel interpolation scheme that is capable of calculating realistic indirect illumination from a small number of supporting points distributed on model surfaces. Each supporting point creates its own proxy light sources, which compactly represent the whole indirect illumination for that point. These proxy light sources are then linearly interpolated to obtain dense results for the entire visible scene. Due to an efficient GPU implementation, the method is very fast and supports complex, dynamic scenes. Based on the approach, it is possible to simulate diffuse and glossy indirect reflections, soft shadows, and multiple subsurface scattering phenomena without neglecting filigree surface details. Furthermore, the method can be adapted to augmented reality applications to provide mutual global illumination effects between dynamic real and virtual objects using an active RGB-D sensor device. In contrast to existing interactive global illumination approaches, our approach supports all kinds of animations, handling them more efficiently, without requiring extra calculations or producing disturbing temporal artifacts. This thesis contains all the information needed to understand, implement, and evaluate the novel LightSkin approach, and also provides a comprehensive overview of the related field of research.
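The interpolation idea at the heart of LightSkin — sparse cache points, each summarizing its indirect illumination, interpolated across the visible surface — can be sketched with a toy scheme. Inverse-distance weighting is an assumption standing in for the thesis's actual interpolation; `cache_points` and `cache_irradiance` are illustrative names.

```python
import numpy as np

def interpolate_indirect(point, cache_points, cache_irradiance, eps=1e-6):
    """Interpolate indirect illumination at `point` from a sparse set
    of cache points, each holding an irradiance value (e.g. RGB)
    computed from its proxy light sources. Inverse-distance weighting
    is used here for simplicity.
    """
    d = np.linalg.norm(cache_points - point, axis=1)
    w = 1.0 / (d + eps)           # nearer caches dominate
    w /= w.sum()
    return (w[:, None] * cache_irradiance).sum(axis=0)

# Usage: a point midway between a red-lit and a blue-lit cache point
# receives a blend of both.
caches = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
irradiance = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
result = interpolate_indirect(np.array([1.0, 0.0, 0.0]), caches, irradiance)
```

Evaluating proxy lights only at the caches and interpolating everywhere else is what keeps the per-frame cost low enough for fully dynamic scenes.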