6 research outputs found

    Dynamic Illumination for Augmented Reality with Real-Time Interaction

    Current augmented and mixed reality systems lack correct illumination modeling, in which virtual objects are rendered under the same lighting conditions as the real environment. While the entertainment industry achieves astonishing results across multiple media forms, its pipelines are mostly offline. Instead, illumination information extracted from the physical scene can be used to render the virtual objects interactively, producing a more realistic output in real time. In this paper, we present a method that detects the physical illumination in a dynamic scene and then uses the extracted illumination to render the virtual objects added to it. The method has three steps that run concurrently in real time. The first is estimating the direct illumination (incident light) of the physical scene with computer vision techniques, using a live-feed 360° camera connected to the AR device. The second is simulating the indirect illumination (reflected light) bounced from real-world surfaces onto the virtual objects, using region capture of a 2D texture from the AR camera view. The third is defining the virtual objects' lighting and shadowing characteristics in shader language over multiple rendering passes. Finally, we tested our work under multiple lighting conditions, evaluating accuracy by checking that the shadows cast by the virtual objects remain consistent with those cast by real objects, at a reduced performance cost.
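    A minimal sketch of the first step, under illustrative assumptions: threshold one equirectangular frame from the 360° camera, take the centroid of the bright region, and map it to a light direction. The function name and threshold value are hypothetical, not the paper's implementation.

```python
# Sketch: estimate an incident-light direction from a 360-degree camera frame.
# Assumes an equirectangular image; threshold and mapping are illustrative.
import numpy as np
import cv2

def estimate_light_direction(frame_bgr, thresh=240):
    """Return a unit light-direction vector, or None if no bright source."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None  # no pixel exceeded the brightness threshold
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    h, w = gray.shape
    # Equirectangular mapping: x -> azimuth (longitude), y -> elevation (latitude).
    azimuth = (cx / w) * 2.0 * np.pi - np.pi
    elevation = np.pi / 2.0 - (cy / h) * np.pi
    return np.array([
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
        np.cos(elevation) * np.cos(azimuth),
    ])
```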

    An Empirical Evaluation of the Performance of Real-Time Illumination Approaches: Realistic Scenes in Augmented Reality

    Although augmented, virtual, and mixed reality (AR/VR/MR) systems have matured and many applications have accomplished significant results, rendering a virtual object under the appropriate illumination model of the real environment is still under investigation. The entertainment industry has presented astounding outcomes in several media forms, albeit with a rendering process that is mostly done offline. The physical scene contains illumination information that can be sampled and then used to render virtual objects in real time for a realistic scene. In this paper, we evaluate the accuracy of our previously and currently developed systems, which provide real-time dynamic illumination for coherent interactive augmented reality, based on the virtual object's appearance in relation to the real world and related criteria. The system achieves this through three simultaneous aspects. (1) The first is to estimate the incident light angle in the real environment using a live-feed 360° camera mounted on an AR device. (2) The second is to simulate the reflected light via two routes: (a) global cube map construction and (b) local sampling. (3) The third is to define the shading properties of the virtual object so as to depict correct lighting and suitable shadowing. Finally, the performance efficiency of both routes is examined to reduce the overall cost, and the results are evaluated through shadow observation and a user study.
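    A minimal sketch of the local-sampling route, under illustrative assumptions: average a small patch of the AR camera view near the virtual object's screen-space footprint and use the mean color as an indirect-light tint. The function name, patch size, and simple mean are hypothetical stand-ins for the evaluated system; the global route would instead bake such samples into cube map faces.

```python
# Sketch: local sampling of the AR camera view as a cheap indirect-light term.
import numpy as np

def local_indirect_sample(camera_rgb, center_xy, radius=24):
    """Mean color of a square patch around the object's screen position."""
    h, w, _ = camera_rgb.shape
    x, y = center_xy
    x0, x1 = max(0, x - radius), min(w, x + radius)
    y0, y1 = max(0, y - radius), min(h, y + radius)
    patch = camera_rgb[y0:y1, x0:x1].astype(np.float32)
    return patch.reshape(-1, 3).mean(axis=0) / 255.0  # RGB tint in [0, 1]
```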

    Augmented Reality Using Video from a Stationary Camera

    This thesis addresses the concept of augmented reality over video from a stationary camera. Its goal is to provide an interactive scene editor prototype that allows virtual objects to be created in the scene, compensating for the limited ability to detect static objects in footage from stationary cameras. The result is a scene editor developed in the Unity game engine that offers tools for creating simple objects, and whose output can be used in Unity projects.

    Real-time Illumination and Visual Coherence for Photorealistic Augmented/Mixed Reality

    Realistically inserting a virtual object into a real-time physical environment is a desirable feature of augmented reality (AR) and, more generally, mixed reality (MR) applications. The problem is a vital research area in computer graphics and one of ongoing discovery. Algorithms for dynamic, real-time illumination measurement, estimation, and rendering of augmented reality scenes are used in many applications to achieve realistic human perception of the result. The continuous development of computer vision and machine learning techniques, alongside classic computer graphics and image processing methods, has produced a significant range of novel AR/MR techniques. These include methods for light source acquisition through image-based lighting or sampling, registration and estimation of lighting conditions, and composition of global illumination. In this review, we discuss the pipeline stages in detail, elaborating on the methods and techniques that have contributed to photo-realistic rendering, visual coherence, and interactive real-time illumination in AR/MR.
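    A schematic of those pipeline stages as a hypothetical skeleton; the stage names follow the review, but every identifier below is invented for illustration and is not an API from any surveyed system.

```python
# Sketch: acquisition -> estimation -> composition, the reviewed pipeline shape.
from dataclasses import dataclass
import numpy as np

@dataclass
class LightingEstimate:
    direction: np.ndarray  # dominant incident-light direction (unit vector)
    ambient: np.ndarray    # mean RGB radiance used as an indirect term

def acquire(env_image_rgb: np.ndarray) -> np.ndarray:
    """Stage 1: image-based lighting - treat the camera image as a radiance map."""
    return env_image_rgb.astype(np.float32) / 255.0

def estimate(radiance: np.ndarray) -> LightingEstimate:
    """Stage 2: estimate lighting via a luminance-weighted centroid."""
    lum = radiance @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luma weights
    ys, xs = np.indices(lum.shape)
    total = lum.sum() + 1e-6
    cy, cx = (ys * lum).sum() / total, (xs * lum).sum() / total
    h, w = lum.shape
    az = (cx / w) * 2 * np.pi - np.pi   # equirectangular x -> azimuth
    el = np.pi / 2 - (cy / h) * np.pi   # equirectangular y -> elevation
    direction = np.array([np.cos(el) * np.sin(az),
                          np.sin(el),
                          np.cos(el) * np.cos(az)])
    return LightingEstimate(direction, radiance.reshape(-1, 3).mean(axis=0))

def compose(est: LightingEstimate) -> dict:
    """Stage 3: pack the estimate as shader uniforms for GI composition."""
    return {"u_light_dir": est.direction, "u_ambient": est.ambient}
```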

    Extraction and Integration of Physical Illumination in Dynamic Augmented Reality Environments

    Indiana University-Purdue University Indianapolis (IUPUI)
    Although current augmented, virtual, and mixed reality (AR/VR/MR) systems offer advanced and immersive experiences in the entertainment industry across countless media forms, these systems lack correct direct and indirect illumination modeling, in which virtual objects are rendered under the same lighting conditions as the real environment. Some systems compensate for the missing real-time global illumination (GI) with baked GI, pre-recorded textures, and light probes, which are mostly produced offline. Instead, illumination information can be extracted from the physical scene to interactively render the virtual objects into the real world, producing a more realistic final scene in real time. This work approaches the problem of visual coherence in AR by proposing a system that detects the real-world lighting conditions in dynamic scenes and then uses the extracted illumination information to render the objects added to the scene. The system comprises several major components. First, incident light (direct illumination) is detected in the physical scene with computer vision techniques based on topological structural analysis of 2D images, using a live-feed 360-degree camera instrumented on an AR device that captures the entire radiance map; physics-based light polarization eliminates or reduces false-positive lights such as white surfaces, reflections, or glare, which would otherwise degrade the detection. Second, the reflected light (indirect illumination) that bounces between real-world surfaces is simulated and rendered onto the virtual objects, so that those surfaces' presence is reflected in the virtual world. Third, the shading properties of the virtual object are defined to depict correct lighting with suitable shadow casting. Fourth, the geometric properties of the real scene, including plane detection, 3D surface reconstruction, and simple meshing, are incorporated into the virtual scene for more realistic depth interactions between real and virtual objects. These components are implemented as methods assumed to run simultaneously in real time for photo-realistic AR. The system is tested under several lighting conditions, evaluating accuracy by the error between the shadows cast by real and virtual objects and their interactions. For efficiency, the rendering time is compared against previous work, human perception is further evaluated through a user study, and the overall performance of the system is tuned to keep its cost to a minimum.
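    A minimal sketch of the direct-light detection step: threshold a camera frame and run topological structural analysis (the Suzuki-Abe algorithm behind OpenCV's findContours) to locate candidate light-source blobs. The area filter below is only an illustrative stand-in for the polarization-based rejection of glare and white surfaces described in the thesis.

```python
# Sketch: contour-based detection of bright blobs as candidate light sources.
import cv2

def detect_light_blobs(frame_bgr, thresh=245, min_area=50):
    """Return contours of candidate light sources in one camera frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Small blobs are likely reflections or glare rather than emitters; the
    # thesis suppresses such false positives physically, via light polarization.
    return [c for c in contours if cv2.contourArea(c) >= min_area]
```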

    LightSkin: Global Real-Time Illumination for Virtual and Augmented Reality

    In nature, each interaction of light is bound to a global context; every observable natural light phenomenon is therefore the result of global illumination. It arises from manifold laws of absorption, reflection, and refraction that are mostly too complex to simulate under the real-time constraints of interactive applications. Many interactive applications therefore do not yet simulate global illumination phenomena, which results in unrealistic, synthetic-looking renderings. This is especially problematic for virtual reality and augmented reality applications, where the user should experience the simulation as realistically as possible. In this thesis we present a novel approach called LightSkin that calculates global illumination phenomena in real time. The approach was developed specifically for virtual and augmented reality applications and satisfies the constraints those applications impose. As part of the approach we introduce a novel interpolation scheme capable of calculating realistic indirect illumination from a small number of supporting points (caches) distributed over model surfaces. Each supporting point creates its own proxy light sources, which compactly represent the entire indirect illumination at that point; these proxy light sources are then linearly interpolated to obtain dense results for the whole visible scene. Thanks to an efficient GPU implementation, the method is very fast and supports complex, fully dynamic scenes. The approach can simulate diffuse and glossy indirect reflections, soft shadows, and multiple subsurface scattering without neglecting filigree surface details. Furthermore, it can be adapted to augmented reality applications, providing mutual global illumination effects between dynamic real and virtual objects using an active RGB-D sensor. In contrast to existing interactive global illumination approaches, ours supports all kinds of animations and handles them more efficiently, requiring no extra calculations and producing no disturbing temporal artifacts. This thesis contains all the information needed to understand, implement, and evaluate the LightSkin approach, and also provides a comprehensive overview of the related field of research.
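    A minimal CPU-side sketch of the cache-and-proxy-light idea: indirect light is stored as proxy point lights at sparse surface caches and blended across the surface. The inverse-distance weighting, k-nearest selection, and diffuse-only evaluation are simplifying assumptions; the thesis interpolates the proxy lights themselves on the GPU.

```python
# Sketch: diffuse indirect radiance at a shading point, blended from the
# proxy lights of its nearest surface caches (LightSkin-style interpolation).
import numpy as np

def shade_indirect(point, normal, caches, k=4, eps=1e-6):
    """caches: list of (cache_position, proxy_light_position, proxy_rgb)."""
    dists = np.array([np.linalg.norm(point - c[0]) for c in caches])
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + eps)   # closer caches dominate
    weights /= weights.sum()
    radiance = np.zeros(3)
    for w, i in zip(weights, nearest):
        _, light_pos, rgb = caches[i]
        to_light = light_pos - point
        dist2 = to_light @ to_light + eps
        cos_term = max(0.0, normal @ (to_light / np.sqrt(dist2)))
        radiance += w * cos_term / dist2 * np.asarray(rgb)
    return radiance
```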