6 research outputs found

    Synthesis of environment maps for mixed reality

    When rendering virtual objects in a mixed reality application, it is helpful to have access to an environment map that captures the appearance of the scene from the perspective of the virtual object. It is straightforward to render virtual objects into such maps, but capturing and correctly rendering the real components of the scene into the map is much more challenging. This information is often recovered from physical light probes, such as reflective spheres or fisheye cameras, placed at the location of the virtual object in the scene. For many application areas, however, real light probes would be intrusive or impractical. Ideally, all of the information necessary to produce detailed environment maps could be captured using a single device. We introduce a method using an RGBD camera and a small fisheye camera, contained in a single unit, to create environment maps at any location in an indoor scene. The method combines the output from both cameras to correct for their limited field of view and the displacement from the virtual object, producing complete environment maps suitable for rendering the virtual content in real time. Our method improves on previous probeless approaches by its ability to recover high-frequency environment maps. We demonstrate how this can be used to render virtual objects which shadow, reflect, and refract their environment convincingly.
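
    The abstract does not give implementation details, but the core re-projection step can be sketched as follows: back-project the RGBD frame to 3D points, shift the origin to the virtual object, and splat the colours into an equirectangular environment map (the fisheye view would fill the directions the RGBD camera misses). Function names, resolutions, and coordinate conventions here are assumptions, not taken from the paper.

```python
# Hedged sketch: re-project one RGBD frame into an equirectangular environment map
# centred on a virtual object. Intrinsics (fx, fy, cx, cy) and obj_pos are assumed inputs.
import numpy as np

def rgbd_to_envmap(color, depth, fx, fy, cx, cy, obj_pos, env_h=256, env_w=512):
    """color: (H, W, 3) floats, depth: (H, W) metres, obj_pos: (3,) in camera space."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    # Back-project valid pixels to 3D points in the camera frame (pinhole model).
    x = (us[valid] - cx) / fx * z
    y = (vs[valid] - cy) / fy * z
    pts = np.stack([x, y, z], axis=-1) - obj_pos        # origin moved to the virtual object
    d = pts / (np.linalg.norm(pts, axis=-1, keepdims=True) + 1e-9)
    theta = np.arccos(np.clip(d[:, 1], -1.0, 1.0))      # polar angle measured from +Y
    phi = np.arctan2(d[:, 2], d[:, 0])                  # azimuth
    row = np.clip(theta / np.pi * env_h, 0, env_h - 1).astype(int)
    col = np.clip((phi + np.pi) / (2 * np.pi) * env_w, 0, env_w - 1).astype(int)
    env = np.zeros((env_h, env_w, 3), dtype=np.float32)
    env[row, col] = color[valid]                        # nearest-texel splat; gaps stay black
    return env
```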

    Dynamic HDR Environment Capture for Mixed Reality

    Rendering accurate and convincing virtual content into mixed reality (MR) scenes requires detailed illumination information about the real environment. In existing MR systems, this information is often captured using light probes [1, 8, 9, 17, 19–21], or by reconstructing the real environment as a preprocess [31, 38, 54]. We present a method for capturing and updating an HDR radiance map of the real environment and tracking camera motion in real time using a self-contained camera system, without prior knowledge about the real scene. The method is capable of producing plausible results immediately and improving in quality as more of the scene is reconstructed. We demonstrate how this can be used to render convincing virtual objects whose illumination changes dynamically to reflect the changing real environment around them.
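
    How the radiance map is represented and updated is not spelled out in the abstract; the sketch below shows one plausible incremental update, assuming an equirectangular layout, world-space sample directions, and a simple running weighted average. All names and the blending scheme are assumptions.

```python
# Hedged sketch: blend newly observed HDR samples into a running equirectangular
# radiance map so the estimate improves as more of the scene is observed.
import numpy as np

class RadianceMap:
    def __init__(self, height=128, width=256):
        self.radiance = np.zeros((height, width, 3), dtype=np.float32)
        self.weight = np.zeros((height, width), dtype=np.float32)

    def integrate(self, directions, samples, sample_weight=1.0):
        """directions: (N, 3) unit vectors in world space; samples: (N, 3) linear HDR radiance."""
        h, w = self.weight.shape
        theta = np.arccos(np.clip(directions[:, 1], -1.0, 1.0))
        phi = np.arctan2(directions[:, 2], directions[:, 0])
        row = np.clip(theta / np.pi * h, 0, h - 1).astype(int)
        col = np.clip((phi + np.pi) / (2 * np.pi) * w, 0, w - 1).astype(int)
        # Running weighted average per texel; raising sample_weight for newer frames
        # lets the map track dynamic changes in the real environment.
        old_w = self.weight[row, col][:, None]
        self.radiance[row, col] = (
            self.radiance[row, col] * old_w + samples * sample_weight
        ) / (old_w + sample_weight)
        self.weight[row, col] += sample_weight
```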

    Real-time Illumination and Visual Coherence for Photorealistic Augmented/Mixed Reality

    A realistically inserted virtual object in the real-time physical environment is a desirable feature of augmented reality (AR) and, more generally, mixed reality (MR) applications, and it remains a vital research area in computer graphics. The algorithms and methods used for dynamic, real-time illumination measurement, estimation, and rendering of augmented reality scenes are applied in many applications to achieve a realistic perception by humans. Continuous advances in computer vision and machine learning, combined with established computer graphics and image processing methods, have produced a significant range of novel AR/MR techniques. These include methods for light source acquisition through image-based lighting or sampling, for registering and estimating lighting conditions, and for compositing global illumination. This review discusses the pipeline stages in detail, covering the methods and techniques that have contributed to photorealistic rendering, visual coherence, and interactive real-time illumination in AR/MR.
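
    To make one of the pipeline stages named above concrete, here is a minimal sketch of light source acquisition by sampling an environment map: the brightest texels of an equirectangular HDR map are turned into directional lights. The luminance weighting and the number of lights are illustrative assumptions, not a method from any particular surveyed paper.

```python
# Hedged sketch: pick the brightest texels of an equirectangular HDR environment map
# and treat them as directional light sources (crude image-based light sampling).
import numpy as np

def sample_directional_lights(env_map, num_lights=4):
    """env_map: (H, W, 3) linear HDR image; returns a list of (direction, rgb) pairs."""
    h, w, _ = env_map.shape
    luminance = env_map @ np.array([0.2126, 0.7152, 0.0722])
    # Weight by sin(theta) so texels near the poles are not over-represented.
    theta_rows = (np.arange(h) + 0.5) / h * np.pi
    weighted = luminance * np.sin(theta_rows)[:, None]
    best = np.argsort(weighted.ravel())[::-1][:num_lights]
    rows, cols = np.unravel_index(best, (h, w))
    lights = []
    for r, c in zip(rows, cols):
        theta = (r + 0.5) / h * np.pi
        phi = (c + 0.5) / w * 2 * np.pi - np.pi
        direction = np.array([np.sin(theta) * np.cos(phi),
                              np.cos(theta),
                              np.sin(theta) * np.sin(phi)])
        lights.append((direction, env_map[r, c]))
    return lights
```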

    Multi-User 3D Augmented Reality Application for Shared Interaction with Virtual 3D Objects

    This thesis covers the development of a network-supported, multi-user augmented reality application for mobile devices. A presenter can load 3D models dynamically, display them on a 2D augmented reality tracker, and manipulate individual objects. The permitted manipulations are defined in advance through the hierarchy of the 3D object; they comprise translation, rotation, scaling, and changing materials. Any number of spectators can follow the presentation on their own devices from a viewpoint of their choosing. If the model data is not present on a spectator's device, it is transferred automatically from the presenter over the network. The preliminary design considerations are described, followed by the implementation details, the problems encountered, and the chosen solutions. A user study was conducted with the prototype to derive guidelines for choosing between static, dynamic, and combined lighting for particular applications; the study also evaluated the general usability of the app.
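
    The thesis's actual network protocol is not reproduced here; purely as an illustration, a manipulation broadcast from the presenter to the spectators could be as simple as the following message format (all field names are hypothetical).

```python
# Hypothetical sketch of a presenter-to-spectator manipulation message; the fields
# mirror the manipulations named above (translate, rotate, scale, material change).
import json
import time

def make_manipulation_message(object_id, kind, value):
    """kind: 'translate' | 'rotate' | 'scale' | 'material'; value: payload for that kind."""
    return json.dumps({
        "object": object_id,       # node in the predefined hierarchy of the 3D model
        "kind": kind,
        "value": value,            # e.g. [x, y, z] offset, quaternion, scale factor, material name
        "timestamp": time.time(),  # lets spectators apply updates in order
    })

# Example: the presenter rotates one sub-object and broadcasts the message.
msg = make_manipulation_message("model/rotor", "rotate", [0.0, 0.707, 0.0, 0.707])
```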

    Extraction and Integration of Physical Illumination in Dynamic Augmented Reality Environments

    Although current augmented, virtual, and mixed reality (AR/VR/MR) systems deliver advanced and immersive experiences in the entertainment industry across countless media forms, they lack correct direct and indirect illumination modeling, in which virtual objects are rendered under the same lighting conditions as the real environment. Some systems use baked global illumination (GI), pre-recorded textures, and light probes, mostly computed offline, as a substitute for real-time GI. Instead, illumination information can be extracted from the physical scene and used to render the virtual objects into the real world interactively, producing a more realistic final scene in real time. This work approaches the problem of visual coherence in AR by proposing a system that detects the real-world lighting conditions in dynamic scenes and uses the extracted illumination information to render the objects added to the scene. The system comprises several major components that together achieve a more realistic augmented reality result. First, incident light (direct illumination) from the physical scene is detected using computer vision techniques based on the topological structural analysis of 2D images, applied to a live feed from a 360-degree camera mounted on the AR device that captures the entire radiance map; physics-based light polarization eliminates or reduces false-positive lights such as white surfaces, reflections, or glare that would otherwise degrade the detection process. Second, the reflected light (indirect illumination) that bounces between real-world surfaces is simulated and rendered onto the virtual objects so that the real surroundings are reflected in the virtual world. Third, the shading properties of the virtual objects are defined to depict the correct lighting with suitable shadow casting. Fourth, the geometric properties of the real scene, including plane detection, 3D surface reconstruction, and simple meshing, are incorporated into the virtual scene for more realistic depth interactions between real and virtual objects. These components are designed to work simultaneously in real time for photorealistic AR. The system is tested under several lighting conditions to evaluate the accuracy of the results, based on the error between the shadows cast by, and the interactions between, real and virtual objects. For efficiency, the rendering time is compared with previous work, and human perception is further evaluated through a user study. The overall performance of the system is investigated to reduce its cost to a minimum.
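
    The incident-light detection step, described above as topological structural analysis of 2D images from a live 360-degree feed, matches the border-following analysis implemented by OpenCV's findContours; a hedged sketch of such a detector might look as follows (the threshold, minimum blob size, and output format are assumptions).

```python
# Hedged sketch: find bright blobs in an equirectangular 360-degree frame via contour
# analysis and convert their centroids to world directions for use as light sources.
import cv2
import numpy as np

def detect_light_directions(equirect_bgr, threshold=240, min_area=20.0):
    gray = cv2.cvtColor(equirect_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    # OpenCV 4.x signature: returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    h, w = gray.shape
    directions = []
    for contour in contours:
        m = cv2.moments(contour)
        if m["m00"] < min_area:
            continue                              # discard tiny blobs (noise, specular glints)
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        theta = (cy + 0.5) / h * np.pi            # polar angle of the blob centroid
        phi = (cx + 0.5) / w * 2 * np.pi - np.pi  # azimuth in the equirectangular layout
        directions.append(np.array([np.sin(theta) * np.cos(phi),
                                    np.cos(theta),
                                    np.sin(theta) * np.sin(phi)]))
    return directions
```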

    LightSkin: Global Real-Time Illumination for Virtual and Augmented Reality

    In nature, each interaction of light is bound to a global context; thus, every observable natural light phenomenon is the result of global illumination. It is based on manifold laws of absorption, reflection, and refraction, which are mostly too complex to simulate within the real-time constraints of interactive applications. Many interactive applications therefore do not yet simulate these global illumination phenomena, which results in unrealistic and synthetic-looking renderings. This is especially a problem for virtual reality and augmented reality applications, where the user should experience the simulation as realistically as possible. In this thesis we present a novel approach called LightSkin that calculates global illumination phenomena in real time. The approach was developed specifically for virtual reality and augmented reality applications and satisfies several constraints that come with them. As part of the approach we introduce a novel interpolation scheme capable of calculating realistic indirect illumination from a small number of supporting points distributed on model surfaces. Each supporting point creates its own proxy light sources, which compactly represent the whole indirect illumination for that point. These proxy light sources are then linearly interpolated to obtain dense results for the entire visible scene. Due to an efficient GPU implementation, the method is very fast and supports complex, dynamic scenes. Based on the approach, it is possible to simulate diffuse and glossy indirect reflections, soft shadows, and multiple subsurface scattering phenomena without neglecting filigree surface details. Furthermore, the method can be adapted to augmented reality applications, providing mutual global illumination effects between dynamic real and virtual objects using an active RGB-D sensor. In contrast to existing interactive global illumination approaches, our approach supports all kinds of animations, handling them more efficiently without requiring extra calculations or producing disturbing temporal artifacts. This thesis contains all information needed to understand, implement, and evaluate the novel LightSkin approach and also provides a comprehensive overview of the related field of research.
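
    A very rough sketch of the cache-and-proxy-light interpolation idea described above, assuming inverse-distance blending over the k nearest caches and purely diffuse evaluation of the proxy lights; this illustrates the principle only and is not the LightSkin implementation.

```python
# Hedged sketch: indirect light at a surface point, blended from the proxy point
# lights stored at its nearest caches (supporting points) on the model surfaces.
import numpy as np

def indirect_at_point(p, n, cache_positions, cache_proxy_lights, k=4):
    """p, n: shading position and unit normal (3,); cache_positions: (C, 3);
    cache_proxy_lights: per cache, a tuple (positions (L, 3), intensities (L, 3))."""
    dists = np.linalg.norm(cache_positions - p, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-6)
    weights /= weights.sum()                     # inverse-distance interpolation weights
    result = np.zeros(3)
    for wgt, c in zip(weights, nearest):
        light_pos, light_rgb = cache_proxy_lights[c]
        to_light = light_pos - p
        r = np.linalg.norm(to_light, axis=1, keepdims=True)
        cos = np.clip((to_light / r) @ n, 0.0, None)[:, None]
        # Diffuse contribution of this cache's proxy lights, scaled by its blend weight.
        result += wgt * np.sum(light_rgb * cos / (r ** 2), axis=0)
    return result
```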