Polarization-Based Illumination Detection for Coherent Augmented Reality Scene Rendering in Dynamic Environments
Integrating a virtual object into the real world in a perceptually coherent manner, using the physical illumination information of the current environment, remains an open problem. Several researchers have investigated the problem and produced high-quality results; however, their systems relied on the essential assumption that resources were pre-computed and available offline. In this paper, we propose a novel and robust approach that identifies the incident light in the scene using the polarization properties of the light wave, and uses this information to produce a visually coherent augmented reality within a dynamic environment. This approach is part of a complete system with three components that run simultaneously in real time: (i) detection of the incident light angle, (ii) estimation of the reflected light, and (iii) creation of the shading properties required to provide any virtual object with the detected lighting, reflected shadows, and adequate materials. Finally, the system performance is analyzed, showing that our approach reduces the overall computational cost.
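The abstract does not spell out how polarization identifies incident light. A standard approach, which this sketch assumes rather than the paper's exact pipeline, recovers the linear Stokes parameters from intensities captured behind a polarizer at three orientations: pixels with a high degree of linear polarization tend to be reflections or glare and can be rejected as false-positive lights. The function name and three-angle convention here are illustrative assumptions.

```python
import math

def polarization_state(i0, i45, i90):
    """Estimate degree and angle of linear polarization (DoLP, AoLP)
    from intensities captured behind a linear polarizer at 0, 45 and
    90 degrees, via the linear Stokes parameters S0, S1, S2."""
    s0 = i0 + i90                # total intensity
    s1 = i0 - i90                # 0 vs 90 degree preference
    s2 = 2.0 * i45 - s0          # 45 vs 135 degree preference
    dolp = math.sqrt(s1 * s1 + s2 * s2) / s0  # 0 = unpolarized, 1 = fully polarized
    aolp = 0.5 * math.atan2(s2, s1)           # polarization angle in radians
    return dolp, aolp
```

For a fully polarized beam, `polarization_state` returns a DoLP near 1, which a light detector could use to discard glare and specular reflections while keeping genuinely emitting (largely unpolarized) regions.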
Extraction and Integration of Physical Illumination in Dynamic Augmented Reality Environments
Indiana University-Purdue University Indianapolis (IUPUI)
Although current augmented, virtual, and mixed reality (AR/VR/MR) systems offer advanced and immersive experiences in the entertainment industry across countless media forms, these systems suffer from a lack of correct direct and indirect illumination modeling, in which virtual objects should be rendered under the same lighting conditions as the real environment. Some systems use baked global illumination (GI), pre-recorded textures, and light probes, mostly produced offline, to compensate for the absence of real-time GI. Instead, illumination information can be extracted from the physical scene and used to interactively render the virtual objects into the real world, producing a more realistic final scene in real time. This work approaches the problem of visual coherence in AR by proposing a system that detects the real-world lighting conditions in dynamic scenes, then uses the extracted illumination information to render the objects added to the scene. The system comprises several major components that together achieve a more realistic augmented reality outcome. First, the incident light (direct illumination) is detected from the physical scene using computer vision techniques based on the topological structural analysis of 2D images, with a live-feed 360-degree camera mounted on an AR device that captures the entire radiance map. In addition, physics-based light polarization eliminates or reduces false-positive lights, such as white surfaces, reflections, or glare, which negatively affect the light detection process. Second, the reflected light (indirect illumination) that bounces between real-world surfaces is simulated and rendered onto the virtual objects, reflecting their surroundings in the virtual world. Third, the shading characteristics/properties of the virtual object are defined to depict the correct lighting with suitable shadow casting.
Fourth, the geometric properties of the real scene, including plane detection, 3D surface reconstruction, and simple meshing, are incorporated into the virtual scene for more realistic depth interactions between real and virtual objects. These components are developed as methods assumed to work simultaneously in real time for photo-realistic AR. The system is tested under several lighting conditions to evaluate the accuracy of the results, based on the error incurred between the real and virtual objects' cast shadows and interactions. For system efficiency, the rendering time is compared with previous work. Further evaluation of human perception is conducted through a user study. The overall performance of the system is investigated to reduce its cost to a minimum.
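As an illustration of the first component, a minimal stand-in for contour-based light detection (the cited topological structural analysis is the Suzuki-Abe algorithm implemented by OpenCV's findContours) can threshold the brightest region of the equirectangular radiance map and convert its centroid to a 3D light direction. The function name, the relative-threshold parameter, and the y-up equirectangular convention are assumptions for this sketch, not the system's actual code.

```python
import numpy as np

def detect_light_direction(equirect, rel_thresh=0.95):
    """Find the dominant light in an equirectangular radiance map:
    keep pixels within rel_thresh of the maximum, take their centroid,
    and convert that pixel location to a unit direction (y-up)."""
    h, w = equirect.shape
    ys, xs = np.nonzero(equirect >= rel_thresh * equirect.max())
    cy, cx = ys.mean(), xs.mean()            # centroid of the bright blob
    lon = (cx / w - 0.5) * 2.0 * np.pi       # longitude in [-pi, pi]
    lat = (0.5 - cy / h) * np.pi             # latitude in [-pi/2, pi/2]
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])
```

A full contour analysis would additionally separate multiple blobs and reject those flagged as polarized glare; this sketch shows only the pixel-to-direction step.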
An Empirical Evaluation of the Performance of Real-Time Illumination Approaches: Realistic Scenes in Augmented Reality
Although augmented, virtual, and mixed reality (AR/VR/MR) systems have been widely developed, and many of these applications have accomplished significant results, rendering a virtual object under the appropriate illumination model of the real environment is still under investigation. The entertainment industry has presented astounding outcomes in several media forms, albeit with a rendering process that has mostly been done offline. The physical scene contains illumination information which can be sampled and then used to render the virtual objects in real time for a realistic scene. In this paper, we evaluate the accuracy of our previously and currently developed systems that provide real-time dynamic illumination for coherent interactive augmented reality, based on the virtual object's appearance in association with the real world and related criteria. The system achieves this through three simultaneous aspects. (1) The first is to estimate the incident light angle in the real environment using a live-feed 360° camera mounted on an AR device. (2) The second is to simulate the reflected light using two routes: (a) global cube map construction and (b) local sampling. (3) The third is to define the shading properties of the virtual object to depict the correct lighting and suitable shadowing. Finally, the performance efficiency of both routes of the system is examined to reduce the overall cost, and the results are evaluated through shadow observation and a user study.
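For route (a), a cube map caches reflected radiance indexed by direction. The standard direction-to-texel lookup, sketched here following the OpenGL face convention as an illustration rather than the evaluated system's code, selects the face by the dominant axis and projects the other two coordinates onto it:

```python
def cubemap_face_uv(d):
    """Map a 3D direction to (face, u, v) for a cube-map lookup,
    following the OpenGL face order (+x,-x,+y,-y,+z,-z = 0..5).
    The dominant axis selects the face; the remaining coordinates,
    divided by it, give texel coordinates u, v in [0, 1]."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        face, sc, tc, ma = (0, -z, -y, ax) if x > 0 else (1, z, -y, ax)
    elif ay >= az:
        face, sc, tc, ma = (2, x, z, ay) if y > 0 else (3, x, -z, ay)
    else:
        face, sc, tc, ma = (4, x, -y, az) if z > 0 else (5, -x, -y, az)
    return face, 0.5 * (sc / ma + 1.0), 0.5 * (tc / ma + 1.0)
```

The local-sampling route (b) would instead query nearby probe values directly, trading the cube map's global consistency for cheaper per-object updates.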
Real-time Illumination and Visual Coherence for Photorealistic Augmented/Mixed Reality
Realistically inserting a virtual object into a real-time physical environment is a desirable feature of augmented reality (AR) and, more generally, mixed reality (MR) applications. The problem is a vital research area in computer graphics, a field experiencing ongoing discovery. The algorithms and methods for dynamic, real-time illumination measurement, estimation, and rendering of augmented reality scenes are used in many applications to achieve realistic perception by humans. We cannot deny the powerful impact of the continuous development of computer vision and machine learning techniques, accompanied by the original computer graphics and image processing methods, in providing a significant range of novel AR/MR techniques. These include methods for light source acquisition through image-based lighting or sampling, registering and estimating the lighting conditions, and composing global illumination. In this review, we discuss the pipeline stages in detail, elaborating on the methods and techniques that have contributed to photo-realistic rendering, visual coherence, and interactive real-time illumination in AR/MR.
Deformable Beamsplitters: Enhancing Perception with Wide Field of View, Varifocal Augmented Reality Displays
An augmented reality head-mounted display with full environmental awareness could present data in new ways and provide a new type of experience, allowing seamless transitions between real life and virtual content. However, creating a lightweight, optical see-through display providing both focus support and a wide field of view remains a challenge. This dissertation describes a new dynamic optical element, the deformable beamsplitter, and its applications for wide field of view, varifocal, augmented reality displays. Deformable beamsplitters combine a traditional deformable membrane mirror and a beamsplitter into a single element, allowing reflected light to be manipulated by the deforming membrane mirror while transmitted light remains unchanged. This research enables both single-element optical design and correct focus while maintaining a wide field of view, as demonstrated by the description and analysis of two prototype hardware display systems which incorporate deformable beamsplitters. As a user changes the depth of their gaze when looking through these displays, the focus of virtual content can quickly be altered to match the real world by simply modulating air pressure in a chamber behind the deformable beamsplitter, thus ameliorating vergence-accommodation conflict. Two user studies verify the display prototypes' capabilities and show the potential of the display in enhancing human performance at quickly perceiving visual stimuli. This work shows that near-eye displays built with deformable beamsplitters allow for simple optical designs that enable wide field of view and comfortable viewing experiences, with the potential to enhance user perception.
The delta radiance field
The wide availability of mobile devices capable of computing high-fidelity graphics in real time has sparked a renewed interest in the development and research of Augmented Reality applications. Within the large spectrum of mixed real and virtual elements, one specific area is dedicated to producing realistic augmentations with the aim of presenting virtual copies of real existing objects or soon-to-be-produced products. Surprisingly, though, the current state of this area leaves much to be desired: augmented objects in current systems are often presented without any reconstructed lighting whatsoever and therefore give the impression of being glued over a camera image rather than augmenting reality. In light of the advances in the movie industry, which has handled cases of mixed realities from one extreme to the other, it is legitimate to ask why such advances have not fully carried over to Augmented Reality simulations as well.
Generally understood to be real-time applications which reconstruct the spatial relation of real world elements and virtual objects, Augmented Reality has to deal with several uncertainties. Among them, unknown illumination and real scene conditions are the most important. Any kind of reconstruction of real world properties in an ad-hoc manner must likewise be incorporated into an algorithm responsible for shading virtual objects and transferring virtual light to real surfaces in an ad-hoc fashion. The immersiveness of an Augmented Reality simulation is, next to its realism and accuracy, primarily dependent on its responsiveness. Any computation affecting the final image must be computed in real-time. This condition rules out many of the methods used for movie production.
The remaining real-time options face three problems: The shading of virtual surfaces under real natural illumination, the relighting of real surfaces according to the change in illumination due to the introduction of a new object into a scene, and the believable global interaction of real and virtual light. This dissertation presents contributions to answer the problems at hand.
Current state-of-the-art methods build on Differential Rendering techniques to fuse global illumination algorithms into AR environments. This simple approach has a computationally costly downside, which limits the options for believable light transfer even further. This dissertation explores new shading and relighting algorithms built on a mathematical foundation that replaces Differential Rendering. The result not only presents a more efficient competitor to the current state of the art in global illumination relighting, but also advances the field with the ability to simulate effects which have not been demonstrated by contemporary publications until now.
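For context, the Differential Rendering baseline this dissertation replaces (after Debevec) composites two synthetic renders of the reconstructed local scene, one with and one without the virtual object, into the camera image: real surfaces receive only the radiance change the object causes, while object pixels come straight from the full render. The sketch below assumes same-shape float image arrays and a hypothetical function name.

```python
import numpy as np

def differential_render(camera, with_obj, without_obj, obj_mask):
    """Debevec-style differential rendering: add to the camera image the
    radiance change (shadows, interreflections) the virtual object causes
    on the real local scene; object pixels come straight from the full
    render. Inputs are same-shape float arrays with values in [0, 1]."""
    delta = with_obj - without_obj             # what the object changes
    relit = np.clip(camera + delta, 0.0, 1.0)  # real surfaces, relit
    return np.where(obj_mask, with_obj, relit)
```

The cost the abstract refers to is visible here: every frame requires two full global-illumination renders of the local scene, which is exactly what a delta-radiance formulation aims to avoid.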
XR-RF Imaging Enabled by Software-Defined Metasurfaces and Machine Learning: Foundational Vision, Technologies and Challenges
We present a new approach to Extended Reality (XR), denoted as iCOPYWAVES, which seeks to offer naturally low-latency operation and cost-effectiveness, overcoming the critical scalability issues faced by existing solutions. iCOPYWAVES is enabled by emerging Programmable Wireless Environments (PWEs), a recently proposed technology in wireless communications. Empowered by intelligent (meta)surfaces, PWEs transform the wave propagation phenomenon into a software-defined process. We leverage PWEs to (i) create, and then (ii) selectively copy, the scattered RF wavefront of an object from one location in space to another, where a machine learning module, accelerated by FPGAs, translates it to visual input for an XR headset using PWE-driven RF imaging principles (XR-RF). This makes for an XR system whose operation is bounded in the physical layer and, hence, has the prospect of minimal end-to-end latency. Over large distances, RF-to-fiber/fiber-to-RF conversion is employed to provide intermediate connectivity. The paper provides a tutorial on the iCOPYWAVES system architecture and workflow. A proof-of-concept implementation via simulations is provided, demonstrating the reconstruction of challenging objects in iCOPYWAVES-produced computer graphics.
An interest point based illumination condition matching approach to photometric registration within augmented reality worlds
With recent and continued increases in computing power, and advances in the field of computer graphics, realistic augmented reality environments can now offer inexpensive and powerful solutions in a whole range of training, simulation and leisure applications. One key challenge to maintaining convincing augmentation, and therefore user immersion, is ensuring consistent illumination conditions between virtual and real environments, so that objects appear to be lit by the same light sources. This research demonstrates how real-world lighting conditions can be determined from the two-dimensional view of the user. Virtual objects can then be illuminated and virtual shadows cast using these conditions. This new technique uses pairs of interest points from real objects and the shadows that they cast, viewed from a binocular perspective, to determine the position of the illuminant. The research has initially focused on single point light sources in order to show the potential of the technique, and has investigated the relationships between the many parameters of the vision system. Optimal conditions have been discovered by mapping the results of experimentally varying parameters such as field of view, camera angle and pose, image resolution, aspect ratio and illuminant distance. The technique provides increased robustness where higher-resolution imagery is used. Under optimal conditions it is possible to derive the position of a real-world light source with low average error. An investigation of the available literature has revealed that other techniques can be inflexible, slow, or disruptive to scene realism. This technique is able to locate and track a moving illuminant within an unconstrained, dynamic world without the use of artificial calibration objects that would disrupt scene realism. The technique operates in real time, as the new algorithms are of low computational complexity, allowing high frame rates to be maintained within augmented reality applications.
Illuminant updates occur several times a second on an average-to-high-end desktop computer. Future work will investigate the automatic identification and selection of pairs of interest points, and the exploration of global illuminant conditions. The latter will include an analysis of more complex scenes and the consideration of multiple and varied light sources.
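The geometric core of the technique can be sketched under the simplifying assumption that the 3D positions of each interest point and its shadow have already been recovered from the binocular views: the illuminant must lie on the ray from each shadow point through its occluding object point, so two or more non-parallel rays locate it by least-squares intersection. The function name and input format are illustrative, not the thesis's actual implementation.

```python
import numpy as np

def light_from_shadow_pairs(pairs):
    """Estimate a point-light position from (object_point, shadow_point)
    pairs: the light must lie on each ray from the shadow point through
    the occluding object point, so solve for the least-squares nearest
    point to all of those rays (normal equations of the line distances)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, s in pairs:
        p, s = np.asarray(p, float), np.asarray(s, float)
        d = (p - s) / np.linalg.norm(p - s)  # unit direction shadow -> object
        M = np.eye(3) - np.outer(d, d)       # projects off the ray
        A += M
        b += M @ s
    return np.linalg.solve(A, b)             # needs >= 2 non-parallel rays
```

With noisy interest points the solve degrades gracefully, which is consistent with the reported sensitivity to image resolution: sharper point localization tightens each ray and therefore the intersection.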
A psychophysical investigation of global illumination algorithms used in augmented reality
Global illumination rendering algorithms are capable of producing visually realistic images. However, this typically comes at a large computational expense. The overarching goal of this research was to compare different rendering solutions in order to understand why some yield better results when applied to rendering synthetic objects into real photographs. As rendered images are ultimately viewed by human observers, it was logical to use psychophysics to investigate these differences. A psychophysical experiment was conducted in which the composite images were judged for accuracy against the original photograph. In addition, iCAM, an image color appearance model, was used to calculate image differences for the same set of images. In general, it was determined that any full global illumination solution is better than a direct-illumination-only solution. It was also discovered that the full rendering, with all of its artifacts, is not necessarily an indicator of judged accuracy for the final composite image. Finally, initial results show promise in using iCAM to predict a relationship similar to the psychophysics, which could eventually be used in the rendering loop to achieve photo-realism.