    Dynamic Illumination for Augmented Reality with Real-Time Interaction

    Current augmented and mixed reality systems lack correct illumination modeling, in which virtual objects are rendered under the same lighting conditions as the real environment. While the entertainment industry achieves astonishing results across multiple media forms, the process is mostly carried out offline. In contrast, illumination information extracted from the physical scene can be used to render the virtual objects interactively, resulting in more realistic output in real time. In this paper, we present a method that detects the physical illumination of a dynamic scene and then uses the extracted illumination to render the virtual objects added to that scene. The method has three steps that run concurrently in real time. The first is the estimation of direct illumination (incident light) from the physical scene using computer vision techniques on a 360° live camera feed connected to the AR device. The second is the simulation of indirect illumination (light reflected from real-world surfaces onto the virtual objects) using region capture of a 2D texture from the AR camera view. The third is rendering the virtual objects with proper lighting and shadowing characteristics using a shader language over multiple passes. Finally, we tested our work under multiple lighting conditions, evaluating accuracy by whether the shadows cast by the virtual objects are consistent with those cast by the real objects, all at a reduced performance cost.
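
    The abstract does not spell out how the first step (direct illumination estimation) works; one plausible realization is a brightest-region heuristic on an equirectangular frame from the 360° camera. The sketch below assumes Python with OpenCV and NumPy; the function name, file name, and the heuristic itself are illustrative, not the authors' implementation.

    import cv2
    import numpy as np

    def estimate_light_direction(equirect_bgr):
        """Unit 3D direction toward the brightest region of the panorama."""
        # Grayscale plus a wide blur so a single hot pixel does not dominate.
        gray = cv2.cvtColor(equirect_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
        gray = cv2.GaussianBlur(gray, (31, 31), 0)
        h, w = gray.shape
        _, _, _, (x, y) = cv2.minMaxLoc(gray)   # location of the brightest pixel
        # Equirectangular pixel coordinates map linearly to spherical angles.
        theta = (x / w) * 2.0 * np.pi           # azimuth in [0, 2*pi)
        phi = (y / h) * np.pi                   # inclination in [0, pi]
        # Spherical to Cartesian, y-up convention.
        d = np.array([np.sin(phi) * np.cos(theta),
                      np.cos(phi),
                      np.sin(phi) * np.sin(theta)])
        return d / np.linalg.norm(d)

    frame = cv2.imread("panorama.jpg")          # one frame of the 360° feed
    print(estimate_light_direction(frame))

    In a full system, this direction would drive a directional light and the shadow-casting shader passes the abstract describes as its third step.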

    Fidelity metrics for virtual environment simulations based on spatial memory awareness states

    This paper describes a methodology based on human judgments of memory awareness states for assessing the simulation fidelity of a virtual environment (VE) in relation to its real-scene counterpart. To demonstrate the distinction between task-performance-based approaches and additional human evaluation of cognitive awareness states, a photorealistic VE was created. The resulting scenes, displayed on a head-mounted display (HMD) with or without head tracking or on a desktop monitor, were then compared to the real-world task situation they represented, investigating spatial memory after exposure. Participants reported how they arrived at their spatial recollections by selecting one of four awareness states after retrieval, in an initial test and in a retention test one week after exposure to the environment. These states reflected the level of visual mental imagery involved during retrieval and the familiarity of the recollection, and also included guesses, even informed ones. Experimental results revealed variations in the distribution of participants’ awareness states across conditions while, in certain cases, task performance failed to reveal any. Experimental conditions that incorporated head tracking were not associated with visually induced recollections. In general, simulating task performance does not necessarily simulate the awareness states involved in completing a memory task. The premise of this research is thus how tasks are achieved, rather than only what is achieved. The extent to which judgments of human memory recall, memory awareness states, and presence in the physical and virtual environments are similar provides a fidelity metric for the simulation in question.
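
    As a rough illustration of how such a fidelity metric could be quantified, the sketch below compares the distribution of the four awareness states across two display conditions with a chi-squared test (Python with SciPy). The counts are invented for demonstration and do not come from the paper.

    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: conditions (e.g., HMD with head tracking vs. desktop monitor).
    # Columns: response counts per awareness state (visual mental imagery,
    # familiarity, informed guess, pure guess). All counts are hypothetical.
    counts = np.array([[34, 21, 8, 5],
                       [19, 25, 14, 10]])

    chi2, p, dof, expected = chi2_contingency(counts)
    print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
    # A significant difference would indicate that the conditions induce
    # different qualities of recollection even when raw task performance
    # (e.g., percent of correct recollections) looks the same.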

    Play and Learn: Using Video Games to Train Computer Vision Models

    Video games are a compelling source of annotated data, as they can readily provide fine-grained ground truth for diverse tasks. However, it is not clear whether synthetically generated data resembles real-world images closely enough to improve the performance of computer vision models in practice. We present experiments assessing how well systems trained on synthetic RGB images extracted from a video game perform on real-world data. We collected over 60,000 synthetic samples from a modern video game under conditions similar to those of the real-world CamVid and Cityscapes datasets. We provide several experiments demonstrating that the synthetically generated RGB images can be used to improve the performance of deep neural networks on both image segmentation and depth estimation. These results show that a convolutional network trained on synthetic data achieves a test error similar to that of a network trained on real-world data for dense image classification. Furthermore, the synthetically generated RGB images can provide results similar to or better than the real-world datasets if a simple domain adaptation technique is applied. Our results suggest that collaboration with game developers for an accessible interface to gather data is potentially a fruitful direction for future work in computer vision. (To appear in the British Machine Vision Conference (BMVC), September 2016.)
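
    One simple domain adaptation recipe consistent with this result is pre-training on the synthetic game frames and then fine-tuning on a small real set; whether this matches the paper's exact technique is an assumption. The sketch below uses PyTorch; make_segmentation_net, synthetic_loader, and real_loader are hypothetical placeholders.

    import torch
    import torch.nn as nn

    def train(model, loader, epochs, lr):
        opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        loss_fn = nn.CrossEntropyLoss(ignore_index=255)  # 255 = unlabeled pixels
        model.train()
        for _ in range(epochs):
            for images, labels in loader:
                opt.zero_grad()
                loss = loss_fn(model(images), labels)    # per-pixel class scores
                loss.backward()
                opt.step()

    # synthetic_loader / real_loader: DataLoaders of (image, label_map) pairs
    # with a shared label set (e.g., CamVid-style classes); both hypothetical.
    model = make_segmentation_net()                      # any dense-prediction net
    train(model, synthetic_loader, epochs=30, lr=1e-2)   # pre-train on game data
    train(model, real_loader, epochs=5, lr=1e-3)         # fine-tune on real data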

    Real-time cartoon-like stylization of AR video streams on the GPU

    The ultimate goal of many augmented reality applications is to immerse the user in the augmented scene, which is enriched with virtual models. To achieve this immersion, it is necessary to create the visual impression that the graphical objects are a natural part of the user’s environment. Producing this effect with conventional computer graphics algorithms is a complex task: various rendering artifacts in the three-dimensional graphics create a noticeable visual discrepancy between the real background image and the virtual objects. We have recently proposed a novel approach to generating an augmented video stream in which the output images are a non-photorealistic reproduction of the augmented environment. Special stylization methods are applied to both the background camera image and the virtual objects, reducing the visual realism of both the graphical foreground and the real background so that they are less distinguishable from each other. Here, we present a new method for the cartoon-like stylization of augmented reality images, which uses a novel post-processing filter for cartoon-like color segmentation and high-contrast silhouettes. To make fast post-processing of the rendered images possible, the programmability of modern graphics hardware is exploited. We describe an implementation of the algorithm using the OpenGL Shading Language. The system is capable of generating a stylized augmented video stream of high visual quality at real-time frame rates. As an example application, we demonstrate the visualization of dinosaur bone datasets in stylized augmented reality.
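
    The paper implements its stylization as GLSL fragment-shader passes on the GPU; as a rough CPU-side approximation of the same pipeline (edge-preserving color flattening plus high-contrast silhouettes), the sketch below uses Python with OpenCV. All filter choices and parameters are illustrative.

    import cv2

    def cartoonize(frame_bgr, levels=8):
        # Pass 1: edge-preserving smoothing, approximating color segmentation.
        smooth = cv2.bilateralFilter(frame_bgr, 9, 75, 75)
        # Pass 2: quantize colors into flat, cartoon-like regions.
        step = 256 // levels
        quantized = (smooth // step) * step
        # Pass 3: high-contrast silhouettes from intensity edges.
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.adaptiveThreshold(cv2.medianBlur(gray, 5), 255,
                                      cv2.ADAPTIVE_THRESH_MEAN_C,
                                      cv2.THRESH_BINARY, 9, 2)
        # Composite: keep colors where there is no edge, black elsewhere.
        return cv2.bitwise_and(quantized, quantized, mask=edges)

    cap = cv2.VideoCapture(0)   # stand-in for the AR camera stream
    ok, frame = cap.read()
    if ok:
        cv2.imwrite("stylized.png", cartoonize(frame))
    cap.release()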