
    Modeling Color Appearance in Augmented Reality

    Augmented reality (AR) is a developing technology that is expected to become the next interface between humans and computers. One of the most common designs of AR devices is the optical see-through head-mounted display (HMD). In this design, the virtual content presented on the displays embedded inside the device is optically superimposed on the real world, which results in the virtual content appearing transparent. Color appearance in see-through AR designs is a complicated subject because it depends on many factors, including the ambient light, the color appearance of the virtual content, and the color appearance of the real background. As with display technology, controlling the color appearance of content is vital for many AR applications. In this research, color appearance in the see-through design of an augmented reality environment is studied and modeled. Using a bench-top optical mixing apparatus as an AR simulator, objective measurements of mixed colors in AR were performed to study light behavior in the AR environment. Psychophysical color matching experiments were performed to understand color perception in AR. These experiments were performed first for simple 2D stimuli with a single color as both background and foreground, and later for more visually complex stimuli that better represent real content presented in AR. Color perception in the AR environment was compared to color perception on a display, which showed that the two differ. The applicability of CAM16, one of the most comprehensive current color appearance models, to the AR environment was evaluated. The results showed that CAM16 is not accurate in predicting color appearance in the AR environment. To model color appearance in the AR environment, four approaches were developed using modifications in tristimulus and color appearance spaces; the best performance was found for Approach 2, which predicts the tristimulus values of the mixed content from the background and foreground color
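
    As a rough illustration of the idea behind Approach 2, the minimal sketch below combines the display (foreground) and transmitted real-world (background) light by simple additive superposition of CIE XYZ tristimulus values; the function name and the transmittance parameter are illustrative assumptions, and the paper's actual model may use fitted weights rather than a plain sum.

    # Minimal sketch of additive color mixing in optical see-through AR.
    # Assumes the perceived stimulus is the sum of the display (foreground)
    # light and the transmitted real-world (background) light; the paper's
    # Approach 2 may use fitted weights rather than a plain sum.
    import numpy as np

    def mix_tristimulus(xyz_foreground, xyz_background, transmittance=1.0):
        """Combine foreground and background CIE XYZ values.

        transmittance models how much real-world light the optical combiner
        lets through (hypothetical parameter, 0..1).
        """
        fg = np.asarray(xyz_foreground, dtype=float)
        bg = np.asarray(xyz_background, dtype=float)
        return fg + transmittance * bg

    # Example: a mid-gray virtual patch superimposed on a dim reddish background.
    xyz_virtual = [20.0, 21.0, 22.0]
    xyz_real = [8.0, 6.0, 4.0]
    print(mix_tristimulus(xyz_virtual, xyz_real, transmittance=0.8))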

    Augmented Reality Meets Computer Vision : Efficient Data Generation for Urban Driving Scenes

    The success of deep learning in computer vision is based on the availability of large annotated datasets. To lower the need for hand-labeled images, virtually rendered 3D worlds have recently gained popularity. Creating realistic 3D content is challenging on its own and requires significant human effort. In this work, we propose an alternative paradigm that combines real and synthetic data for learning semantic instance segmentation and object detection models. Exploiting the fact that not all aspects of the scene are equally important for this task, we propose to augment real-world imagery with virtual objects of the target category. Capturing real-world images at large scale is easy and cheap, and directly provides real background appearances without the need to create complex 3D models of the environment. We present an efficient procedure to augment real images with virtual objects. This allows us to create realistic composite images which exhibit both realistic background appearance and a large number of complex object arrangements. In contrast to modeling complete 3D environments, our augmentation approach requires only a few user interactions in combination with 3D shapes of the target object. Through extensive experimentation, we determine the set of parameters that produces augmented data which maximally enhances the performance of instance segmentation models. Further, we demonstrate the utility of our approach by training standard deep models for semantic instance segmentation and object detection of cars in outdoor driving scenes. We test the models trained on our augmented data on the KITTI 2015 dataset, which we have annotated with pixel-accurate ground truth, and on the Cityscapes dataset. Our experiments demonstrate that models trained on augmented imagery generalize better than those trained on synthetic data or on a limited amount of annotated real data.
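
    As a rough illustration of the augmentation idea, the sketch below alpha-composites a rendered object (with a transparent background) onto a real photograph and derives a pixel-accurate instance mask from the object's alpha channel; the file names, placement logic, and function name are hypothetical and this is not the authors' actual pipeline.

    # Minimal sketch: paste a rendered object (RGBA) onto a real photo and keep
    # its alpha channel as a binary instance mask for segmentation training.
    import numpy as np
    from PIL import Image

    def composite_object(real_img_path, rendered_obj_path, top_left=(100, 200)):
        background = Image.open(real_img_path).convert("RGBA")
        obj = Image.open(rendered_obj_path).convert("RGBA")

        # Paste the rendered object using its own alpha as the blend mask.
        composite = background.copy()
        composite.paste(obj, top_left, mask=obj)

        # Binary instance mask (assumes the object fits fully inside the frame).
        mask = np.zeros(background.size[::-1], dtype=np.uint8)
        alpha = np.array(obj)[:, :, 3] > 0
        x, y = top_left
        mask[y:y + obj.height, x:x + obj.width][alpha] = 1

        return composite.convert("RGB"), mask

    # Usage (paths are placeholders):
    # image, mask = composite_object("street.jpg", "rendered_car.png")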

    Dynamic Illumination for Augmented Reality with Real-Time Interaction

    Current augmented and mixed reality systems suffer from a lack of correct illumination modeling, in which the virtual objects would be rendered under the same lighting conditions as the real environment. While the entertainment industry achieves astonishing results across multiple media forms, the procedure is mostly accomplished offline. The illumination information extracted from the physical scene is used to interactively render the virtual objects, which results in a more realistic output in real time. In this paper, we present a method that detects the physical illumination of a dynamic scene and then uses the extracted illumination to render the virtual objects added to the scene. The method has three steps that are assumed to run concurrently in real time. The first is the estimation of the direct illumination (incident light) from the physical scene using computer vision techniques through a 360° live-feed camera connected to the AR device. The second is the simulation of indirect illumination (reflected light) from real-world surfaces onto the rendered virtual objects using region capture of a 2D texture from the AR camera view. The third is defining the virtual objects with proper lighting and shadowing characteristics using a shader language through multiple passes. Finally, we tested our work under multiple lighting conditions to evaluate the accuracy of the results, based on whether the shadows cast by the virtual objects are consistent with the shadows cast by the real objects, with a reduced performance cost.
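
    To illustrate the first step (estimating direct illumination from a 360° camera feed), the sketch below locates the brightest pixel of an equirectangular luminance frame and converts it to a world-space light direction; this is a simplified stand-in for the paper's computer vision technique, and the function name is hypothetical.

    # Minimal sketch of direct-light estimation from a 360-degree camera:
    # find the brightest pixel of an equirectangular frame and convert its
    # (longitude, latitude) to a unit light-direction vector (y up).
    import numpy as np

    def estimate_light_direction(equirect_gray):
        """equirect_gray: 2D numpy array (H x W) of luminance values."""
        h, w = equirect_gray.shape
        v, u = np.unravel_index(np.argmax(equirect_gray), equirect_gray.shape)

        lon = (u / w) * 2.0 * np.pi - np.pi          # -pi .. pi
        lat = np.pi / 2.0 - (v / h) * np.pi          # +pi/2 (top) .. -pi/2 (bottom)

        # Spherical to Cartesian.
        direction = np.array([
            np.cos(lat) * np.sin(lon),
            np.sin(lat),
            np.cos(lat) * np.cos(lon),
        ])
        return direction / np.linalg.norm(direction)

    # Example with a synthetic frame whose brightest spot is near the zenith.
    frame = np.zeros((256, 512))
    frame[10, 256] = 1.0
    print(estimate_light_direction(frame))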

    A framework for realistic 3D tele-immersion

    Meeting, socializing and conversing online with a group of people using teleconferencing systems is still quite different from the experience of meeting face to face. We are abruptly aware that we are online and that the people we are engaging with are not in close proximity, analogous to how talking on the telephone does not replicate the experience of talking in person. Several causes for these differences have been identified, and we propose inspiring and innovative solutions to these hurdles in an attempt to provide a more realistic, believable and engaging online conversational experience. We present the distributed and scalable framework REVERIE, which provides a balanced mix of these solutions. Applications built on top of the REVERIE framework will be able to provide interactive, immersive, photo-realistic experiences to a multitude of users, experiences that will feel much closer to face-to-face meetings than those offered by conventional teleconferencing systems.

    Teegi: Tangible EEG Interface

    We introduce Teegi, a Tangible ElectroEncephaloGraphy (EEG) Interface that enables novice users to learn about something as complex as brain signals in an easy, engaging and informative way. To this end, we have designed a new system based on a unique combination of spatial augmented reality, tangible interaction and real-time neurotechnologies. With Teegi, a user can visualize and analyze his or her own brain activity in real time, on a tangible character that can be easily manipulated and with which it is possible to interact. An exploration study has shown that interacting with Teegi seems to be easy, motivating, reliable and informative. Overall, this suggests that Teegi is a promising and relevant training and mediation tool for the general public. Comment: to appear in UIST - ACM User Interface Software and Technology Symposium, Oct 2014, Honolulu, United States.

    Simulation and Visualization of Thermal Metaphor in a Virtual Environment for Thermal Building Assessment

    The current application of the design process through energy efficiency in virtual reality (VR) systems is limited mostly to building performance predictions, owing to issues with the data formats and the workflow used for 3D modeling, thermal calculation and VR visualization. The importance of energy efficiency and the integration of advances in building design and VR technology have led this research to focus on thermal simulation results visualized in a virtual environment to optimize building design, particularly concerning heritage buildings. The emphasis is on the representation of the thermal data of a room simulated in a virtual environment (VE), in order to improve the ways in which thermal analysis data are presented to the building stakeholder, with the aim of increasing accuracy and efficiency. The approach is to present a more immersive thermal simulation and to project the calculation results on projective displays, particularly in an immersion room (CAVE-like). The main idea of the experiment is to provide an instrument for visualizing and interacting with the thermal conditions in a virtual building. Thus the user can immerse, interact, and perceive the impact of the modifications generated by the system with regard to the thermal simulation results. The research has demonstrated that it is possible to improve the representation and interpretation of building performance data, particularly thermal results, using visualization techniques. Funding: Direktorat Riset dan Pengabdian Masyarakat (DRPM) Universitas Indonesia Research Grant No. 2191/H2.R12/HKP.05.00/201
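
    As a rough illustration of the thermal metaphor, the sketch below maps simulated surface temperatures to a blue-to-red color ramp that a VR engine could apply as vertex or texture colors; the temperature range, the ramp, and the function name are illustrative assumptions, not values or methods from the study.

    # Minimal sketch: map simulated temperatures to an RGB gradient
    # (blue = cold, red = hot) for visualization in a virtual environment.
    import numpy as np

    def temperature_to_rgb(temps_celsius, t_min=15.0, t_max=30.0):
        """Linearly map temperatures to a blue-to-red color ramp (values in 0..1)."""
        t = np.clip((np.asarray(temps_celsius, dtype=float) - t_min) / (t_max - t_min), 0.0, 1.0)
        r = t
        g = 1.0 - np.abs(2.0 * t - 1.0)   # green peaks at mid-range temperatures
        b = 1.0 - t
        return np.stack([r, g, b], axis=-1)

    # Example: wall surface temperatures from a thermal simulation.
    print(temperature_to_rgb([16.0, 22.5, 29.0]))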
