
    Fusing Multimedia Data Into Dynamic Virtual Environments

    In spite of the dramatic growth of virtual and augmented reality (VR and AR) technology, content creation for immersive and dynamic virtual environments remains a significant challenge. In this dissertation, we present our research in fusing multimedia data, including text, photos, panoramas, and multi-view videos, to create rich and compelling virtual environments. First, we present Social Street View, which renders geo-tagged social media in its natural geo-spatial context provided by 360° panoramas. Our system takes visual saliency into account and uses maximal Poisson-disc placement with spatiotemporal filters to render social multimedia in an immersive setting. We also present a novel GPU-driven pipeline for saliency computation in 360° panoramas using spherical harmonics (SH); our spherical residual model can be applied to virtual cinematography in 360° videos. We further present Geollery, a mixed-reality platform that renders an interactive mirrored world in real time with three-dimensional (3D) buildings, user-generated content, and geo-tagged social media. Our user study identified several use cases for these systems, including immersive social storytelling, cultural exploration, and crowd-sourced tourism. We next present Video Fields, a web-based interactive system to create, calibrate, and render dynamic videos overlaid on 3D scenes. Our system renders dynamic entities from multiple videos using early and deferred texture sampling, and can be used for immersive surveillance in virtual environments. Furthermore, we present the VRSurus and ARCrypt projects, which explore applications of gesture recognition, haptic feedback, and visual cryptography for virtual and augmented reality. Finally, we present our work on Montage4D, a real-time system for seamlessly fusing multi-view video textures with dynamic meshes. We use geodesics on meshes with view-dependent rendering to mitigate spatial occlusion seams while maintaining temporal consistency. Our experiments show significant enhancement in rendering quality, especially for salient regions such as faces. We believe that Social Street View, Geollery, Video Fields, and Montage4D will greatly facilitate applications such as virtual tourism, immersive telepresence, and remote education.
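The maximal Poisson-disc placement mentioned above can be illustrated with the classic dart-throwing variant. The sketch below is a simplified CPU version with hypothetical function and parameter names; it is not the dissertation's GPU-driven, saliency-aware pipeline, only the underlying sampling idea: accept a candidate position only if it keeps a minimum distance from all accepted positions, and stop once candidates keep failing (which makes the placement approximately maximal).

```python
import math
import random

def poisson_disc_darts(width, height, radius, max_failures=2000):
    """Greedy dart throwing: accept a random candidate only if it is
    at least `radius` away from every previously accepted point.
    Stops after `max_failures` consecutive rejections, yielding an
    approximately maximal placement for the given radius."""
    points = []
    failures = 0
    while failures < max_failures:
        x, y = random.uniform(0, width), random.uniform(0, height)
        if all(math.hypot(x - px, y - py) >= radius for px, py in points):
            points.append((x, y))
            failures = 0  # reset the give-up counter on success
        else:
            failures += 1
    return points

# e.g. place media thumbnails over a 1920x1080 panorama region,
# keeping them at least 120 px apart
placed = poisson_disc_darts(1920, 1080, 120)
```

A spatial grid (cell size radius/sqrt(2)) turns the all-pairs distance check into a constant-time lookup, which is what makes GPU variants of this sampling practical.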

    An Augmented Reality system for the treatment of phobia to small animals viewed via an optical see-through HMD. Comparison with a similar system viewed via a video see-through

    This article presents an optical see-through (OST) Augmented Reality system for the treatment of phobia to small animals. The technical characteristics of the OST system are described, and a comparative study of the sense of presence and anxiety in a non-phobic population (24 participants) using the OST and an equivalent video see-through (VST) system is presented. The results indicate that when all participants are analyzed, the VST system induces a greater sense of presence than the OST system; when only the participants who had more fear are analyzed, the two systems induce a similar sense of presence. For the anxiety level, the two systems provoke similar and significant anxiety during the experiment. © Taylor & Francis Group, LLC.
    Juan, M.; Calatrava, J. (2011). An Augmented Reality system for the treatment of phobia to small animals viewed via an optical see-through HMD. Comparison with a similar system viewed via a video see-through. International Journal of Human-Computer Interaction, 27(5), 436-449. doi:10.1080/10447318.2011.552059

    Automatic generation of affective 3D virtual environments from 2D images

    Today, a wide range of domains, including movie and video game production, virtual reality simulations, and augmented reality applications, makes massive use of 3D computer-generated assets. Although many graphics suites already offer a large set of tools and functionalities to manage the creation of such contents, they are usually characterized by a steep learning curve. This aspect can make it difficult for non-expert users to create 3D scenes for, e.g., sharing their ideas or for prototyping purposes. This paper presents a computer-based system that is able to generate a possible reconstruction of a 3D scene depicted in a 2D image, by inferring the objects, materials, textures, lights, and camera required for rendering. The integration of the proposed system into a well-known graphics suite enables further refinement of the generated scene using traditional techniques. Moreover, the system allows users to explore the scene in an immersive virtual environment for a better understanding of the current objects' layout, and provides the possibility to convey emotions through specific aspects of the generated scene. The paper also reports the results of a user study that was carried out to evaluate the usability of the proposed system from different perspectives.

    Identifying immersive environments’ most relevant research topics: an instrument to query researchers and practitioners

    This paper provides an instrument for ascertaining researchers' perspectives on the relative relevance of technological challenges facing immersive environments in view of their adoption in learning contexts, along three dimensions: access, content production, and deployment. It describes the instrument's theoretical grounding and expert-review process, starting from a set of previously identified challenges and expert feedback cycles. The paper details the motivation, setup, and methods employed, as well as the issues detected in the cycles and how they were addressed while developing the instrument. As a research instrument, it aims to be employed across diverse communities of research and practice, helping direct research efforts and hence contributing to wider use of immersive environments in learning, and possibly to the development of new and more adequate systems. The work presented herein has been partially funded under the European H2020 program H2020-ICT-2015, BEACONING project, grant agreement nr. 687676.

    Augmented reality-based visual-haptic modeling for thoracoscopic surgery training systems

    Background: Compared with traditional thoracotomy, video-assisted thoracoscopic surgery (VATS) involves less trauma, faster recovery, and higher patient compliance, but places higher demands on surgeons. Virtual surgery training simulation systems are important and have been widely used in Europe and America. Augmented reality (AR) in surgical training simulation systems significantly improves the training effect of virtual surgical training, although AR technology is still in its initial stage. Mixed reality has gained increased attention in technology-driven modern medicine but has yet to be used in everyday practice. Methods: This study proposed an immersive AR lobectomy module within a thoracoscopic surgery training system, using visual and haptic modeling to study the potential benefits of this critical technology. The work included immersive AR visual rendering based on the cluster-based extended position-based dynamics algorithm for soft tissue physical modeling. Furthermore, we designed an AR haptic rendering system whose model architecture consists of multi-touch interaction points, including kinesthetic and pressure-sensitive points. Finally, based on the above theoretical research, we developed an AR interactive VATS surgical training platform. Results: Twenty-four volunteers were recruited from the First People's Hospital of Yunnan Province to evaluate the VATS training system. Face, content, and construct validation methods were used to assess the tactile sense, visual sense, scene authenticity, and simulator performance. Conclusions: The results of our construct validation demonstrate that the simulator is useful in improving novices' surgical skills, which can be retained after a certain period of time. The AR-based video-assisted thoracoscopic system developed in this study is effective and can be used as a training device to help novices develop thoracoscopic skills.
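    The soft-tissue model above builds on position-based dynamics (PBD). As a minimal sketch, the code below implements plain PBD distance-constraint projection (Müller et al.), a simplified relative of the cluster-based extended PBD the system uses; the function and parameter names are hypothetical, not taken from the paper.

```python
import numpy as np

def pbd_step(positions, velocities, edges, rest_lengths, inv_masses,
             dt=0.016, iterations=8, gravity=(0.0, -9.81, 0.0)):
    """One step of position-based dynamics: predict positions under
    gravity, iteratively project distance constraints onto the
    predictions, then recover velocities from the corrected positions."""
    w = np.asarray(inv_masses, dtype=float)
    g = np.asarray(gravity, dtype=float)
    movable = (w > 0.0).astype(float)[:, None]   # pinned particles stay put
    pred = positions + movable * dt * (velocities + dt * g)
    for _ in range(iterations):
        for (i, j), rest in zip(edges, rest_lengths):
            d = pred[i] - pred[j]
            dist = np.linalg.norm(d)
            wsum = w[i] + w[j]
            if dist < 1e-9 or wsum == 0.0:
                continue
            # move both endpoints along the edge until it reaches rest length
            corr = (dist - rest) / (dist * wsum) * d
            pred[i] -= w[i] * corr               # heavier particles move less
            pred[j] += w[j] * corr
    new_velocities = (pred - positions) / dt
    return pred, new_velocities

# Two-particle example: particle 0 is pinned (inverse mass 0), particle 1
# hangs on an edge stretched past its rest length of 1.0.
pos = np.array([[0.0, 0.0, 0.0], [0.0, -2.0, 0.0]])
vel = np.zeros_like(pos)
pos, vel = pbd_step(pos, vel, edges=[(0, 1)], rest_lengths=[1.0],
                    inv_masses=[0.0, 1.0])
```

    Because constraints are projected on positions rather than integrated as forces, the solver stays unconditionally stable at interactive rates, which is why PBD variants are popular for haptic-rate soft-tissue simulation.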