
    Collaboration in Augmented Reality: How to establish coordination and joint attention?

    Schnier C, Pitsch K, Dierker A, Hermann T. Collaboration in Augmented Reality: How to establish coordination and joint attention? In: Boedker S, Bouvin NO, Lutters W, Wulf V, Ciolfi L, eds. Proceedings of the 12th European Conference on Computer Supported Cooperative Work (ECSCW 2011). Springer-Verlag London; 2011: 405-416.

    We present an initial investigation of a semi-experimental setting in which an HMD-based AR system was used for real-time collaboration in a task-oriented scenario (the design of a museum exhibition). Analysis points out the specific conditions of interacting in an AR environment and focuses on one practical problem the participants face in coordinating their interaction: how to establish joint attention towards the same object or referent. The analysis offers insights into how a pair of users begins to familiarize themselves with the environment, into the limitations and opportunities of the setting, and into how they establish new routines, e.g. for solving the 'joint attention' problem.

    An Augmented Reality system for the treatment of phobia to small animals viewed via an optical see-through HMD. Comparison with a similar system viewed via a video see-through

    This article presents an optical see-through (OST) Augmented Reality system for the treatment of phobia to small animals. The technical characteristics of the OST system are described, and a comparative study of the sense of presence and anxiety in a non-phobic population (24 participants) using the OST system and an equivalent video see-through (VST) system is presented. The results indicate that when all participants are analyzed, the VST system induces a greater sense of presence than the OST system. When only the participants who had more fear are analyzed, the two systems induce a similar sense of presence. For the anxiety level, the two systems provoke similar and significant anxiety during the experiment. © Taylor & Francis Group, LLC.

    Juan, M.; Calatrava, J. (2011). An Augmented Reality system for the treatment of phobia to small animals viewed via an optical see-through HMD. Comparison with a similar system viewed via a video see-through. International Journal of Human-Computer Interaction. 27(5):436-449. doi:10.1080/10447318.2011.552059

    Videos in Context for Telecommunication and Spatial Browsing

    The research presented in this thesis explores the use of videos embedded in panoramic imagery to transmit spatial and temporal information describing remote environments and their dynamics. Virtual environments (VEs) through which users can explore remote locations are rapidly emerging as a popular medium of presence and remote collaboration. However, capturing visual representations of locations to be used in VEs is usually a tedious process that requires either manual modelling of environments or the employment of specific hardware. Capturing environment dynamics is not straightforward either, and it is usually performed through specific tracking hardware. Similarly, browsing large unstructured video collections with available tools is difficult, as the abundance of spatial and temporal information makes them hard to comprehend. On a spectrum between 3D VEs and 2D images, panoramas lie in between: they offer the accessibility of 2D images while preserving the surround representation of 3D virtual environments. For this reason, panoramas are an attractive basis for videoconferencing and browsing tools, as they can relate several videos temporally and spatially. This research explores methods to acquire, fuse, render and stream data coming from heterogeneous cameras, with the help of panoramic imagery. Three distinct but interrelated questions are addressed. First, the thesis considers how spatially localised video can be used to increase the spatial information transmitted during video-mediated communication, and whether this improves the quality of communication. Second, the research asks whether videos in panoramic context can be used to convey spatial and temporal information about a remote place and the dynamics within it, and whether this improves users' performance in tasks that require spatio-temporal thinking. Finally, the thesis considers whether display type has an impact on reasoning about events within videos in panoramic context. 
These research questions were investigated over three experiments, covering scenarios common to computer-supported cooperative work and video browsing. To support the investigation, two distinct video+context systems were developed. The first telecommunication experiment compared our videos-in-context interface with fully panoramic video and conventional webcam video conferencing in an object placement scenario. The second experiment investigated the impact of videos in panoramic context on the quality of spatio-temporal thinking during localization tasks. To support the experiment, a novel interface to video collections in panoramic context was developed and compared with common video-browsing tools. The final experimental study investigated the impact of display type on reasoning about events, exploring three adaptations of our video-collection interface to three display types. The overall conclusion is that videos in panoramic context offer a valid solution to spatio-temporal exploration of remote locations. Our approach presents a richer visual representation in terms of space and time than standard tools, showing that providing panoramic context to video collections makes spatio-temporal tasks easier. To this end, videos in context are a suitable alternative to more difficult, and often more expensive, solutions. These findings are beneficial to many applications, including teleconferencing, virtual tourism and remote assistance.
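Placing a video in panoramic context, as described above, amounts to mapping the video camera's viewing direction into the panorama's image coordinates. The thesis does not give this mapping explicitly; the following is a minimal sketch assuming a standard equirectangular (longitude/latitude) panorama, with all names and dimensions hypothetical:

```python
import math

def direction_to_equirect(dx, dy, dz, pano_w, pano_h):
    """Map a unit 3-d view direction to pixel coordinates in an
    equirectangular panorama (longitude/latitude mapping).

    dx, dy, dz: view direction (y up, z forward), assumed normalized.
    pano_w, pano_h: panorama dimensions in pixels.
    """
    lon = math.atan2(dx, dz)                   # longitude in [-pi, pi]
    lat = math.asin(max(-1.0, min(1.0, dy)))   # latitude in [-pi/2, pi/2]
    u = (lon / (2 * math.pi) + 0.5) * pano_w   # horizontal pixel position
    v = (0.5 - lat / math.pi) * pano_h         # vertical pixel position
    return u, v

# A camera looking straight ahead lands at the panorama's centre:
print(direction_to_equirect(0.0, 0.0, 1.0, 2048, 1024))  # (1024.0, 512.0)
```

An embedded video frame could then be anchored at the returned (u, v) position; a full system would additionally warp the frame to account for the camera's field of view.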

    Utilizing image guided surgery for user interaction in medical augmented reality

    The graphical overlay of additional medical information over the patient during a surgical procedure has long been considered one of the most promising applications of augmented reality. While many experimental systems for augmented reality in medicine have reached an advanced state and can deliver high-quality augmented video streams, they usually depend heavily on specialized dedicated hardware. Such dedicated system components, which were originally designed for engineering applications or VR research, are often ill-suited for use in clinical practice. We describe a novel medical augmented reality application that is based almost exclusively on existing, commercially available, and certified medical equipment. In our system, a so-called image guided surgery device is used for tracking a webcam, which delivers the digital video stream of the physical scene that is augmented with the virtual information. In this paper, we show how the capability of the image guided surgery system for tracking surgical instruments can be harnessed for user interaction. Our method enables the user to define points and freely drawn shapes in 3-d and provides selectable menu items, which can be located in immediate proximity to the patient. This eliminates the need for conventional touchscreen- or mouse-based user interaction without requiring additional hardware such as dedicated tracking systems or specialized 3-d input devices. Thus the surgeon can interact with the system directly, without the help of additional personnel. We demonstrate our new input method with an application for creating operation-plan sketches directly on the patient in an augmented view.
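The selectable menu items described above reduce, at their core, to a proximity test between the tracked instrument tip and a set of 3-d anchor points near the patient. The paper does not publish its selection logic; the sketch below illustrates one plausible form of it, with all item names, positions, and the activation radius being hypothetical values chosen for the example:

```python
import math

# Hypothetical menu items placed near the patient, in tracker coordinates (mm).
MENU_ITEMS = {
    "draw":  (120.0, 40.0, 300.0),
    "erase": (120.0, 80.0, 300.0),
    "save":  (120.0, 120.0, 300.0),
}

SELECT_RADIUS_MM = 15.0  # assumed activation radius around each item

def pick_menu_item(tip_xyz, items=MENU_ITEMS, radius=SELECT_RADIUS_MM):
    """Return the name of the menu item whose 3-d anchor is closest to the
    tracked instrument tip, provided it lies within the activation radius;
    return None if no item is close enough."""
    best, best_d = None, radius
    for name, anchor in items.items():
        d = math.dist(tip_xyz, anchor)  # Euclidean distance, Python 3.8+
        if d <= best_d:
            best, best_d = name, d
    return best

# Instrument tip held just next to the "draw" anchor selects it:
print(pick_menu_item((121.0, 41.0, 299.0)))  # draw
```

A real system would run this test on every tracker update and debounce selections (e.g. require the tip to dwell inside the radius for a short time) to avoid accidental activation.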