
    Combining independent visualization and tracking systems for augmented reality

    The basic requirement for the successful deployment of a mobile augmented reality application is a reliable tracking system with high accuracy. Recently, a helmet-based inside-out tracking system that meets this demand has been proposed for self-localization in buildings. To realize an augmented reality application based on this tracking system, a display has to be added for visualization purposes, and the relative pose of this visualization platform with respect to the helmet has to be tracked. In the case of hand-held visualization platforms such as smartphones or tablets, this can be achieved by means of image-based tracking methods such as marker-based or model-based tracking. In this paper, we present two marker-based methods for tracking the relative pose between the helmet-based tracking system and a tablet-based visualization system. Both methods were implemented and comparatively evaluated in terms of tracking accuracy. Our results show that mobile inside-out tracking systems without integrated displays can easily be supplemented with a hand-held tablet as a visualization device for augmented reality purposes.
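
    The core operation both methods support is chaining transforms: the inside-out system supplies the helmet's pose in the world, marker tracking supplies the tablet's pose relative to the helmet, and their composition gives the tablet's world pose for rendering. Below is a minimal sketch using homogeneous 4x4 transforms; the function names and example values are illustrative assumptions, not the paper's code.

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def tablet_world_pose(T_world_helmet, T_helmet_tablet):
    """Compose the helmet's world pose with the marker-derived
    helmet-to-tablet pose to obtain the tablet's world pose."""
    return T_world_helmet @ T_helmet_tablet

# Example: helmet 1.7 m above the floor, tablet held 0.4 m in front of it.
T_world_helmet = make_pose(np.eye(3), np.array([0.0, 0.0, 1.7]))
T_helmet_tablet = make_pose(np.eye(3), np.array([0.0, 0.4, 0.0]))
print(tablet_world_pose(T_world_helmet, T_helmet_tablet)[:3, 3])  # -> [0.  0.4 1.7]
```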

    Fusing Self-Reported and Sensor Data from Mixed-Reality Training

    Military and industrial use of smaller, more accurate sensors is allowing increasing amounts of data to be acquired at diminishing cost during training. Traditional human-subject testing often collects qualitative data from participants through self-reported questionnaires. This qualitative information is valuable but often incomplete for assessing training outcomes. Quantitative information such as motion-tracking data, communication frequency, and heart rate can offer the missing pieces in training outcome assessment. The successful fusion and analysis of qualitative and quantitative information sources is necessary for collaborative, mixed-reality, and augmented-reality training to reach its full potential. The challenge is determining a reliable framework for combining these multiple types of data. Methods were developed to analyze data acquired during a formal user study assessing the use of augmented reality as a delivery mechanism for digital work instructions. A between-subjects experiment was conducted to analyze the use of a desktop computer, a mobile tablet, or a mobile tablet with augmented reality as the delivery method for these instructions. Study participants were asked to complete a multi-step technical assembly. Participants' head position and orientation were tracked using an infrared tracking system. User interaction in the form of interface button presses was recorded and time-stamped on each step of the assembly. A trained observer took notes on task performance during the study through a set of camera views that recorded the work area. Finally, each participant completed pre- and post-surveys involving self-reported evaluation. The combination of quantitative and qualitative data revealed trends, such as the most difficult tasks on each device, that would have been impossible to determine from self-reporting alone. This paper describes the methods developed to fuse the qualitative data with the quantified measurements recorded during the study.
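
    One concrete way to realize this kind of fusion is to key every quantitative record (step durations, button presses, tracking samples) and every self-reported rating to the same participant and assembly step, then join and aggregate. The sketch below is a hedged illustration with invented column names and data, not the study's actual schema.

```python
import pandas as pd

# Time-stamped interface events: one row per completed assembly step.
events = pd.DataFrame({
    "participant": [1, 1, 2, 2],
    "step": [1, 2, 1, 2],
    "step_duration_s": [42.0, 118.5, 39.2, 97.3],
})

# Post-survey: self-reported difficulty per step (1 = easy, 5 = hard).
survey = pd.DataFrame({
    "participant": [1, 1, 2, 2],
    "step": [1, 2, 1, 2],
    "reported_difficulty": [1, 4, 2, 5],
})

# Join on participant and step, then aggregate to see which steps were
# hardest by both the objective (duration) and subjective (rating) measure.
fused = events.merge(survey, on=["participant", "step"])
print(fused.groupby("step")[["step_duration_s", "reported_difficulty"]].mean())
```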

    A Flexible-Frame-Rate Vision-Aided Inertial Object Tracking System for Mobile Devices

    Real-time object pose estimation and tracking is challenging but essential for emerging augmented reality (AR) applications. In general, state-of-the-art methods address this problem using deep neural networks, which indeed yield satisfactory results. Nevertheless, the high computational cost of these methods makes them unsuitable for mobile devices, where real-world applications usually take place. In addition, head-mounted displays such as AR glasses require at least 90 FPS to avoid motion sickness, which further complicates the problem. We propose a flexible-frame-rate object pose estimation and tracking system for mobile devices. It is a monocular visual-inertial system with a client-server architecture. Inertial measurement unit (IMU) pose propagation is performed on the client side for high-speed tracking, and RGB image-based 3D pose estimation is performed on the server side to obtain accurate poses, after which the pose is sent to the client side for visual-inertial fusion, where we propose a bias self-correction mechanism to reduce drift. We also propose a pose inspection algorithm to detect tracking failures and incorrect pose estimation. Connected by high-speed networking, our system supports flexible frame rates up to 120 FPS and guarantees high precision and real-time tracking on low-end devices. Both simulations and real-world experiments show that our method achieves accurate and robust object tracking.
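
    The client-side half of such a system amounts to dead reckoning between server updates: integrate gyroscope readings for orientation and gravity-compensated accelerometer readings for velocity and position. A minimal Euler-integration sketch follows; bias terms and the paper's self-correction and pose-inspection mechanisms are deliberately omitted, and all names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # world-frame gravity, m/s^2

def skew(w):
    """Skew-symmetric matrix such that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def propagate(R, p, v, gyro, accel, dt):
    """One Euler step: R is the world-from-body rotation, p and v are world-frame
    position and velocity, gyro (rad/s) and accel (m/s^2) are body-frame IMU readings."""
    R_new = R @ (np.eye(3) + skew(gyro) * dt)   # small-angle rotation update
    a_world = R @ accel + GRAVITY               # gravity-compensated acceleration
    p_new = p + v * dt + 0.5 * a_world * dt * dt
    v_new = v + a_world * dt
    return R_new, p_new, v_new

# Example: one 200 Hz step while nearly stationary (accelerometer reads +g upward).
R, p, v = np.eye(3), np.zeros(3), np.zeros(3)
R, p, v = propagate(R, p, v,
                    gyro=np.array([0.0, 0.0, 0.1]),
                    accel=np.array([0.0, 0.0, 9.81]),
                    dt=0.005)
```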

    An Augmented Reality system for the treatment of phobia to small animals viewed via an optical see-through HMD. Comparison with a similar system viewed via a video see-through

    This article presents an optical see-through (OST) Augmented Reality system for the treatment of phobia to small animals. The technical characteristics of the OST system are described, and a comparative study of the sense of presence and anxiety in a nonphobic population (24 participants) using the OST and an equivalent video see-through (VST) system is presented. The results indicate that if all participants are analyzed, the VST system induces a greater sense of presence than the OST system. If only the participants who had more fear are analyzed, the two systems induce a similar sense of presence. Regarding anxiety level, the two systems provoke similar and significant anxiety during the experiment. © Taylor & Francis Group, LLC.
    Juan, M., & Calatrava, J. (2011). An Augmented Reality system for the treatment of phobia to small animals viewed via an optical see-through HMD. Comparison with a similar system viewed via a video see-through. International Journal of Human-Computer Interaction, 27(5), 436-449. doi:10.1080/10447318.2011.552059

    Adaptive User Perspective Rendering for Handheld Augmented Reality

    Handheld Augmented Reality commonly implements some variant of magic-lens rendering, which turns only a fraction of the user's real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic-lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimates of the user's head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms on the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands of user-perspective rendering by applying lightweight optical flow tracking and an estimation of the user's motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and compare it to device-perspective rendering, to head-tracked user-perspective rendering, and to fixed-point-of-view user-perspective rendering.
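
    A lightweight stand-in for full face tracking can be as simple as sparse Lucas-Kanade optical flow on the front camera, with the median flow vector serving as a coarse estimate of the user's apparent motion. The OpenCV calls below are real; the surrounding pipeline is an assumption for illustration, not the authors' implementation.

```python
import cv2
import numpy as np

def estimate_user_motion(prev_gray, curr_gray):
    """Return the median (dx, dy) pixel motion between two grayscale frames."""
    # Pick up to 100 corner features in the previous frame.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                  qualityLevel=0.01, minDistance=8)
    if pts is None:
        return np.zeros(2)
    # Track them into the current frame with pyramidal Lucas-Kanade flow.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    if not good.any():
        return np.zeros(2)
    # Median flow is robust to a few mistracked features.
    return np.median((nxt[good] - pts[good]).reshape(-1, 2), axis=0)
```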

    Enabling Self-aware Smart Buildings by Augmented Reality

    Conventional HVAC control systems are usually incognizant of the physical structures and materials of buildings. These systems merely follow pre-set HVAC control logic based on abstract building thermal response models, which are rough approximations of true physical models, ignoring dynamic spatial variations in built environments. To enable more accurate and responsive HVAC control, this paper introduces the notion of "self-aware" smart buildings, such that buildings are able to explicitly construct physical models of themselves (e.g., incorporating building structures and materials, and thermal flow dynamics). The question is how to enable self-aware buildings that automatically acquire dynamic knowledge of themselves. This paper presents a novel approach using augmented reality. The extensive user-environment interactions in augmented reality not only can provide intuitive user interfaces for building systems, but also can capture the physical structures, and possibly the materials, of buildings accurately, enabling real-time building simulation and control. This paper presents a building system prototype incorporating augmented reality and discusses its applications.
    Comment: This paper appears in ACM International Conference on Future Energy Systems (e-Energy), 201
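
    For contrast with the AR-derived physical models the paper argues for, the "abstract building thermal response model" of conventional HVAC control can be as simple as a first-order lumped RC model. The sketch below assumes scalar resistance and capacitance values chosen purely for illustration; it is not drawn from the paper.

```python
def step_indoor_temp(T_in, T_out, q_hvac, R=0.005, C=1e7, dt=60.0):
    """Advance indoor temperature one time step of the lumped model
    dT/dt = (T_out - T_in) / (R * C) + q_hvac / C,
    with T_in, T_out in deg C, q_hvac in W, R in K/W, C in J/K, dt in s."""
    return T_in + dt * ((T_out - T_in) / (R * C) + q_hvac / C)

# Example: simulate one hour of 2 kW heating with 5 deg C outside.
T = 18.0
for _ in range(60):
    T = step_indoor_temp(T, T_out=5.0, q_hvac=2000.0)
print(round(T, 2))
```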

    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the first-person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Due to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches present particular combinations of different image features and quantitative methods to accomplish specific objectives like object detection, activity recognition, user-machine interaction, and so on. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges, and opportunities within the field.
    Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction