
    Robust tracking for augmented reality

    In this paper a method for improving a tracking algorithm in an augmented reality application is presented. The method addresses several issues specific to this application, such as marker-less tracking and color constancy with low-quality cameras, and precise tracking under real-time constraints. Due to size restrictions, some of the objects are tracked using color information. To improve the quality of the detection, a color selection scheme is proposed that increases the color distance between different objects in the scene. Moreover, a new color constancy method based on a diagonal-offset model and k-means is presented. Finally, real images are used to show the improvement achieved with this new method. Funding: Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech; Ministry of Education of Spain (TIN2013-42253P); Junta de Andalucía of Spain (TIC-1692).
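
    A minimal sketch of the two ideas the abstract names: a diagonal-offset color correction (out = D·rgb + o, applied per channel) and k-means clustering of pixel colors. The gray-world gain estimate, the zero offsets, and the stand-in image are illustrative assumptions, not the paper's actual algorithm.

    import numpy as np
    from sklearn.cluster import KMeans

    def diagonal_offset_correct(img, gains, offsets):
        # Diagonal-offset model: out = D * rgb + o, applied per channel.
        out = img.astype(np.float32) * gains + offsets
        return np.clip(out, 0, 255).astype(np.uint8)

    def dominant_colors(img, k=4):
        # Cluster pixels with k-means to find the dominant object colors.
        pixels = img.reshape(-1, 3).astype(np.float32)
        return KMeans(n_clusters=k, n_init=10).fit(pixels).cluster_centers_

    # Gray-world gains (assumption: the scene average should be achromatic).
    img = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)  # stand-in frame
    mean = img.reshape(-1, 3).mean(axis=0)
    gains = mean.mean() / mean                     # per-channel diagonal entries
    corrected = diagonal_offset_correct(img, gains, offsets=np.zeros(3))
    print(dominant_colors(corrected))              # k x 3 RGB cluster centers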

    An Efficient and Robust Mobile Augmented Reality Application

    AR technology is perceived to have evolved from the foundations of Virtual Reality (VR) technology. The ultimate goal of AR is to provide better management of, and ubiquitous access to, information by using seamless techniques in which the interactive real world is combined with an interactive computer-generated world in one coherent environment. The direction of research in the field of AR has shifted from traditional desktop platforms to mobile devices such as smartphones. However, image recognition on smartphones imposes many restrictions and challenges in terms of efficiency and robustness, the general performance measures of image recognition. Smartphones have limited processing capabilities compared to the PC platform, so the development process of a mobile AR application and the choice of image recognition algorithms need particular attention. The stages of mobile AR application development are detection, description, and matching, and the algorithm for each stage must be carefully selected to create an efficient and robust mobile AR application. The algorithms used in this work for detection, description, and matching are AGAST, FREAK, and Hamming distance, respectively. The computation time and the robustness towards rotation, scale, and brightness changes are evaluated on the Mikolajczyk benchmark dataset. The results show that the mobile AR application is efficient, with a computation time of 29.1 ms. It also achieved high accuracy under scale, rotation, and brightness changes: 89.76%, 87.71%, and 83.87%, respectively. Hence, the combination of AGAST, FREAK, and Hamming distance is suitable for creating an efficient and robust mobile AR application.
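
    The AGAST + FREAK + Hamming pipeline described above maps directly onto OpenCV. This is a hedged sketch, not the paper's implementation: FREAK lives in the opencv-contrib-python package, and the image file names are placeholders.

    import cv2

    def match_images(img1, img2):
        detector = cv2.AgastFeatureDetector_create()   # AGAST corner detection
        extractor = cv2.xfeatures2d.FREAK_create()     # FREAK binary descriptors
        kp1, des1 = extractor.compute(img1, detector.detect(img1, None))
        kp2, des2 = extractor.compute(img2, detector.detect(img2, None))
        # Brute-force matching with Hamming distance suits binary descriptors.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        return kp1, kp2, sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    img1 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
    img2 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
    kp1, kp2, matches = match_images(img1, img2)
    print(len(matches), "matches")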

    Vision based 3D Gesture Tracking using Augmented Reality and Virtual Reality for Improved Learning Applications

    Augmented reality and virtual reality based on 3D gesture recognition and tracking have become a major research interest because of the advanced technology in smartphones. By interacting with 3D objects in augmented reality and virtual reality, users gain a better understanding of the subject matter, although this has required customized hardware support and satisfactory overall experimental performance. This research investigates current vision-based 3D gesture architectures for augmented reality and virtual reality. Its core goal is to present an analysis of methods and frameworks, followed by their experimental performance on recognition and tracking of hand gestures and on interaction with virtual objects on smartphones. The experimental evaluation of existing methods is categorized into three areas: hardware requirements, documentation before the actual experiment, and datasets. These categories are expected to ensure robust validation for the practical use of 3D gesture tracking based on augmented reality and virtual reality. The hardware setup includes types of gloves, fingerprint devices, and types of sensors. Documentation includes classroom setup manuals, questionnaires, recordings for improvement, and stress tests of the application. The last part of the experimental section covers the datasets used by existing research. This comprehensive illustration of methods, frameworks, and experimental aspects can contribute significantly to 3D gesture recognition and tracking based on augmented reality and virtual reality. Peer reviewed.

    Semantic Visual Localization

    Robust visual localization under a wide range of viewing conditions is a fundamental problem in computer vision. Handling the difficult cases of this problem is not only very challenging but also of high practical relevance, e.g., in the context of life-long localization for augmented reality or autonomous robots. In this paper, we propose a novel approach based on a joint 3D geometric and semantic understanding of the world, enabling it to succeed under conditions where previous approaches failed. Our method leverages a novel generative model for descriptor learning, trained on semantic scene completion as an auxiliary task. The resulting 3D descriptors are robust to missing observations by encoding high-level 3D geometric and semantic information. Experiments on several challenging large-scale localization datasets demonstrate reliable localization under extreme viewpoint, illumination, and geometry changes.
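
    For illustration only: once descriptors are learned (the paper's generative model trained on semantic scene completion is not reproduced here), localization against a database of reference descriptors reduces to nearest-neighbor search. Everything below, including the descriptor dimensionality and the random stand-in database, is an assumption.

    import numpy as np

    def localize(query_desc, db_descs, db_poses):
        # Cosine similarity between the query descriptor and every database
        # descriptor; return the pose of the best match and its score.
        q = query_desc / np.linalg.norm(query_desc)
        db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
        best = int(np.argmax(db @ q))
        return db_poses[best], float(db[best] @ q)

    rng = np.random.default_rng(0)
    db_descs = rng.normal(size=(1000, 128))   # stand-in descriptor database
    db_poses = rng.normal(size=(1000, 6))     # stand-in 6-DoF reference poses
    pose, score = localize(rng.normal(size=128), db_descs, db_poses)
    print(pose, score)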

    Augmented Reality Future Step Visualization for Robust Surgical Telementoring

    Introduction: Surgical telementoring connects expert mentors with trainees performing urgent care in austere environments. However, such environments impose unreliable network quality, with significant latency and low bandwidth. We have developed an augmented reality telementoring system that includes future step visualization of the medical procedure. Pregenerated video instructions of the procedure are dynamically overlaid onto the trainee's view of the operating field when the network connection with a mentor is unreliable. Methods: Our future step visualization uses a tablet suspended above the patient's body, through which the trainee views the operating field. Before trainee use, an expert records a “future library” of step-by-step video footage of the operation. Videos are displayed to the trainee as semitransparent graphical overlays. We conducted a study in which participants completed a cricothyroidotomy under telementored guidance. Participants used one of two telementoring conditions: a conventional telestrator or our system with future step visualization. During the operation, the connection between trainee and mentor was bandwidth throttled. Recorded metrics were idle time ratio, recall error, and task performance. Results: Participants in the future step visualization condition had a 48% smaller idle time ratio (14.5% vs. 27.9%, P < 0.001), 26% less recall error (119 vs. 161, P = 0.042), and 10% higher task performance scores (rater 1 = 90.83 vs. 81.88, P = 0.008; rater 2 = 88.54 vs. 79.17, P = 0.042) than participants in the telestrator condition. Conclusions: Future step visualization in surgical telementoring is an important fallback mechanism when the trainee/mentor network connection is poor, and it is a key step towards semiautonomous and, eventually, completely mentor-free medical assistance systems.
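
    The core rendering idea, overlaying a pre-recorded instruction video semitransparently on the live view, can be sketched as simple alpha blending with OpenCV. This is illustrative only; the file name, camera index, and alpha value are assumptions, not values from the study.

    import cv2

    def overlay_future_step(live_frame, instruction_frame, alpha=0.4):
        # Resize the pre-recorded step to the live view and alpha-blend it.
        h, w = live_frame.shape[:2]
        instr = cv2.resize(instruction_frame, (w, h))
        return cv2.addWeighted(instr, alpha, live_frame, 1.0 - alpha, 0)

    cap = cv2.VideoCapture(0)                  # tablet camera: live operating field
    library = cv2.VideoCapture("step_03.mp4")  # hypothetical "future library" clip
    while True:
        ok_live, live = cap.read()
        ok_instr, instr = library.read()
        if not (ok_live and ok_instr):
            break
        cv2.imshow("telementoring", overlay_future_step(live, instr))
        if cv2.waitKey(1) == 27:               # Esc quits
            break
    cap.release(); library.release(); cv2.destroyAllWindows()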

    Software Framework for Customized Augmented Reality Headsets in Medicine

    The growing availability of self-contained and affordable augmented reality headsets such as the Microsoft HoloLens is encouraging the adoption of these devices in the healthcare sector as well. However, technological and human-factor limitations still hinder their routine use in clinical practice. Among these, the major drawbacks are due to their general-purpose nature and to the lack of a standardized framework suited for medical applications and free of platform-dependent tracking techniques and/or complex calibration procedures. To overcome such limitations, in this paper we present a software framework designed to support the development of augmented reality applications for custom-made head-mounted displays intended to aid high-precision manual tasks. The software platform is highly configurable and computationally efficient, and it allows the deployment of augmented reality applications capable of supporting in situ visualization of medical imaging data. The framework can provide both optical and video see-through-based augmentations, and it features a robust optical tracking algorithm. An experimental study was designed to assess the efficacy of the platform in guiding a simulated task of surgical incision. In the experiments, the user was asked to perform a digital incision task, with and without the aid of the augmented reality headset. Task accuracy was evaluated by measuring the similarity between the traced curve and the planned one. The average error in the augmented reality tests was < 1 mm. The results confirm that the proposed framework, coupled with the new-concept headset, may boost the integration of augmented reality headsets into routine clinical practice.
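
    One plausible way to score the incision task is the mean closest-point distance between the traced curve and the planned one. The paper reports an average error below 1 mm but its exact similarity metric is not given here, so the symmetric variant below is an assumption.

    import numpy as np

    def mean_curve_distance(a, b):
        # Symmetric mean closest-point distance between two Nx2 polylines (mm).
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise
        return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

    planned = np.column_stack([np.linspace(0, 50, 200), np.zeros(200)])  # 50 mm line
    traced = planned + np.random.normal(scale=0.5, size=planned.shape)   # noisy trace
    print(f"mean error: {mean_curve_distance(traced, planned):.2f} mm")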

    Utilizing sensor fusion in markerless mobile augmented reality

    One of the key challenges of markerless Augmented Reality (AR) systems, where no a priori information about the environment is available, is map and scale initialization. In such systems the scale is unknown, as it is impossible to determine scale from a sequence of images alone. Recovering scale is vital for ensuring that augmented objects are contextually sensitive to the environment onto which they are projected. In this paper we demonstrate a sensor and vision fusion approach for robust and user-friendly initialization of map and scale. The map is initialized using the inbuilt accelerometers, whilst the scale is initialized via the camera's auto-focusing capability. The latter is made possible by applying the Depth From Focus (DFF) method, which was until now limited to high-precision camera systems. The demonstrated system, running on a commercially available Nokia N900 mobile phone, illustrates the benefits of this approach.
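
    The Depth From Focus idea can be sketched as follows: sweep the autofocus, pick the lens setting that maximizes a sharpness measure, then invert the thin-lens equation 1/f = 1/u + 1/v to recover the object distance u. The focus measure and the numbers in the final comment are illustrative assumptions, not values from the paper.

    import cv2
    import numpy as np

    def sharpness(gray):
        # Variance of the Laplacian: a standard focus measure.
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    def depth_from_focus(frames, image_distances_mm, focal_length_mm):
        # Pick the sharpest frame of the focus sweep and convert its lens
        # setting to object distance via the thin-lens equation.
        v = image_distances_mm[int(np.argmax([sharpness(f) for f in frames]))]
        f = focal_length_mm
        return (f * v) / (v - f)   # u = f*v / (v - f)

    # frames: grayscale images captured during a focus sweep; the per-step image
    # distances come from the camera API. E.g., f = 5.2 mm and a best-focus
    # v = 5.35 mm give u = 5.2*5.35/(5.35-5.2) ≈ 185 mm.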

    Intelligent visualization of holographic biological networks using HoloLens

    Augmented reality enables users to interact with data in ways that were previously impossible. Microsoft’s HoloLens, a tool capable of projecting 3D holograms into the user’s surroundings, makes these interactions in augmented reality possible. Our team utilized this tool with the goal of creating an application capable of interpreting and displaying a 3D network and providing a robust method for interacting with its contents. We researched frameworks and existing projects for the HoloLens and designed a data visualization application. The resulting software displays a 3D network with support for voice commands and gesture inputs to aid user interaction.
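
    As a rough illustration of the data-preparation side (not the team's HoloLens code, which would typically be C#/Unity): a few iterations of a simple 3D force-directed layout assign each network node a position that a holographic renderer could then display. All constants and the toy edge list are arbitrary assumptions.

    import numpy as np

    def layout_3d(n_nodes, edges, iters=200):
        pos = np.random.default_rng(1).normal(size=(n_nodes, 3))
        for _ in range(iters):
            # Pairwise repulsion keeps unrelated nodes apart.
            diff = pos[:, None, :] - pos[None, :, :]
            dist = np.linalg.norm(diff, axis=2) + 1e-6
            pos += 0.002 * (diff / dist[..., None] ** 3).sum(axis=1)
            # Spring attraction pulls connected nodes together.
            for i, j in edges:
                pull = 0.01 * (pos[i] - pos[j])
                pos[i] -= pull
                pos[j] += pull
        return pos  # one 3D position per node, ready for holographic rendering

    edges = [(0, 1), (1, 2), (2, 0), (2, 3)]   # toy biological network
    print(layout_3d(4, edges))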