329 research outputs found

    Towards markerless orthopaedic navigation with intuitive Optical See-through Head-mounted displays

    The potential of image-guided orthopaedic navigation to improve surgical outcomes has been well recognised over the last two decades. Based on the tracked pose of the target bone, anatomical information and preoperative plans are updated and displayed to surgeons, so that they can follow the guidance to reach the goal with higher accuracy, efficiency and reproducibility. Despite their success, current orthopaedic navigation systems have two main limitations: for target tracking, artificial markers have to be drilled into the bone and manually calibrated to it, which risks additional harm to patients and increases operating complexity; for guidance visualisation, surgeons have to shift their attention from the patient to an external 2D monitor, which is disruptive and can be mentally stressful. Motivated by these limitations, this thesis explores the development of an intuitive, compact and reliable navigation system for orthopaedic surgery. To this end, conventional marker-based tracking is replaced by a novel markerless tracking algorithm, and the 2D display is replaced by a 3D holographic optical see-through (OST) head-mounted display (HMD) precisely calibrated to the user's perspective. Our markerless tracking, facilitated by a commercial RGBD camera, is achieved through deep learning-based bone segmentation followed by real-time pose registration. For robust segmentation, a new network is designed and efficiently augmented with a synthetic dataset; it outperforms the state of the art in occlusion robustness, device-agnostic behaviour and target generalisability. For reliable pose registration, a novel Bounded Iterative Closest Point (BICP) workflow is proposed. The resulting markerless tracking achieves a clinically acceptable error of 0.95 deg and 2.17 mm in a phantom test. OST displays allow ubiquitous enrichment of the perceived real world with contextually blended virtual aids through semi-transparent glasses. They have been recognised as a suitable visual tool for surgical assistance, since they do not hinder the surgeon's natural eyesight and require no attention shift or perspective conversion. OST calibration is crucial to ensure locationally coherent surgical guidance, yet current calibration methods are either prone to human error or hard to apply to commercial devices. We therefore propose an offline camera-based calibration method that is highly accurate yet easy to implement in commercial products, and an online alignment-based refinement that is user-centric and robust against user error. The proposed methods prove superior to similar state-of-the-art (SOTA) approaches in calibration convenience and display accuracy. Motivated by the ambition to develop the world's first markerless OST navigation system, we integrated the developed markerless tracking and calibration scheme into a complete navigation workflow designed for femur drilling tasks during knee replacement surgery. We verified the usability of the designed OST system in a cadaver study with an experienced orthopaedic surgeon. The test validates the potential of the proposed markerless navigation system for surgical assistance, although further improvement is required for clinical acceptance.
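    The pose-registration half of this pipeline builds on the iterative closest point (ICP) family; the bounding strategy that distinguishes the thesis's BICP is not detailed in the abstract, so the sketch below shows only the generic point-to-point ICP baseline it extends. All names are illustrative.

```python
# A minimal point-to-point ICP sketch (NumPy/SciPy), not the thesis's BICP:
# BICP additionally constrains the update per iteration, which is omitted here.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iters=50, tol=1e-6):
    """Align source (N, 3) to target (M, 3); returns a 4x4 homogeneous pose."""
    tree = cKDTree(target)
    T, src, prev_err = np.eye(4), source.copy(), np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)               # nearest-neighbour correspondences
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
        err = dist.mean()
        if abs(prev_err - err) < tol:             # stop once the error stabilises
            break
        prev_err = err
    return T
```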

    Remote maintenance assistance using real-time augmented reality authoring

    Maintenance operations and lifecycle engineering are widely considered among the most expensive and time-consuming aspects of owning industrial equipment, and organizations continually devote large quantities of resources to maintaining it. As such, any optimizations that reduce maintenance errors and expenses could lead to substantial time and cost savings. Unfortunately, there are often not enough specialists to meet demand, forcing local technicians to perform on-site maintenance on equipment outside their area of expertise. Augmented reality (AR) is one technology that has already been shown to improve the maintenance process. While powerful, AR has its own set of challenges, from content authoring to spatial perception. This work details a system that puts both the power of AR and the knowledge of a specialist directly into the hands of an on-site technician. An application was developed that enables a specialist to deliver AR instructions in real time to assist a technician performing on-site maintenance. Using a novel and simplified authoring interface, specialists can create AR content in real time with little to no prior knowledge of augmented reality or the system itself. There has been ample research on individual AR-supported processes, such as real-time authoring, video monitoring, and off-site assistance; much less work has been done to integrate them and to leverage existing personnel knowledge to both author and deliver real-time AR instructions. This work details the development and implementation of such a system. A technical evaluation was also performed to ensure real-time connectivity in geographically distributed environments. Three network configurations were evaluated: a low-latency, high-bandwidth network representing a typical modern maintenance facility; a low-bandwidth network mimicking older or more isolated maintenance environments; and a 4G LTE network, showing the potential for the system to be used across global locations. Under all network configurations, the system effectively facilitated the complete disassembly of a hydraulic pump assembly.
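    The paper does not publish its wire protocol, so the sketch below only illustrates the general shape of such a link: a small relay that forwards a specialist's annotation messages to every other connected peer in real time. The WebSocket transport, the JSON payload format, and the port are assumptions for illustration.

```python
# A minimal broadcast-relay sketch using the `websockets` library (10.1+
# single-argument handler style); the message format is an assumed JSON blob.
import asyncio
import websockets

clients = set()

async def relay(websocket):
    """Register a peer and forward each of its messages to all other peers."""
    clients.add(websocket)
    try:
        async for message in websocket:       # e.g. a JSON-encoded AR annotation
            for peer in tuple(clients):       # snapshot: the set may change mid-send
                if peer is not websocket:
                    await peer.send(message)
    finally:
        clients.remove(websocket)

async def main():
    async with websockets.serve(relay, "0.0.0.0", 8765):
        await asyncio.Future()                # serve until the process is cancelled

if __name__ == "__main__":
    asyncio.run(main())
```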

    Ambient Intelligence for Next-Generation AR

    Next-generation augmented reality (AR) promises a high degree of context-awareness: a detailed knowledge of the environmental, user, social and system conditions in which an AR experience takes place. This will facilitate both the closer integration of the real and virtual worlds and the provision of context-specific content or adaptations. However, environmental awareness in particular is challenging to achieve using AR devices alone; not only is these mobile devices' view of an environment spatially and temporally limited, but the data obtained by onboard sensors is frequently inaccurate and incomplete. This, combined with the fact that many aspects of core AR functionality and user experience are affected by properties of the real environment, motivates the use of ambient IoT devices (wireless sensors and actuators placed in the surrounding environment) for the measurement and optimization of environment properties. In this book chapter we categorize and examine the wide variety of ways in which these IoT sensors and actuators can support or enhance AR experiences, including quantitative insights and proof-of-concept systems that will inform the development of future solutions. We outline the challenges and opportunities associated with several important research directions which must be addressed to realize the full potential of next-generation AR.
    Comment: This is a preprint of a book chapter which will appear in the Springer Handbook of the Metaverse.

    Enhanced Concrete Bridge Assessment Using Artificial Intelligence and Mixed Reality

    Conventional methods for visual assessment of civil infrastructure have certain limitations, such as the subjectivity of the collected data, long inspection times, and high labor cost. Although some new technologies currently in practice (e.g. robotic techniques) can collect objective, quantified data, the inspector's own expertise is still critical in many instances, since these technologies are not designed to work interactively with a human inspector. This study aims to create a smart, human-centered method that offers significant contributions to infrastructure inspection, maintenance, management practice, and safety for bridge owners. By developing a smart Mixed Reality (MR) framework that can be integrated into a wearable holographic headset device, a bridge inspector can, for example, automatically analyze a defect such as a crack seen on an element and display its dimensions in real time along with its condition state. Such systems can potentially decrease the time and cost of infrastructure inspections by accelerating essential inspector tasks such as defect measurement, condition assessment, and data transfer to management systems. The human-centered artificial intelligence (AI) helps the inspector collect more quantified and objective data while incorporating the inspector's professional judgment. This study explains in detail the described system and the related methodology of implementing attention-guided semi-supervised deep learning within mixed reality technology that interacts with the human inspector during assessment, so that the inspector and the AI collaborate and communicate for improved visual inspection.
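    The segmentation network itself is not described in the abstract, but the real-time measurement step it feeds is easy to illustrate. A minimal sketch, assuming a binary crack mask from some segmentation model and an illustrative mm-per-pixel scale; the condition-state cut-offs are placeholders, not the study's values.

```python
# Rough crack metrics from a binary mask; all thresholds are illustrative.
import numpy as np

def measure_crack(mask: np.ndarray, mm_per_px: float):
    """Coarse length/width estimates from an (H, W) 0/1 crack mask."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                                # no crack pixels detected
    length_mm = np.hypot(xs.max() - xs.min(), ys.max() - ys.min()) * mm_per_px
    width_mm = mask.sum(axis=1).max() * mm_per_px  # densest row as a width proxy
    return {"length_mm": float(length_mm), "max_width_mm": float(width_mm)}

def condition_state(width_mm: float) -> int:
    """Map crack width to a coarse condition state (placeholder cut-offs)."""
    return 1 if width_mm < 0.2 else 2 if width_mm < 1.0 else 3
```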

    An approach for precise 2D/3D semantic annotation of spatially-oriented images for in-situ visualization applications

    Today's technologies offer innovative tools that increase our knowledge of historic monuments in the field of cultural heritage preservation and valorization. These tools aim to help experts create, enrich and share information on historical buildings. Among the various documentary sources, photographs contain a high level of detail about shapes and colors, and with the development of image analysis and image-based modeling techniques, large sets of images can be spatially oriented around a digital mock-up. For these reasons, digital photographs prove to be an easy-to-use, affordable and flexible medium for heritage documentation. This article presents, as a first step, an approach for 2D/3D semantic annotation of a set of spatially oriented photographs (whose positions and orientations in space are automatically estimated); as a second step, it focuses on a method for displaying those annotations on new images acquired in situ by mobile devices. First, an automated image-based reconstruction method produces 3D information (specifically 3D coordinates) by processing a large image set. Images are then semantically annotated, and the previously generated 3D information inherent to the images is used to transfer those annotations between them. This protocol therefore provides a simple way to finely annotate a large quantity of images at once instead of one by one, and since the annotations are directly tied to 3D information, they can be stored as 3D files. To bring up the information related to a building on screen, the user takes a picture in situ. An image processing method estimates the orientation parameters of this new photograph within the already oriented image base, after which the annotations can be precisely projected onto the oriented picture and sent back to the user. In this way, a continuity of information is established from the initial acquisition to the in situ visualization.
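    The final projection step is standard pinhole geometry: once the new photograph's pose is estimated against the oriented image base, the stored 3D annotation points can be reprojected into its pixel frame. A minimal sketch under that assumption; the intrinsics K and pose (R, t) stand in for whatever the paper's own resection method produces.

```python
# Project stored 3D annotation points into a newly oriented photograph.
import numpy as np

def project_annotations(points_3d, K, R, t):
    """Map (N, 3) world points to (u, v) pixels via x ~ K (R X + t)."""
    cam = points_3d @ R.T + t           # world frame -> camera frame
    cam = cam[cam[:, 2] > 0]            # discard points behind the camera
    pix = cam @ K.T                     # camera frame -> homogeneous pixels
    return pix[:, :2] / pix[:, 2:3]     # perspective divide -> (u, v)
```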

    Online Markerless Augmented Reality for Remote Handling System in Bad Viewing Conditions

    This thesis studies the development of Augmented Reality (AR) for the ITER mock-up remote handling environment. An important goal of employing an AR system is three-dimensional mapping of the scene, which provides position and orientation information about the environment to the operator. Remote Handling (RH) in harsh environments usually has to tackle a lack of sufficient visual feedback for the human operator, due to the limited number of on-site cameras, poor viewing angles, and similar constraints. AR enables the user to perceive virtual computer-generated objects in a real scene, with the most common goals being visibility enhancement and the provision of extra information, such as positional data of various objects. The proposed AR system first recognizes and locates the object using a template-based matching algorithm, and then augments the virtual model on top of the found object. A tracking algorithm is exploited to locate the object across a sequence of frames. Conceptually, the template is found in each frame by computing the similarity between the template and the image for all relevant poses (rotations and translations) of the template. The objective of this thesis is to investigate whether ITER remote handling at DTP2 (Divertor Test Platform 2) can benefit from AR technology. The AR interface displays the measurement values, orientation and transformation of the markerless WHMAN (Water Hydraulic Manipulator) with efficient real-time tracking. The performance of the AR system was tested at different positions, and the method was validated in a real remote handling environment at DTP2, where it proved sufficiently robust.
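    The matching step described here is the classic sliding-window template search. A minimal single-pose sketch using OpenCV's normalized cross-correlation; handling the rotational poses the thesis mentions would repeat this over a bank of pre-rotated templates. The threshold value is an illustrative assumption.

```python
# Locate a template in one frame via normalized cross-correlation (OpenCV).
import cv2

def locate(frame_gray, template_gray, threshold=0.7):
    """Return (x, y, score) of the best match, or None if below threshold."""
    scores = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, best, _, top_left = cv2.minMaxLoc(scores)   # best score and its location
    return (top_left[0], top_left[1], best) if best >= threshold else None
```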

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the focus of most devices remains on improving end-effector dexterity and precision, as well as on improving access to minimally invasive surgery. This paper provides a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement along complex trajectories, in pose estimation, and with difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings of current optimization algorithms for surgical robots (such as YOLO and LSTM) while proposing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. Results on robot end-effector collision handling and occlusion reduction remain promising within the scope of this review, supporting the case for the surgical clearance of ever-expanding AR technology in the future.

    Robust Non-Rigid Registration with Reweighted Position and Transformation Sparsity

    Non-rigid registration is challenging because it is ill-posed, with high degrees of freedom, and is thus sensitive to noise and outliers. We propose a robust non-rigid registration method using reweighted sparsities on position and transformation to estimate the deformations between 3-D shapes. We formulate the energy function with position and transformation sparsity on both the data term and the smoothness term, and define the smoothness constraint using local rigidity. The double-sparsity-based non-rigid registration model is enhanced with a reweighting scheme, and solved by transferring the model into four alternately optimized subproblems which have exact solutions and guaranteed convergence. Experimental results on both public datasets and real scanned datasets show that our method outperforms the state-of-the-art methods and is more robust to noise and outliers than conventional non-rigid registration methods.
    Comment: IEEE Transactions on Visualization and Computer Graphics.
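    The exact energy is not reproduced in the abstract, so the following is only a schematic of the double-sparsity form it describes: a sparse (L1) data term plus a sparse local-rigidity smoothness term, with per-term weights updated by the reweighting scheme. All symbols are illustrative.

```latex
% Schematic double-sparsity registration energy (illustrative, not the paper's
% exact formulation). X_i: transformation at source vertex v_i; u_i: target
% correspondence; N(i): neighbours of i; w_i, r_ij: reweighting weights
% updated each iteration; alpha: smoothness trade-off.
E(\mathbf{X}) =
  \underbrace{\sum_i w_i \,\bigl\lVert \mathbf{X}_i \mathbf{v}_i - \mathbf{u}_i \bigr\rVert_1}_{\text{sparse data term}}
  + \alpha
  \underbrace{\sum_i \sum_{j \in N(i)} r_{ij} \,\bigl\lVert (\mathbf{X}_i - \mathbf{X}_j)\,\mathbf{v}_j \bigr\rVert_1}_{\text{sparse local-rigidity smoothness}}
```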