
    Recent Advancements in Augmented Reality for Robotic Applications: A Survey

    Robots are expanding from industrial applications to daily life, in areas such as medical robotics, rehabilitative robotics, social robotics, and mobile/aerial robotics systems. In recent years, augmented reality (AR) has been integrated into many robotic applications, including medical, industrial, human–robot interaction, and collaboration scenarios. In this work, AR for both medical and industrial robot applications is reviewed and summarized. For medical robot applications, we investigated the integration of AR in (1) preoperative and surgical task planning; (2) image-guided robotic surgery; (3) surgical training and simulation; and (4) telesurgery. AR for industrial scenarios is reviewed in (1) human–robot interactions and collaborations; (2) path planning and task allocation; (3) training and simulation; and (4) teleoperation control/assistance. In addition, the limitations and challenges are discussed. Overall, this article serves as a valuable resource for those working in the field of AR and robotics research, offering insights into the recent state of the art and prospects for improvement.

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the center of focus in most devices remains on improving end-effector dexterity and precision, as well as improved access to minimally invasive surgeries. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions for increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We attempt to outline the shortcomings in current optimization algorithms for surgical robots (such as YOLO and LSTM) whilst providing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.
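
    The review above discusses tool-to-organ collision detection only at a high level. As a rough, generic illustration (not the method of any platform it surveys), the sketch below flags when a tracked tool tip comes within a safety margin of an organ surface modelled as a point cloud; the function name, margin, and synthetic sphere data are assumptions.

```python
# Illustrative sketch only: a simple proximity check between a tracked tool tip
# and an organ surface represented as a point cloud. This nearest-neighbour
# approach is an assumption for demonstration, not the method of any reviewed system.
import numpy as np
from scipy.spatial import cKDTree

def tool_organ_clearance(tool_tip, organ_surface_points, safety_margin_mm=5.0):
    """Return (distance_mm, collision_flag) for a tool tip against an organ surface."""
    tree = cKDTree(organ_surface_points)      # pre-build once in practice; rebuilt here for brevity
    distance_mm, _ = tree.query(tool_tip)     # distance to the nearest surface point
    return distance_mm, distance_mm < safety_margin_mm

# Example: synthetic spherical "organ" of radius 30 mm centred at the origin.
rng = np.random.default_rng(0)
directions = rng.normal(size=(2000, 3))
surface = 30.0 * directions / np.linalg.norm(directions, axis=1, keepdims=True)

print(tool_organ_clearance(np.array([0.0, 0.0, 33.0]), surface))   # ~3 mm -> collision warning
```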

    Mixed reality visualization in shoulder arthroplasty: is it better than traditional preoperative planning software?

    Background: Preoperative traditional software planning (TSP) is a method used to assist surgeons with implant selection and glenoid guide-pin insertion in shoulder arthroplasty. Mixed reality (MR) is a new technology that uses digital holograms of the preoperative plan and guide-pin trajectory projected into the operative field. The purpose of this study was to compare TSP to MR in a simulated surgical environment involving insertion of guide-pins into models of severely deformed glenoids. Methods: Eight surgeons inserted guide-pins into eight randomized three-dimensional-printed severely eroded glenoid models in a simulated surgical environment using either TSP or MR. In total, 128 glenoid models were used and statistically compared. The outcomes compared between techniques included procedural time, difference in guide-pin start point, difference in version and inclination, and surgeon confidence via a confidence rating scale. Results: When comparing traditional preoperative software planning to MR visualization as techniques to assist surgeons in glenoid guide-pin insertion, there were no statistically significant differences in terms of mean procedure time (P=0.634), glenoid start point (TSP=2.2±0.2 mm, MR=2.1±0.1 mm; P=0.760), guide-pin orientation (P=0.586), or confidence rating score (P=0.850). Conclusions: The results demonstrate that there were no significant differences between traditional preoperative software planning and MR visualization for guide-pin placement into models of eroded glenoids. A perceived benefit of MR is the real-time intraoperative visibility of the surgical plan and the patient’s anatomy; however, this did not translate into decreased procedural time or improved guide-pin position. Level of evidence: Basic science study, biomechanics.
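
    As a purely illustrative aside, the kind of between-technique comparison reported above (mean start-point error with a P value) could be run as in the sketch below. The arrays are synthetic placeholders drawn around the reported means and standard deviations, the 64-pins-per-technique split is an assumption, and Welch's t-test stands in for whatever test the authors actually used.

```python
# Illustrative only: comparing guide-pin start-point error between two techniques.
# The data below are synthetic placeholders, not the study's measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
tsp_error_mm = rng.normal(loc=2.2, scale=0.2, size=64)   # assumed: 64 pins per technique
mr_error_mm  = rng.normal(loc=2.1, scale=0.1, size=64)

t_stat, p_value = stats.ttest_ind(tsp_error_mm, mr_error_mm, equal_var=False)  # Welch's t-test
print(f"TSP {tsp_error_mm.mean():.2f} mm vs MR {mr_error_mm.mean():.2f} mm, p = {p_value:.3f}")
```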

    The HoloLens in Medicine: A Systematic Review and Taxonomy

    The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically see-through augmented reality display, is the main player in the recent boost in medical augmented reality research. In medical settings, the HoloLens enables the physician to obtain immediate insight into patient information, directly overlaid with their view of the clinical scenario, the medical student to gain a better understanding of complex anatomies or procedures, and even the patient to execute therapeutic tasks with improved, immersive guidance. In this systematic review, we provide a comprehensive overview of the usage of the first-generation HoloLens within the medical domain, from its release in March 2016 until 2021, when attention shifted towards its successor, the HoloLens 2. We identified 171 relevant publications through a systematic search of the PubMed and Scopus databases. We analyze these publications with regard to their intended use case, technical methodology for registration and tracking, data sources, and visualization, as well as validation and evaluation. We find that, although the feasibility of using the HoloLens in various medical scenarios has been shown, increased efforts in the areas of precision, reliability, usability, workflow, and perception are necessary to establish AR in clinical practice.
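
    Many of the registration approaches surveyed above reduce, at some stage, to aligning a set of points measured by the headset with the same points in image or patient space. The sketch below shows a generic SVD-based rigid registration (Kabsch/Horn style); it illustrates that common step only, is not code from any reviewed system, and the fiducial coordinates are invented.

```python
# Minimal sketch of rigid point-based registration, the kind of step used to
# align holograms with patient anatomy. Generic illustration; not from any reviewed paper.
import numpy as np

def rigid_register(source_pts, target_pts):
    """Least-squares rotation R and translation t mapping source_pts onto target_pts."""
    src_c = source_pts.mean(axis=0)
    tgt_c = target_pts.mean(axis=0)
    H = (source_pts - src_c).T @ (target_pts - tgt_c)              # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])    # guard against reflections
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Usage: fiducials defined in CT space vs. the same fiducials located by the headset.
ct_fiducials = np.array([[0, 0, 0], [50, 0, 0], [0, 40, 0], [0, 0, 30]], float)
theta = np.deg2rad(20)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
headset_fiducials = ct_fiducials @ R_true.T + np.array([10.0, -5.0, 2.0])
R, t = rigid_register(ct_fiducials, headset_fiducials)
print(np.allclose(ct_fiducials @ R.T + t, headset_fiducials))      # True
```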

    Augmented reality for computer assisted orthopaedic surgery

    In recent years, computer assistance and robotics have established their presence in operating theatres and found success in orthopaedic procedures. Benefits of computer-assisted orthopaedic surgery (CAOS) have been thoroughly explored in research, finding improvements in clinical outcomes through increased control and precision over surgical actions. However, human-computer interaction in CAOS remains an evolving field, through emerging display technologies including augmented reality (AR) – a fused view of the real environment with virtual, computer-generated holograms. Interactions between clinicians and patient-specific data generated during CAOS are limited to basic 2D interactions on touchscreen monitors, potentially creating clutter and cognitive challenges in surgery. Work described in this thesis sought to explore the benefits of AR in CAOS through: an integration between commercially available AR and CAOS systems, creating a novel AR-centric surgical workflow to support various tasks of computer-assisted knee arthroplasty, and three pre-clinical studies exploring the impact of the new AR workflow on both existing and newly proposed quantitative and qualitative performance metrics. Early research focused on cloning the (2D) user interface of an existing CAOS system onto a virtual AR screen and investigating any resulting impacts on usability and performance. An infrared-based registration system is also presented, describing a protocol for calibrating commercial AR headsets with optical trackers and calculating a spatial transformation between surgical and holographic coordinate frames. The main contribution of this thesis is a novel AR workflow designed to support computer-assisted patellofemoral arthroplasty. The reported workflow provided 3D in-situ holographic guidance for CAOS tasks including patient registration, pre-operative planning, and assisted cutting. Pre-clinical experimental validation of these contributions on a commercial system (NAVIO®, Smith & Nephew) demonstrates encouraging early-stage results, showing successful deployment of AR to CAOS systems and promising indications that AR can enhance the clinician’s interactions in the future. The thesis concludes with a summary of achievements, corresponding limitations, and future research opportunities.
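
    The calibration protocol described above ultimately yields a spatial transformation between the surgical (optical-tracker) and holographic (headset) coordinate frames. A minimal sketch of how such a transform can be composed and applied is shown below, assuming both devices can observe a shared reference marker; the frame names and numeric values are placeholders, not values from the thesis.

```python
# Illustrative sketch of chaining homogeneous transforms to express a point measured
# by an optical tracker in the holographic (headset) frame. All poses are placeholders.
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Assumed calibration outputs: the headset's pose of a shared reference marker, and
# the optical tracker's pose of that same marker (e.g. an infrared-tracked rigid body).
T_holo_from_marker    = make_transform(np.eye(3), [0.10, 0.00, 0.50])   # metres
T_tracker_from_marker = make_transform(np.eye(3), [0.00, 0.20, 1.00])

# Surgical (tracker) -> holographic frame: go tracker -> marker -> headset.
T_holo_from_tracker = T_holo_from_marker @ np.linalg.inv(T_tracker_from_marker)

tool_tip_tracker = np.array([0.05, 0.25, 1.10, 1.0])   # homogeneous point in tracker frame
tool_tip_holo = T_holo_from_tracker @ tool_tip_tracker
print(tool_tip_holo[:3])
```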

    Fluorescence-guided surgical system using holographic display: From phantom studies to canine patients

    SIGNIFICANCE: Holographic display technology is a promising area of research that can lead to significant advancements in cancer surgery. We present the benefits of combining bioinspired multispectral imaging technology with holographic goggles for fluorescence-guided cancer surgery. Through a series of experiments with 3D-printed phantoms, small animal models of cancer, and surgeries on canine patients with head and neck cancer, we showcase the advantages of this holistic approach. AIM: The aim of our study is to demonstrate the feasibility and potential benefits of utilizing a holographic display for fluorescence-guided surgery through a series of experiments involving 3D-printed phantoms and canine patients with head and neck cancer. APPROACH: We explore the integration of a bioinspired camera with a mixed reality headset to project fluorescent images as holograms onto a see-through display, and we demonstrate the potential benefits of this technology through benchtop and animal studies. RESULTS: Our complete imaging and holographic display system showcased improved delineation of fluorescent targets in phantoms compared with the 2D monitor display approach and easy integration into the veterinary surgical workflow. CONCLUSIONS: Based on our findings, it is evident that our comprehensive approach, which combines a bioinspired multispectral imaging sensor with holographic goggles, holds promise in enhancing the presentation of fluorescent information to surgeons during intraoperative scenarios while minimizing disruptions.
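
    As a simplified illustration of the display side of such a system, the sketch below thresholds a normalised near-infrared fluorescence channel and blends a tint over the corresponding colour image. The actual bioinspired sensor pipeline and holographic rendering described above are considerably more sophisticated, and every name and value here is an assumption.

```python
# Minimal sketch: threshold a NIR fluorescence channel and blend a tint over the
# colour frame. Generic illustration only; not the paper's imaging pipeline.
import numpy as np

def fluorescence_overlay(rgb, nir, threshold=0.3, tint=(0.0, 1.0, 0.0), alpha=0.6):
    """Blend a tint over pixels whose normalised NIR signal exceeds the threshold."""
    nir_norm = (nir - nir.min()) / (np.ptp(nir) + 1e-9)
    mask = nir_norm > threshold
    out = rgb.astype(float).copy()
    out[mask] = (1 - alpha) * out[mask] + alpha * np.array(tint) * 255.0
    return out.astype(np.uint8), mask

# Synthetic frame: a faint fluorescent "lesion" in the centre of a grey image.
rgb = np.full((240, 320, 3), 90, dtype=np.uint8)
nir = np.zeros((240, 320))
nir[100:140, 140:180] = 1.0
overlay, mask = fluorescence_overlay(rgb, nir)
print(mask.sum(), overlay.shape)
```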

    Acceptance and Continuance Usage Intention of Mixed Reality for Australian Healthcare Interprofessional Education

    Virtual reality and augmented reality are becoming innovative teaching and learning approaches across many industries, including healthcare, especially during the COVID-19 pandemic. However, the adoption rate of this technology is very low, especially in Australian healthcare interprofessional education (IPE). This study investigates factors influencing the adoption and use of mixed reality (MR) technology for Australian healthcare IPE. In this study, a theoretical model based on the Expectation-Confirmation Model and Task-Technology Fit is developed and will be tested to determine Australian healthcare professionals’ intentions to continue using MR for IPE, through three validated surveys using a voluntary non-probability sampling strategy over a 10-week period, targeting 124 healthcare professionals at the Tweed Hospital, NSW, Australia. The research outcome will assist in determining the validity of the proposed hybrid model in the context of MR healthcare training and may inform a more suitable theoretical framework and future characteristics of MR for healthcare training.