44 research outputs found

    A Microsoft HoloLens Mixed Reality Surgical Simulator for Patient-Specific Hip Arthroplasty Training

    Surgical simulation can offer novice surgeons an opportunity to practice skills outside the operating theatre in a safe, controlled environment. According to the literature, very few training simulators are currently available for Hip Arthroplasty (HA). In a previous study we presented a physical simulator based on a lower-torso phantom including a patient-specific hemi-pelvis replica embedded in a soft synthetic foam. This work explores the use of Microsoft HoloLens technology to enrich the physical patient-specific simulation with wearable mixed reality functionalities. Our HoloLens-based multimodal HA simulator is described by illustrating the overall system and summarizing the main phases of its design and development. Finally, we present a preliminary qualitative study with seven subjects (5 medical students and 2 orthopedic surgeons) showing encouraging results that suggest the suitability of the HoloLens for the proposed application. However, further studies are needed to quantitatively test the registration accuracy of the virtual content and to confirm the qualitative results in a larger cohort of subjects.

    How to Build a Patient-Specific Hybrid Simulator for Orthopaedic Open Surgery: Benefits and Limits of Mixed-Reality Using the Microsoft HoloLens

    Orthopaedic simulators are popular in innovative surgical training programs, where trainees gain procedural experience in a safe and controlled environment. Recent studies suggest that an ideal simulator should combine haptic, visual, and audio technology to create an immersive training environment. This article explores the potential of mixed reality using the HoloLens to develop a hybrid training system for orthopaedic open surgery. Hip arthroplasty, one of the most common orthopaedic procedures, was chosen as a benchmark to evaluate the proposed system. Patient-specific anatomical 3D models were extracted from a patient's computed tomography scan to implement the virtual content and to fabricate the physical components of the simulator. Rapid prototyping was used to create synthetic bones. The Vuforia SDK was used to register the virtual and physical contents. The Unity3D game engine was employed to develop the software, allowing interaction with the virtual content through head movements, gestures, and voice commands. Quantitative tests estimated the accuracy of the system by evaluating the perceived position of augmented reality targets; mean and maximum errors matched the requirements of the target application. Qualitative tests evaluated the workload and usability of the HoloLens for our orthopaedic simulator, considering visual and audio perception, interaction, and ergonomics. The perceived overall workload was low, and the self-assessed performance was considered satisfactory. Visual and audio perception and gesture and voice interactions received positive feedback. Postural discomfort and visual fatigue received a non-negative evaluation for a simulation session of 40 minutes. These results encourage the use of mixed reality to implement a hybrid simulator for orthopaedic open surgery. An optimal design of the simulation tasks and equipment setup is required to minimize user discomfort. Future work will include Face Validity, Content Validity, and Construct Validity studies to complete the assessment of the hip arthroplasty simulator.
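    The quantitative accuracy test described above reduces to comparing the perceived positions of AR targets against their ground-truth positions and reporting the mean and maximum error. A minimal sketch of such a computation in Python (the function name and all measurements are hypothetical, not taken from the article):

    ```python
    import numpy as np

    def registration_errors(true_pts, perceived_pts):
        """Per-target Euclidean error between true and perceived AR target
        positions (e.g. in mm); returns (mean error, max error)."""
        true_pts = np.asarray(true_pts, dtype=float)
        perceived_pts = np.asarray(perceived_pts, dtype=float)
        errors = np.linalg.norm(perceived_pts - true_pts, axis=1)
        return errors.mean(), errors.max()

    # Hypothetical measurements (mm) for three targets
    true = [[0, 0, 0], [10, 0, 0], [0, 10, 0]]
    perceived = [[1, 0, 0], [10, 2, 0], [0, 10, 2]]
    mean_err, max_err = registration_errors(true, perceived)
    ```

    In practice the "perceived" positions would come from where users point at or annotate the holographic targets in the tracking space.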

    Augmented Reality: Mapping Methods and Tools for Enhancing the Human Role in Healthcare HMI

    Background: Augmented Reality (AR) is an innovative technology for improving data visualization and strengthening human perception. Among Human–Machine Interaction (HMI) applications, medicine can benefit most from the adoption of these digital technologies. In this perspective, the literature on AR-based orthopedic surgery techniques was evaluated, focusing on identifying the limitations and challenges of AR-based healthcare applications, to support research and the development of further studies. Methods: Studies published from January 2018 to December 2021 were analyzed after a comprehensive search of the PubMed, Google Scholar, Scopus, IEEE Xplore, Science Direct, and Wiley Online Library databases. To improve the review reporting, the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines were followed. Results: The authors selected sixty-two articles meeting the inclusion criteria, which were categorized according to the purpose of the study (intraoperative, training, rehabilitation) and the surgical procedure involved. Conclusions: AR has the potential to improve orthopedic training and practice by providing an increasingly human-centered clinical approach. This review points further research toward problems related to hardware limitations, the lack of accurate registration and tracking systems, and the absence of security protocols.

    Mixed Reality-Based Simulator for Training on Imageless Navigation Skills in Total Hip Replacement Procedures

    Imageless navigation systems (INS) in orthopaedics have been used to improve the outcomes of several orthopaedic procedures such as total hip replacement [1, 2]. However, increased surgical times and the associated learning curve discourage surgeons from using navigation systems in their theatres [2]. This paper presents a Mixed Reality (MR) simulator that helps surgeons acquire infrared-based navigation skills before applying them in reality. A group of 7 hip surgeons tried the application; they expressed their satisfaction with all of its features and confirmed that the simulator represents a cheaper and faster option for training surgeons in the use of INS than current learning methods.

    Augmented reality for computer assisted orthopaedic surgery

    In recent years, computer assistance and robotics have established their presence in operating theatres and found success in orthopaedic procedures. The benefits of computer-assisted orthopaedic surgery (CAOS) have been thoroughly explored in research, finding improvements in clinical outcomes through increased control and precision over surgical actions. However, human-computer interaction in CAOS remains an evolving field, driven by emerging display technologies including augmented reality (AR), a fused view of the real environment with virtual, computer-generated holograms. Interactions between clinicians and the patient-specific data generated during CAOS are limited to basic 2D interactions on touchscreen monitors, potentially creating clutter and cognitive challenges in surgery. The work described in this thesis sought to explore the benefits of AR in CAOS through: an integration between commercially available AR and CAOS systems; a novel AR-centric surgical workflow supporting various tasks of computer-assisted knee arthroplasty; and three pre-clinical studies exploring the impact of the new AR workflow on both existing and newly proposed quantitative and qualitative performance metrics. Early research focused on cloning the (2D) user interface of an existing CAOS system onto a virtual AR screen and investigating any resulting impacts on usability and performance. An infrared-based registration system is also presented, describing a protocol for calibrating commercial AR headsets with optical trackers and calculating a spatial transformation between surgical and holographic coordinate frames. The main contribution of this thesis is a novel AR workflow designed to support computer-assisted patellofemoral arthroplasty. The reported workflow provides 3D in-situ holographic guidance for CAOS tasks including patient registration, pre-operative planning, and assisted cutting.
    Pre-clinical experimental validation on a commercial system (NAVIO®, Smith & Nephew) demonstrates encouraging early-stage results, showing successful deployment of AR to CAOS systems and promising indications that AR can enhance the clinician's interactions in the future. The thesis concludes with a summary of achievements, corresponding limitations, and future research opportunities.
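    Computing a spatial transformation between surgical (optical-tracker) and holographic coordinate frames, as mentioned above, is typically posed as rigid registration between corresponding fiducial points. A minimal sketch of the standard SVD-based (Kabsch) least-squares solution in Python; the fiducial coordinates are invented for illustration, and the thesis's actual calibration protocol may differ:

    ```python
    import numpy as np

    def rigid_transform(src, dst):
        """Least-squares rigid transform (R, t) mapping src points onto dst,
        via the Kabsch/SVD method: dst_i ~= R @ src_i + t."""
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        H = (src - cs).T @ (dst - cd)            # cross-covariance of centered sets
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cd - R @ cs
        return R, t

    # Hypothetical fiducials: tracker frame vs. holographic frame
    src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
    theta = np.pi / 2
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0,              0,             1]])
    dst = src @ R_true.T + np.array([5.0, 2.0, 1.0])
    R, t = rigid_transform(src, dst)
    ```

    With noisy real measurements the same solver gives the best-fit transform in the least-squares sense; residual fiducial error then serves as a calibration-quality metric.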

    Machine learning and interactive real-time simulation for training on relevant total hip replacement skills.

    Virtual Reality simulators have proven to be an excellent tool in the medical sector, helping trainees master surgical skills by providing unlimited training opportunities. Total Hip Replacement (THR) is a procedure that can benefit significantly from VR/AR training, given its non-reversible nature. Of all the steps required while performing a THR, doctors agree that a correct fitting of the acetabular component of the implant has the highest relevance for ensuring successful outcomes. Acetabular reaming is the step during which the acetabulum is resurfaced and prepared to receive the acetabular implant; the success of this step is directly related to the success of fitting the acetabular component. Therefore, this thesis focuses on developing digital tools to assist the training of acetabular reaming. Devices such as navigation systems and robotic arms have proven to improve the final accuracy of the procedure. However, surgeons must learn to adapt their instrument movements to be recognised by infrared cameras. When surgeons are initially introduced to these systems, surgical times can be extended by up to 20 minutes, maximising surgical risks. Training opportunities are sparse, given the high investment required to purchase these devices. As a cheaper alternative, we developed an Augmented Reality (AR) application for training on the calibration of imageless navigation systems (INS). At the time, there were no other simulators using head-mounted displays to train users in the steps required to calibrate such systems. Our simulator replicates the presence of an infrared camera and its interaction with the reflective markers located on the surgical tools. A group of 6 hip surgeons was invited to test the simulator; all of them expressed their satisfaction with the ease of use and attractiveness of the simulator, as well as the similarity of its interaction to the real procedure.
    The study confirmed that our simulator represents a cheaper and faster option for training multiple surgeons simultaneously in the use of INS than learning exclusively in the surgical theatre. Current reviews of simulators for orthopaedic surgical procedures lack objective metrics of assessment against a standard set of design requirements; instead, most rely exclusively on the level of interaction and functionality provided. We propose a comparative assessment rubric based on three evaluation criteria: immersion, interaction fidelity, and applied learning theories. After our assessment, we found that none of the simulators available for THR provides an accurate interactive representation of resurfacing procedures such as acetabular reaming based on the force inputs exerted by the user. This feature is indispensable for an orthopaedics simulator, given that hand-eye coordination skills are essential skills to be trained before performing non-reversible bone removal on real patients. Based on the findings of our comparative assessment, we decided to develop a model to simulate the physically-based deformation expected during traditional acetabular reaming, given the user's interaction with a volumetric mesh. Current interactive deformation methods on high-resolution meshes are based on geometric collision detection and do not consider the contribution of the material's physical properties. By ignoring the effects of material mechanics and the force exerted by the user, they are inadequate for training hand-eye coordination skills transferable to the surgical theatre. Volumetric meshes are preferred to surface meshes in surgical simulation, given that they can represent the internal evolution of deformable solids resulting from cutting and shearing operations.
    Existing numerical methods for representing linear and corotational FEM cuts can only maintain interactive framerates at low mesh resolutions. Therefore, we trained a machine-learning model to learn the continuum-mechanics laws relevant to acetabular reaming and predict deformations at interactive framerates. To the best of our knowledge, no previous research has trained a machine learning model on non-elastic FEM data to achieve results at interactive framerates. As training data, we used the results of XFEM simulations precomputed over 5000 frames for plastic deformations on tetrahedral meshes with 20,406 elements each. We selected XFEM simulation as the physically-based deformation ground truth given its accuracy and fast convergence in representing cuts, discontinuities, and large strain rates. Our interactive machine learning model was built from Graph Neural Network (GNN) blocks. GNNs were selected to learn on tetrahedral meshes because other supervised-learning architectures, such as the multilayer perceptron (MLP) and convolutional neural networks (CNNs), are unable to learn relationships between entities with an arbitrary number of neighbours. The learned simulator identifies the elements to be removed in each frame and describes the accumulated stress evolution in the whole machined piece. Using data generated from XFEM results allowed us to embed the effects of non-linearities in our interactive simulations without extra processing time. The trained model executed the prediction task on our tetrahedral mesh with unseen reamer orientations faster per frame than the time required to generate the training FEM dataset. Given an unseen orientation of the reamer, the trained GNN model updates the value of accumulated stress on each of the 20,406 tetrahedral elements that constitute our mesh.
    Once this value is updated, the tetrahedra to be removed from the mesh are identified using a threshold condition. Feeding each single-frame output back as the input for the following prediction for up to 60 iterations, our model maintains an accuracy of up to 90.8% in identifying the status of each element given its value of accumulated stress. Finally, we demonstrate how the developed estimator can easily be connected to any game engine and included in the development of a fully functional hip arthroplasty simulator.
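    The threshold-based element-removal step and the per-element accuracy metric described above can be illustrated with a short sketch; the stress values, threshold, and function names below are hypothetical, not taken from the thesis:

    ```python
    import numpy as np

    def removal_mask(accumulated_stress, threshold):
        """Boolean mask of tetrahedral elements whose accumulated stress
        meets or exceeds the removal threshold."""
        return np.asarray(accumulated_stress, dtype=float) >= threshold

    def element_status_accuracy(pred_stress, true_removed, threshold):
        """Fraction of elements whose removed/kept status is predicted
        correctly against the ground-truth (e.g. XFEM) labels."""
        pred_removed = removal_mask(pred_stress, threshold)
        return float(np.mean(pred_removed == np.asarray(true_removed)))

    # Hypothetical predicted stresses for 5 elements, threshold of 1.0
    pred = [0.2, 1.3, 0.9, 1.1, 0.4]
    truth = [False, True, True, True, False]
    acc = element_status_accuracy(pred, truth, 1.0)
    ```

    In an iterative rollout, the mask from each frame would prune the mesh before the next prediction, so per-element errors can accumulate over the 60 iterations mentioned above.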

    Hybrid Simulation and Planning Platform for Cryosurgery with Microsoft HoloLens

    Cryosurgery is a technique of growing popularity involving tissue ablation under controlled freezing. Technological advancement of devices, along with improvements in surgical technique, has turned cryosurgery from an experimental into an established option for treating several diseases. However, cryosurgery is still limited by inaccurate planning based primarily on 2D visualization of the patient's preoperative images. Several works have aimed to model cryoablation through heat-transfer simulations; however, most software applications do not meet key requirements for clinical routine use, such as high computational speed and user-friendliness. This work aims to develop an intuitive platform for anatomical understanding and preoperative planning by integrating the information content of radiological images and cryoprobe specifications either in a 3D virtual environment (desktop application) or in a hybrid simulator, which exploits the potential of 3D printing and the augmented reality functionalities of the Microsoft HoloLens. The proposed platform was preliminarily validated for the retrospective planning/simulation of two surgical cases. Results suggest that the platform is easy and quick to learn and could be used in clinical practice to improve anatomical understanding, to make surgical planning easier than the traditional method, and to strengthen the memorization of the surgical plan.
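    Cryoablation planning rests on heat-transfer simulation of the tissue around the cryoprobe. As a toy illustration of the kind of computation involved (not the platform's actual solver), here is a single explicit finite-difference diffusion step in 1D, with hypothetical grid and material parameters:

    ```python
    import numpy as np

    def heat_step(T, alpha, dx, dt):
        """One explicit finite-difference step of 1D heat diffusion.
        Stable when alpha*dt/dx**2 <= 0.5; boundary nodes are held fixed."""
        T = np.asarray(T, dtype=float)
        T_new = T.copy()
        T_new[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        return T_new

    # Hypothetical 10 mm tissue strip at 37 C with a cryoprobe node fixed at -40 C;
    # alpha ~1.4e-7 m^2/s is a typical thermal diffusivity for soft tissue.
    T = np.full(11, 37.0)
    T[0] = -40.0
    for _ in range(100):  # simulate 100 s at dt = 1 s
        T = heat_step(T, alpha=1.4e-7, dx=1e-3, dt=1.0)
    ```

    Clinical simulators extend this idea to 3D bioheat models (perfusion, phase change) and, as the abstract notes, must do so fast enough for routine use.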

    Evaluation techniques used to evaluate extended reality (XR) head mounted displays (HMDs) used in healthcare: A literature review

    Extended Reality (XR) Head Mounted Displays (HMDs) are used across various healthcare pathways for staff/student education and training, and for improving patient experiences. As XR HMDs become more affordable and accessible and their acceptance increases, it is critical to document the techniques used for evaluating the technology, the processes of user engagement and immersion, and the outcomes. At present there is limited research on the techniques used to evaluate XR HMDs. This manuscript presents findings from 104 clinical studies that use XR HMDs. The aim of this review is to give the reader an insight into the current healthcare XR HMD landscape by presenting the different HMDs used, the variety of XR interventions and their applications across medical pathways, and the intended research outcomes of the XR applications. The manuscript further guides the reader through a detailed documentation of the evaluation techniques used to investigate the antecedents and consequences of using XR, and delivers a critical discussion and suggestions for improving XR evaluation practices. This paper will be of particular use to clinicians, academics, funding bodies, and hospital decision makers who would like guidance on evaluating the efficacy and effectiveness of XR HMDs. The authors hope to encourage discussion on the importance of improving XR evaluation practices.

    Augmented and virtual reality in spine surgery, current applications and future potentials

    Get PDF
    BACKGROUND CONTEXT: The field of artificial intelligence (AI) is rapidly advancing, especially with recent improvements in deep learning (DL) techniques. Augmented reality (AR) and virtual reality (VR) are finding their place in healthcare, and spine surgery is no exception. The unique capabilities and advantages of AR and VR devices include their low cost, flexible integration with other technologies, user-friendly features, and application in navigation systems, which make them beneficial across different aspects of spine surgery. Despite the use of AR for pedicle screw placement, targeted cervical foraminotomy, bone biopsy, osteotomy planning, and percutaneous intervention, the current applications of AR and VR in spine surgery remain limited. PURPOSE: The primary goal of this study was to provide spine surgeons and clinical researchers with general information about the current applications, future potential, and accessibility of AR and VR systems in spine surgery. STUDY DESIGN/SETTING: We reviewed the titles of more than 250 journal papers from Google Scholar and PubMed with the search terms augmented reality, virtual reality, spine surgery, and orthopaedic, out of which 89 related papers were selected for abstract review. Finally, the full text of 67 papers was analyzed and reviewed. METHODS: The papers were divided into four groups: technological papers, applications in surgery, applications in spine education and training, and general applications in orthopaedics. A team of two reviewers performed the paper reviews and a thorough web search to ensure that the most updated state of the art in each of the four groups was captured in the review. RESULTS: In this review we discuss the current state of the art in AR and VR hardware, their preoperative applications, and their surgical applications in spine surgery. Finally, we discuss the future potential of AR and VR and their integration with AI, robotic surgery, gaming, and wearables.
    CONCLUSIONS: AR and VR are promising technologies that will soon become part of the standard of care in spine surgery. © 2021 Published by Elsevier Inc.