    Translation of Medical AR Research into Clinical Practice

    Translational research aims to turn discoveries from basic science into results that advance patient treatment. The translation of technical solutions into clinical use is a complex, iterative process that involves different stages of design, development, and validation, including the identification of unmet clinical needs, technical conception, development, verification and validation, regulatory matters, and ethics. For this reason, many promising technical developments at the interface of technology, informatics, and medicine remain research prototypes that never find their way into clinical practice. Augmented reality (AR) is a technology that is now making its breakthrough into patient care, even though it has been available for decades. In this work, we explain the translational process for medical AR devices and present the associated challenges and opportunities. To the best of the authors' knowledge, this concept paper is the first to present a guideline for the translation of medical AR research into clinical practice.

    Marker-free surgical navigation of rod bending using a stereo neural network and augmented reality in spinal fusion

    The instrumentation of spinal fusion surgeries includes pedicle screw placement and rod implantation. While several surgical navigation approaches have been proposed for pedicle screw placement, less attention has been devoted to the guidance of patient-specific adaptation of the rod implant. We propose a marker-free and intuitive augmented reality (AR) approach to navigate the bending process required for rod implantation. A stereo neural network is trained end-to-end on the stereo video streams of the Microsoft HoloLens to determine the locations of corresponding pedicle screw heads. From the digitized screw head positions, the optimal rod shape is calculated, translated into a set of bending parameters, and used to guide the surgeon with a novel navigation approach. In the AR-based navigation, the surgeon is guided step by step in the use of the surgical tools to achieve an optimal result. We evaluated the performance of our method on human cadavers against two benchmark methods, conventional freehand bending and marker-based bending navigation, in terms of bending time and rebending maneuvers. We achieved an average bending time of 231 s with 0.6 rebending maneuvers per rod, compared to 476 s (3.5 rebendings) and 348 s (1.1 rebendings) for the freehand and marker-based benchmarks, respectively.
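    As an illustration of the geometry behind such guidance, a bent rod passing through the digitized screw heads can be modeled as a polyline, with the bend required at each interior screw given by the angle between adjacent segments. This is a minimal sketch of that idea only, not the authors' actual shape-optimization algorithm; the `bend_angles` helper is our own.

```python
import math

def bend_angles(screw_heads):
    """Bend angle (degrees) at each interior screw head, modeling the
    rod as a polyline through the digitized head positions."""
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    angles = []
    for p0, p1, p2 in zip(screw_heads, screw_heads[1:], screw_heads[2:]):
        u, v = sub(p1, p0), sub(p2, p1)
        c = dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))
        angles.append(math.degrees(math.acos(max(-1.0, min(1.0, c)))))
    return angles
```

    For screw heads at (0, 0, 0), (40, 0, 0), and (80, 10, 0) mm, the single interior bend evaluates to roughly 14°.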

    Advanced cranial navigation

    Neurosurgery is performed with extremely low margins of error. Surgical inaccuracy may have disastrous consequences. The overall aim of this thesis was to improve accuracy in cranial neurosurgical procedures by applying new technical aids. Two technical methods were evaluated: augmented reality (AR) for surgical navigation (Papers I-II) and the optical technique of diffuse reflectance spectroscopy (DRS) for real-time tissue identification (Papers III-V). Minimally invasive skull-base endoscopy has several potential benefits compared to traditional craniotomy, but approaching the skull base through this route implies that at-risk organs and surgical targets are covered by bone and out of the surgeon's direct line of sight. In Paper I, a new application for AR-navigated endoscopic skull-base surgery, based on an augmented-reality surgical navigation (ARSN) system, was developed. The accuracy of the system, defined by the mean target registration error (TRE), was evaluated and found to be 0.55±0.24 mm, the lowest error reported in the literature. As a first step toward the development of a cranial application for AR navigation, in Paper II this ARSN system was used to enable insertions of biopsy needles and external ventricular drains (EVDs). The technical accuracy (i.e., deviation from the target or intended path) and efficacy (i.e., insertion time) were assessed on a 3D-printed, realistic, anthropomorphic skull and brain phantom. Thirty cranial biopsies and 10 EVD insertions were performed. Accuracy for biopsy was 0.8±0.43 mm with a median insertion time of 149 (87-233) seconds; for EVD insertion, accuracy was 2.9±0.8 mm at the tip, with a median angular deviation of 0.7±0.5° and a median insertion time of 188 (135-400) seconds. Glial tumors grow diffusely in the brain, and patient survival is correlated with the extent of tumor removal. Tumor borders are often invisible. Resection beyond the borders defined by conventional methods may further improve a patient's prognosis. In Paper III, DRS was evaluated for discrimination between glioma and normal brain tissue ex vivo. DRS spectra and histology were acquired from 22 tumor samples and 9 brain tissue samples retrieved from 30 patients. Sensitivity and specificity for the detection of low-grade gliomas were 82.0% and 82.7%, respectively, with an AUC of 0.91. Acute ischemic stroke caused by large vessel occlusion is treated with endovascular thrombectomy, but treatment failure can occur when clot composition and thrombectomy technique are mismatched. Intra-procedural knowledge of clot composition could guide the choice of treatment modality. In Paper IV, DRS was evaluated in vivo for intravascular clot characterization. Three types of clot analogs, red blood cell (RBC)-rich, fibrin-rich, and mixed clots, were injected into the external carotids of a domestic pig. An intravascular DRS probe was used for in-situ measurements of clots, blood, and vessel walls, and the spectral data were analyzed. DRS could differentiate clot types, vessel walls, and blood in vivo (p<0.001). The sensitivity and specificity for detection were 73.8% and 98.8% for RBC-rich clots, 100% and 100% for mixed clots, and 80.6% and 97.8% for fibrin-rich clots, respectively. Paper V evaluated DRS for characterization of human clot composition ex vivo: 45 clot units were retrieved from 29 stroke patients and examined with DRS and histopathological evaluation. DRS parameters correlated with the clot RBC fraction (R=0.81, p<0.001) and could be used for classification of clot type, with sensitivity and specificity for the detection of RBC-rich clots of 0.722 and 0.846, respectively. Applied in an intravascular probe, DRS may provide intra-procedural information on clot composition to improve endovascular thrombectomy efficiency.
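    Per-class sensitivity and specificity figures like those reported in Papers III-V can be computed one-versus-rest from labeled predictions; a minimal sketch (the function name and toy labels are ours, not the thesis code):

```python
def sensitivity_specificity(y_true, y_pred, positive):
    """One-versus-rest sensitivity and specificity for one class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: four clots, one RBC-rich clot misclassified as fibrin-rich.
truth = ["rbc", "rbc", "fibrin", "mixed"]
pred = ["rbc", "fibrin", "fibrin", "mixed"]
sens, spec = sensitivity_specificity(truth, pred, "rbc")  # 0.5, 1.0
```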

    Augmented Reality: Mapping Methods and Tools for Enhancing the Human Role in Healthcare HMI

    Background: Augmented reality (AR) is an innovative technology for improving data visualization and strengthening human perception. Among human–machine interaction (HMI) domains, medicine can benefit most from the adoption of these digital technologies. In this perspective, the literature on AR-based orthopedic surgery techniques was evaluated, focusing on identifying the limitations and challenges of AR-based healthcare applications, to support the research and development of further studies. Methods: Studies published from January 2018 to December 2021 were analyzed after a comprehensive search of the PubMed, Google Scholar, Scopus, IEEE Xplore, Science Direct, and Wiley Online Library databases. To improve the review reporting, the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines were used. Results: The authors selected sixty-two articles meeting the inclusion criteria, which were categorized according to the purpose of the study (intraoperative, training, rehabilitation) and according to the surgical procedure used. Conclusions: AR has the potential to improve orthopedic training and practice by providing an increasingly human-centered clinical approach. Further research guided by this review can address problems related to hardware limitations, the lack of accurate registration and tracking systems, and the absence of security protocols.

    Augmented Reality Visualization for Image-Guided Surgery: A Validation Study Using a Three-Dimensional Printed Phantom

    Background Oral and maxillofacial surgery currently relies on virtual surgery planning based on image data (CT, MRI). Three-dimensional (3D) visualizations are typically used to plan and predict the outcome of complex surgical procedures. To translate the virtual surgical plan to the operating room, it is either converted into physical 3D-printed guides or directly translated using real-time navigation systems. Purpose This study aims to improve the translation of the virtual surgery plan to a surgical procedure, such as oncologic or trauma surgery, in terms of accuracy and speed. Here we report an augmented reality visualization technique for image-guided surgery. It describes how surgeons can visualize and interact with the virtual surgery plan and navigation data while in the operating room. User friendliness and usability were assessed in a formal user study that compared our augmented reality assisted technique to the gold-standard setup of a perioperative navigation system (Brainlab). Moreover, the accuracy of typical navigation tasks, such as reaching landmarks and following trajectories, was compared. Results Overall completion time of navigation tasks was 1.71 times faster using augmented reality (P = .034). Accuracy improved significantly using augmented reality (P < .001); for reaching physical landmarks, a less pronounced effect was found (P = .087). Although the participants were relatively unfamiliar with VR/AR (rated 2.25/5) and gesture-based interaction (rated 2/5), they reported that navigation tasks became easier to perform using augmented reality (difficulty rated 3.25/5 for Brainlab, 2.4/5 for HoloLens). Conclusion The proposed workflow can be used in a wide range of image-guided surgery procedures as an addition to existing verified image guidance systems. The results of this user study imply that our technique enables typical navigation tasks to be performed faster and more accurately than the current gold standard. In addition, qualitative feedback on our augmented reality assisted technique was more positive compared to the standard setup. (C) 2021 The Author. Published by Elsevier Inc. on behalf of The American Association of Oral and Maxillofacial Surgeons

    Real-time integration between Microsoft HoloLens 2 and 3D Slicer with demonstration in pedicle screw placement planning

    We established a direct communication channel between Microsoft HoloLens 2 and 3D Slicer to exchange transform and image messages between the platforms in real time. This allows us to seamlessly display a CT reslice of a patient in the AR world.

    Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. Research supported by projects PI122/00601 and AC20/00102 (Ministerio de Ciencia, Innovación y Universidades, Instituto de Salud Carlos III, Asociación Española Contra el Cáncer and European Regional Development Fund “Una manera de hacer Europa”), project PerPlanRT (ERA Permed), TED2021-129392B-I00 and TED2021-132200B-I00 (MCIN/AEI/10.13039/501100011033 and European Union “NextGenerationEU”/PRTR) and EU Horizon 2020 research and innovation programme Conex plus UC3M (grant agreement 801538). APC funded by Universidad Carlos III de Madrid (Read & Publish Agreement CRUE-CSIC 2023).
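    A transform channel of this kind boils down to serializing a tracked 4×4 pose matrix into a byte stream on one side and decoding it on the other (3D Slicer typically speaks the OpenIGTLink protocol for such links). The framing below is a deliberately simplified, hypothetical one that only illustrates the idea; it is not the actual wire format used by the authors.

```python
import struct

def pack_transform(device_name, matrix):
    """Pack a 4x4 pose matrix into a length-prefixed message
    (hypothetical framing, not the real OpenIGTLink format)."""
    name = device_name.encode()[:20].ljust(20, b"\0")
    body = struct.pack("!16f", *(v for row in matrix for v in row))
    return struct.pack("!H20s", len(body), name) + body

def unpack_transform(msg):
    """Inverse of pack_transform: recover device name and matrix."""
    body_len, name = struct.unpack("!H20s", msg[:22])
    vals = struct.unpack("!16f", msg[22:22 + body_len])
    matrix = [list(vals[i:i + 4]) for i in range(0, 16, 4)]
    return name.rstrip(b"\0").decode(), matrix
```

    In practice, the serialized message would be written to a TCP socket on one platform and read and decoded on the other.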

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the focus in most devices remains on improving end-effector dexterity and precision, as well as on improving access to minimally invasive surgery. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement along complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings of current optimization algorithms for surgical robots (such as YOLO and LSTM) while proposing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.

    Augmented reality-guided pelvic osteotomy of Ganz: feasibility in cadavers

    INTRODUCTION The periacetabular osteotomy (PAO) is a technically demanding procedure with the goal of improving the osseous containment of the femoral head. The options for controlled execution of the osteotomies and verification of the acetabular reorientation are limited. With the assistance of augmented reality (AR), new possibilities are emerging to guide this intervention. However, scientific knowledge regarding AR navigation for PAO is sparse. METHODS In this cadaveric study, we aimed to determine whether the execution of this complex procedure is feasible with AR guidance, to quantify the accuracy with which the three-dimensional plan is executed, and to identify what must be done to proceed to real surgery. To this end, AR guidance for the PAO was developed and applied to 14 human hip cadavers. The guidance included performance of the four osteotomies and reorientation of the acetabular fragment. The osteotomy starting points, the orientation of the osteotomy planes, and the reorientation of the acetabular fragment were compared to the 3D plan. RESULTS The mean 3D distance between planned and performed starting points was between 9 and 17 mm. The mean angle between planned and performed osteotomies was between 6° and 7°. The mean reorientation error between the planned and performed rotation of the acetabular fragment was between 2° and 11°. CONCLUSION The planned correction can be achieved with promising accuracy and without serious errors. Further steps for translation from cadavers to patients have been identified and must be addressed in future work.
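    The reorientation error reported above can be expressed as the angle of the relative rotation between the planned and performed fragment orientations. A sketch, assuming both orientations are available as 3×3 rotation matrices (the helper name is ours):

```python
import math

def rotation_error_deg(r_planned, r_performed):
    """Angle (degrees) of the relative rotation between two
    orientations given as 3x3 rotation matrices."""
    # Relative rotation R = r_performed * r_planned^T
    rel = [[sum(r_performed[i][k] * r_planned[j][k] for k in range(3))
            for j in range(3)] for i in range(3)]
    trace = rel[0][0] + rel[1][1] + rel[2][2]
    c = max(-1.0, min(1.0, (trace - 1.0) / 2.0))
    return math.degrees(math.acos(c))
```

    For identical orientations the error is 0°; for a fragment rotated 90° about one axis relative to the plan, it is 90°.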

    Augmented navigation

    Spinal fixation procedures carry an inherent risk of damage to vulnerable anatomical structures such as the spinal cord, nerve roots, and blood vessels. To prevent complications, several technological aids have been introduced. Surgical navigation is the most widely used; it guides the surgeon by providing the position of the surgical instruments and implants in relation to the patient's anatomy based on radiographic images. Navigation can be extended by the addition of a robotic arm that replaces the surgeon's hand to increase accuracy. Another line of surgical aids is tissue sensing equipment that recognizes different tissue types and provides a warning system built into surgical instruments. All these technologies are under continuous development, and the optimal solution is yet to be found. The aim of this thesis was to study the use of Augmented Reality (AR), Virtual Reality (VR), Artificial Intelligence (AI), and tissue sensing technology in spinal navigation to improve precision and prevent surgical errors. The aim of Paper I was to develop and validate an algorithm for automating the intraoperative planning of pedicle screws. An AI algorithm for automatic segmentation of the spine and screw path suggestion was developed and evaluated. In a clinical study of advanced deformity cases, the algorithm provided correct suggestions for 86% of all pedicles, or 95% when cases with extremely altered anatomy were excluded. Paper II evaluated the accuracy of pedicle screw placement using a novel augmented reality surgical navigation (ARSN) system harboring the above-developed algorithm. Twenty consecutively enrolled patients, eligible for deformity correction surgery in the thoracolumbar region, were operated on using the ARSN system. In this cohort, we found a pedicle screw placement accuracy of 94%, as measured according to the Gertzbein grading scale. The primary goal of Paper III was to validate an extension of the ARSN system for placing pedicle screws using instrument tracking and VR. In a porcine cadaver model, it was demonstrated that VR instrument tracking could successfully be integrated with the ARSN system, resulting in pedicle devices placed within 1.7 ± 1.0 mm of the planned path. Paper IV examined the feasibility of a robot-guided system for semi-automated, minimally invasive pedicle screw placement in a cadaveric model. Using the robotic arm, pedicle devices were placed within 0.94 ± 0.59 mm of the planned path. The use of a semi-automated surgical robot was feasible, providing higher technical accuracy than non-robotic solutions. Paper V investigated the use of a tissue sensing technology, diffuse reflectance spectroscopy (DRS), for detecting the cortical bone boundary in vertebrae during pedicle screw insertion. The technology could accurately differentiate between cancellous and cortical bone and warn the surgeon before a cortical breach. Using machine learning models, the technology demonstrated a sensitivity of 98% [range: 94-100%] and a specificity of 98% [range: 91-100%]. In conclusion, several technological aids can be used to improve accuracy during spinal fixation procedures. In this thesis, the advantages of adding AR, VR, AI, and tissue sensing technology to conventional navigation solutions were studied.
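    The Gertzbein (Gertzbein-Robbins) scale mentioned in Paper II grades each screw by its cortical breach distance in 2 mm steps, with grades A and B usually counted as clinically acceptable. A minimal sketch of how a placement-accuracy figure could be derived from per-screw breach distances (the helper names are ours, not the thesis code):

```python
def gertzbein_grade(breach_mm):
    """Gertzbein-Robbins grade from cortical breach distance (mm)."""
    if breach_mm <= 0:
        return "A"            # screw fully contained in the pedicle
    for limit, grade in ((2, "B"), (4, "C"), (6, "D")):
        if breach_mm < limit:
            return grade
    return "E"                # breach of 6 mm or more

def placement_accuracy(breaches_mm):
    """Fraction of screws graded A or B (clinically acceptable)."""
    ok = sum(gertzbein_grade(b) in ("A", "B") for b in breaches_mm)
    return ok / len(breaches_mm)
```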

    Intraoperative Planning and Execution of Arbitrary Orthopedic Interventions Using Handheld Robotics and Augmented Reality

    The focus of this work is a generic, intraoperative, and image-free planning and execution application for arbitrary orthopedic interventions using a novel handheld robotic device and optical see-through augmented reality (AR) glasses. This medical CAD application enables the surgeon to intraoperatively plan the intervention directly on the patient's bone. The glasses and all other instruments are accurately calibrated using new techniques. Several interventions demonstrate the effectiveness of this approach.