
    Advanced cranial navigation

    Neurosurgery is performed with extremely low margins of error. Surgical inaccuracy may have disastrous consequences. The overall aim of this thesis was to improve accuracy in cranial neurosurgical procedures by the application of new technical aids. Two technical methods were evaluated: augmented reality (AR) for surgical navigation (Papers I-II) and the optical technique of diffuse reflectance spectroscopy (DRS) for real-time tissue identification (Papers III-V). Minimally invasive skull-base endoscopy has several potential benefits compared to traditional craniotomy, but approaching the skull base through this route implies that at-risk organs and surgical targets are covered by bone and out of the surgeon's direct line of sight. In Paper I, a new application for AR-navigated endoscopic skull-base surgery, based on an augmented-reality surgical navigation (ARSN) system, was developed. The accuracy of the system, defined by mean target registration error (TRE), was evaluated and found to be 0.55±0.24 mm, the lowest error reported in the literature. As a first step toward the development of a cranial application for AR navigation, in Paper II this ARSN system was used to enable insertions of biopsy needles and external ventricular drains (EVDs). The technical accuracy (i.e., deviation from the target or intended path) and efficacy (i.e., insertion time) were assessed on a realistic, 3D-printed anthropomorphic skull and brain phantom. Thirty cranial biopsies and 10 EVD insertions were performed. Biopsy accuracy was 0.8±0.43 mm with a median insertion time of 149 (87-233) seconds; EVD accuracy was 2.9±0.8 mm at the tip, with a median angular deviation of 0.7±0.5° and a median insertion time of 188 (135-400) seconds. Glial tumors grow diffusely in the brain, and patient survival is correlated with the extent of tumor removal. Tumor borders are often invisible. Resection beyond the borders defined by conventional methods may further improve a patient's prognosis. In Paper III, DRS was evaluated for discrimination between glioma and normal brain tissue ex vivo. DRS spectra and histology were acquired from 22 tumor samples and 9 brain tissue samples retrieved from 30 patients. Sensitivity and specificity for the detection of low-grade gliomas were 82.0% and 82.7%, respectively, with an AUC of 0.91. Acute ischemic stroke caused by large vessel occlusion is treated with endovascular thrombectomy, but treatment failure can occur when clot composition and thrombectomy technique are mismatched. Intra-procedural knowledge of clot composition could guide the choice of treatment modality. In Paper IV, DRS was evaluated in vivo for intravascular clot characterization. Three types of clot analogs, red blood cell (RBC)-rich, fibrin-rich, and mixed clots, were injected into the external carotids of a domestic pig. An intravascular DRS probe was used for in-situ measurements of clots, blood, and vessel walls, and the spectral data were analyzed. DRS could differentiate clot types, vessel walls, and blood in vivo (p<0.001). The sensitivity and specificity for detection were 73.8% and 98.8% for RBC-rich clots, 100% and 100% for mixed clots, and 80.6% and 97.8% for fibrin-rich clots, respectively. Paper V evaluated DRS for characterization of human clot composition ex vivo: 45 clot units were retrieved from 29 stroke patients and examined with DRS and histopathological evaluation. DRS parameters correlated with the clot RBC fraction (R=0.81, p<0.001) and could be used to classify clot type, with sensitivity and specificity for the detection of RBC-rich clots of 0.722 and 0.846, respectively. Applied in an intravascular probe, DRS may provide intra-procedural information on clot composition to improve the efficiency of endovascular thrombectomy.
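    The headline accuracy figures above follow a standard definition: mean TRE is the average Euclidean distance between each planned target and the position actually reached. As a minimal sketch (not the thesis code), with simulated stand-in coordinates rather than study data:

```python
# Minimal sketch of mean TRE computation; coordinates are simulated, not study data.
import numpy as np

rng = np.random.default_rng(0)
planned = rng.uniform(0.0, 100.0, size=(30, 3))         # hypothetical planned targets (mm)
reached = planned + rng.normal(0.0, 0.4, size=(30, 3))  # simulated navigated positions (mm)

errors = np.linalg.norm(reached - planned, axis=1)      # per-target Euclidean error
print(f"TRE: {errors.mean():.2f} ± {errors.std(ddof=1):.2f} mm")
```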

    Augmented and virtual reality in spine surgery, current applications and future potentials

    BACKGROUND CONTEXT: The field of artificial intelligence (AI) is rapidly advancing, especially with recent improvements in deep learning (DL) techniques. Augmented reality (AR) and virtual reality (VR) are finding their place in healthcare, and spine surgery is no exception. The unique capabilities and advantages of AR and VR devices include their low cost, flexible integration with other technologies, user-friendly features, and their application in navigation systems, which make them beneficial across different aspects of spine surgery. Despite the use of AR for pedicle screw placement, targeted cervical foraminotomy, bone biopsy, osteotomy planning, and percutaneous intervention, the current applications of AR and VR in spine surgery remain limited. PURPOSE: The primary goal of this study was to provide spine surgeons and clinical researchers with general information about the current applications, future potentials, and accessibility of AR and VR systems in spine surgery. STUDY DESIGN/SETTING: We reviewed the titles of more than 250 journal papers from Google Scholar and PubMed with the search words augmented reality, virtual reality, spine surgery, and orthopaedic, out of which 89 related papers were selected for abstract review. Finally, the full texts of 67 papers were analyzed and reviewed. METHODS: The papers were divided into four groups: technological papers, applications in surgery, applications in spine education and training, and general applications in orthopaedics. A team of two reviewers performed the paper reviews and a thorough web search to ensure that the most up-to-date state of the art in each of the four groups was captured in the review. RESULTS: In this review we discuss the current state of the art in AR and VR hardware, their preoperative applications, and their surgical applications in spine surgery. Finally, we discuss the future potentials of AR and VR and their integration with AI, robotic surgery, gaming, and wearables. CONCLUSIONS: AR and VR are promising technologies that will soon become part of the standard of care in spine surgery.

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the center of focus in most devices remains on improving end-effector dexterity and precision, as well as improving access to minimally invasive surgeries. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, pose estimation, and depth perception in two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings in current optimization algorithms for surgical robots (such as YOLO and LSTM) while providing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.
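    The review names tool-to-organ collision detection as a recurring weak point. As a hedged illustration of the basic idea only (not any specific system's algorithm), the sketch below flags tool poses whose clearance to an organ surface, represented here by a hypothetical point cloud, falls below a safety margin; production systems use meshes and continuous collision checking.

```python
# Toy tool-to-organ collision check; all geometry and margins are illustrative.
import numpy as np

organ_surface = np.random.default_rng(1).uniform(0.0, 50.0, size=(5000, 3))  # sampled organ points (mm)
tool_tip_path = np.array([[60.0, 25.0, 25.0],
                          [40.0, 25.0, 25.0],
                          [25.0, 25.0, 25.0]])                               # tracked tip poses (mm)
SAFETY_MARGIN_MM = 2.0

for tip in tool_tip_path:
    clearance = np.linalg.norm(organ_surface - tip, axis=1).min()  # distance to nearest surface point
    if clearance < SAFETY_MARGIN_MM:
        print(f"collision risk at {tip}: clearance {clearance:.2f} mm")
```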

    Virtual and Augmented Reality in Medical Education

    Virtual reality (VR) and augmented reality (AR) are two contemporary simulation models that are currently upgrading medical education. VR provides a 3D and dynamic view of structures and the ability of the user to interact with them. Recent technological advances in haptics, display systems, and motion detection allow the user to have a realistic and interactive experience, making VR ideal for training in hands-on procedures. Consequently, surgical and other interventional procedures are the main fields of application of VR. AR provides the ability to project virtual information and structures over physical objects, thus enhancing or altering the real environment. The integration of AR applications into the understanding of anatomical structures and physiological mechanisms seems to be beneficial. Studies have tried to demonstrate the validity and educational effect of many VR and AR applications, in many different areas, employed via various hardware platforms. Some of them even propose a curriculum that integrates these methods. This chapter provides a brief history of VR and AR in medicine, as well as the principles and standards of their function. Finally, the studies that show the effect of the implementation of these methods in different fields of medical training are summarized and presented.

    Enabling technologies for MRI guided interventional procedures

    This dissertation addresses topics related to developing interventional assistant devices for Magnetic Resonance Imaging (MRI). MRI can provide high-quality 3D visualization of target anatomy and surrounding tissue, but these benefits cannot be readily harnessed for interventional procedures due to difficulties associated with the use of high-field (1.5T or greater) MRI. Discussed are potential solutions to the inability to use conventional mechatronics and to the confined physical space in the scanner bore. This work describes the development of two apparently dissimilar systems that represent different approaches to the same surgical problem: coupling information and action to perform percutaneous (through the skin) needle placement with MR imaging. The first system takes MR images and projects them, along with a surgical plan, directly onto the interventional site, thus providing in-situ imaging. With anatomical images and a corresponding plan visible in the appropriate pose, the clinician can use this information to perform the surgical action. My primary research effort has focused on a robotic assistant system that overcomes the difficulties inherent to MR-guided procedures and promises safe and reliable intra-prostatic needle placement inside closed high-field MRI scanners. The robot is a servo-pneumatically operated automatic needle guide that effectively guides needles under real-time MR imaging. This thesis describes the development of the robotic system, including requirements, workspace analysis, mechanism design and optimization, and evaluation of MR compatibility. Further, a generally applicable MR-compatible robot controller is developed, the pneumatic control system is implemented and evaluated, and the system is deployed in pre-clinical trials. The dissertation concludes with future work and lessons learned from this endeavor.
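    The abstract names a servo-pneumatic needle guide but does not give its control law; below is a minimal, hypothetical sketch of one common approach, a discrete PID position loop, acting on a toy first-order plant that stands in for the pneumatic dynamics. Gains, plant model, and setpoint are illustrative assumptions, not the dissertation's design.

```python
# Hypothetical discrete PID position loop for a pneumatic needle guide.
# Plant, gains, and setpoint are illustrative, not the dissertation's design.
KP, KI, KD, DT = 4.0, 0.5, 0.1, 0.01   # PID gains and control period (s)

target = 10.0                          # desired guide position (mm)
position = 0.0
integral = 0.0
prev_error = target - position         # avoids a derivative kick on the first step

for _ in range(1000):
    error = target - position
    integral += error * DT
    derivative = (error - prev_error) / DT
    command = KP * error + KI * integral + KD * derivative   # valve command
    prev_error = error
    position += command * DT           # toy plant: velocity tracks the command

print(f"final position: {position:.2f} mm")  # settles near the 10 mm setpoint
```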

    Image guided robotic assistance for the diagnosis and treatment of tumor

    The aim of this thesis is to demonstrate the feasibility and potential of introducing robotics and image guidance into the overall oncologic workflow, from the diagnosis to the treatment phase. The popularity of robotics in the operating room has grown in recent years. Currently the most popular system is the da Vinci telemanipulator (Intuitive Surgical), which is based on master-slave control for minimally invasive surgery and is used in several surgical fields such as urology, general surgery, gynecology, and cardiothoracic surgery. An accurate study of this system from a technological point of view has been conducted, addressing its drawbacks and advantages. The da Vinci system creates an immersive operating environment for the surgeon by providing both high-quality stereo visualization and a human-machine interface that directly connects the surgeon's hands to the motion of the surgical tool tips inside the patient's body. It has undoubted advantages for the surgeon's work and the patient's health, at least for some interventions, while its very high cost leaves many doubts about its cost-benefit ratio. In the robotic surgery field many researchers are working on the optimization and miniaturization of robot mechanics, while others are trying to obtain smart functionalities and realize robotic systems that, "knowing" the patient's anatomy from radiological images, can assist the surgeon in an active way. Regarding the second point, image-guided systems can be useful to plan and control medical robot motion and to provide the surgeon with pre-operative and intra-operative images with augmented-reality visualization, enhancing his/her perceptual capacities and, as a consequence, improving the quality of treatments. To support this thesis, several prototypes have been designed, implemented, and tested. The development of image-guided medical devices, comprising augmented reality, virtual navigation, and robotic surgical features, requires addressing several problems. The first is the choice of the robotic platform and of the image source to employ. An industrial anthropomorphic arm has been used as the testing platform; the idea of integrating industrial robot components into the clinical workflow is supported by the da Vinci technical analysis. The algorithms and methods developed, regarding in particular robot calibration, are based on literature theories and on easy integration into the clinical scenario, and can be adapted to any anthropomorphic arm. In this way the work can be extended to lightweight robots, for industrial or clinical use, able to work in close contact with humans, which will become widespread in the near future. Regarding the medical image source, it was decided to work with ultrasound imaging. Two-dimensional ultrasound imaging is widely used in clinical practice because it is not dangerous for the patient, inexpensive, compact, and highly flexible, allowing users to study many anatomic structures. It is routinely used for diagnosis and as guidance in percutaneous treatments. However, 2D ultrasound imaging has some disadvantages that demand great skill of the user: the clinician must mentally integrate many images to reconstruct a complete idea of the anatomy in 3D, and the freehand control of the probe makes it difficult to identify anatomic positions and orientations and to reposition the probe to reach a particular location.
    To overcome these problems, an image-guided system was developed that fuses real-time 2D US images with 3D CT or MRI images previously acquired from the patient, to enhance clinician orientation and probe guidance. The implemented algorithms for robot calibration and US image guidance have been used to realize two applications responding to specific clinical needs: the first speeds up the execution of routine and frequently recurring procedures like percutaneous biopsy or ablation; the second improves a new, completely non-invasive type of treatment for solid tumors, HIFU (High-Intensity Focused Ultrasound). An ultrasound-guided robotic system was developed to assist the clinician in executing complicated biopsies or percutaneous ablations, in particular in deep abdominal organs. The integrated system provides the clinician with two types of assistance: a mixed-reality visualization allows accurate and easy planning of the needle trajectory and verification of target reaching, while the robot arm, equipped with a six-degree-of-freedom force sensor, allows precise positioning of the needle holder and lets the clinician adjust the planned trajectory by means of cooperative control to compensate for needle deflection and target motion. The second application consists of an augmented-reality navigation system for HIFU treatment. HIFU is a completely non-invasive method for the treatment of solid tumors, hemostasis, and other vascular conditions in human tissues. The technology for HIFU treatments is still evolving, and the systems available on the market have some limitations and drawbacks. A disadvantage emerging from our experience with the machinery available in our hospital (the JC200 therapeutic system, Haifu (HIFU) Tech Co., Ltd, Chongqing), which is similar to other analogous machines, is the long time required to perform the procedure due to the difficulty of finding the target using the remote motion of an ultrasound probe under the patient. This problem was addressed by developing an augmented-reality navigation system that enhances US guidance during HIFU treatments and allows easy target localization. The system was implemented using an additional freehand ultrasound probe coupled with a localizer and fused CT imaging, offering a simple and economical solution for HIFU target localization. This thesis demonstrates the utility and usability of robots for the diagnosis and treatment of tumors; in particular, the combination of automatic positioning and cooperative control allows the surgeon and the robot to work in synergy. Further, the work demonstrates the feasibility and potential of a mixed-reality navigation system to facilitate target localization and consequently reduce session times, increase the number of possible diagnoses/treatments, and decrease the risk of errors. The proposed solutions for integrating robotics and image guidance into the overall oncologic workflow take into account currently available technologies, traditional clinical procedures, and cost minimization.
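    Cooperative ("hands-on") control of the kind described above is commonly realized as an admittance law that maps the force the clinician applies at the needle holder, measured by the 6-DOF force/torque sensor, into a small compliant motion of the end-effector. A minimal sketch under that assumption, with hypothetical gains and a stand-in sensor function:

```python
# Hypothetical admittance-control sketch; gains and the sensor stub are illustrative.
import numpy as np

ADMITTANCE_GAIN = 0.002   # m/s per N, chosen for slow, controllable motion
DEADBAND_N = 1.0          # ignore sensor noise below this force magnitude
DT = 0.01                 # control period (s)

def read_force() -> np.ndarray:
    """Stand-in for the real force/torque sensor driver (returns force in N)."""
    return np.array([2.0, 0.0, -1.5])

position = np.zeros(3)    # end-effector position (m)
for _ in range(100):
    force = read_force()
    if np.linalg.norm(force) > DEADBAND_N:
        position += ADMITTANCE_GAIN * force * DT   # comply with the operator's push
print(position)
```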

    US & MR Image-Fusion Based on Skin Co-Registration

    The study and development of innovative solutions for the advanced visualisation, representation, and analysis of medical images offer different research directions. Current practice in medical imaging consists of combining real-time US with imaging modalities that allow internal anatomy acquisition, such as CT, MRI, PET, or similar. Applications of image-fusion approaches can be found in tracking surgical tools and/or needles in real time during interventions. This work therefore proposes a fusion imaging system for the registration of CT and MRI images with real-time US acquisition, leveraging a 3D camera sensor. The main focus of the work is the portability of the system and its applicability to different anatomical districts.
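    The abstract does not detail its registration algorithm; a standard building block for co-registering corresponding skin points captured by a 3D camera with points on a CT/MRI skin surface is rigid point-set alignment via the Kabsch/SVD method, sketched below with illustrative data (not the paper's pipeline).

```python
# Minimal Kabsch/SVD rigid registration sketch; point data are illustrative.
import numpy as np

def rigid_register(src: np.ndarray, dst: np.ndarray):
    """Return rotation R and translation t minimizing ||R @ src_i + t - dst_i||."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)             # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, c_dst - R @ c_src

skin_ct = np.random.default_rng(2).uniform(0.0, 100.0, size=(8, 3))  # CT skin points (mm)
theta = np.radians(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
skin_cam = skin_ct @ R_true.T + np.array([5.0, -3.0, 12.0])          # same points, camera frame

R, t = rigid_register(skin_ct, skin_cam)
print(np.allclose(skin_ct @ R.T + t, skin_cam))                      # True
```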

    Augmented Reality Ultrasound Guidance in Anesthesiology

    Real-time ultrasound has become a mainstay in many image-guided interventions and is increasingly popular in several percutaneous procedures in anesthesiology. One of the main constraints of ultrasound-guided needle interventions is identifying and distinguishing the needle tip from the needle shaft in the image. Augmented reality (AR) environments have been employed to address challenges surrounding surgical tool visualization, navigation, and positioning in many image-guided interventions. The motivation behind this work was to explore the feasibility and utility of such visualization techniques in anesthesiology to address some of the specific limitations of ultrasound-guided needle interventions. This thesis brings together the goals, guidelines, and best development practices of functional AR ultrasound image guidance (AR-UIG) systems, examines the general structure of such systems suitable for applications in anesthesiology, and provides a series of recommendations for their development. The main components of such systems, including ultrasound calibration and system interface design, as well as applications of AR-UIG systems for quantitative skill assessment, were also examined in this thesis. The effects of ultrasound image reconstruction techniques, as well as of phantom material and geometry, on ultrasound calibration were investigated. Ultrasound calibration error was reduced by 10% with synthetic transmit aperture imaging compared with B-mode ultrasound. Phantom properties were shown to have a significant effect on calibration error, which varies with the ultrasound beamforming technique. This finding has the potential to alter how calibration phantoms are designed, taking the ultrasound imaging technique into account. The performance of an AR-UIG guidance system tailored to central line insertions was evaluated in novice and expert user studies. While the system outperformed ultrasound-only guidance with novice users, it did not significantly affect the performance of experienced operators. Although the extensive ultrasound experience of the users may have affected the results, certain aspects of the AR-UIG system contributed to the lackluster outcomes, which were analyzed via a thorough critique of the design decisions. The application of an AR-UIG system in quantitative skill assessment was investigated, and the first quantitative analysis of needle-tip localization error in ultrasound in a simulated central line procedure, performed by experienced operators, is presented. Most participants did not closely follow the needle tip in ultrasound, resulting in 42% unsuccessful needle placements and a 33% complication rate. Compared with successful trials, unsuccessful procedures featured a significantly greater (p=0.04) needle-tip to image-plane distance. Professional experience with ultrasound does not necessarily lead to expert-level performance. Along with deliberate practice, quantitative skill assessment may reinforce clinical best practices in ultrasound-guided needle insertions. Based on the development guidelines, an AR-UIG system was developed to address the challenges in ultrasound-guided epidural injections. For improved needle positioning, this system integrated an A-mode ultrasound signal obtained from a transducer housed at the tip of the needle. Improved needle navigation was achieved via enhanced visualization of the needle in an AR environment in which B-mode and A-mode ultrasound data were incorporated. The technical feasibility of the AR-UIG system was evaluated in a preliminary user study. The results suggested that the AR-UIG system has the potential to outperform ultrasound-only guidance.
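    The "needle-tip to image-plane distance" metric reported above is, in the usual formulation, the perpendicular distance from the tracked tip to the plane of the ultrasound image. A minimal sketch of that computation, assuming the plane's origin and unit normal are known in tracker coordinates (the numbers are illustrative, not study data):

```python
# Tip-to-image-plane distance sketch; coordinates are illustrative, not study data.
import numpy as np

plane_origin = np.array([0.0, 0.0, 0.0])    # a point on the US image plane (mm)
plane_normal = np.array([0.0, 0.0, 1.0])    # unit normal of the image plane
needle_tip   = np.array([12.0, 4.0, 3.5])   # tracked needle-tip position (mm)

distance = abs(np.dot(needle_tip - plane_origin, plane_normal))
print(f"tip-to-plane distance: {distance:.1f} mm")   # 3.5 mm
```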
