3,630 research outputs found

    An Open-Source 7-Axis, Robotic Platform to Enable Dexterous Procedures within CT Scanners

    This paper describes the design, manufacture, and performance of a highly dexterous, low-profile, 7 degree-of-freedom (DOF) robotic arm for CT-guided percutaneous needle biopsy. Direct CT guidance allows physicians to localize tumours quickly; however, needle insertion is still performed by hand. The system is mounted to a fully active gantry superior to the patient's head and teleoperated by a radiologist. Unlike similar robots, this robot's fully serial-link design uses a unique combination of belt and cable drives for high transparency and minimal backlash, allowing an expansive working area and numerous approach angles to targets while maintaining a small in-bore cross-section of less than 16 cm^2. Simulations verified the system's expansive collision-free workspace and its ability to hit targets across the entire chest, as required for lung cancer biopsy. Targeting error averages less than 1 mm on a teleoperated accuracy task, illustrating that the system is sufficiently accurate to perform biopsy procedures. The system is designed for lung biopsies because of the large working volume required to reach peripheral lung lesions; however, with its large working volume and small in-bore cross-sectional area, it is effectively a general-purpose CT-compatible manipulation device for percutaneous procedures. Finally, given the considerable development time invested in designing a precise and flexible system, and to reduce the burden on other researchers developing algorithms for image-guided surgery, the system is open-access and, to the best of our knowledge, is the first open-hardware image-guided biopsy robot of its kind.
    Comment: 8 pages, 9 figures, final submission to IROS 201
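    The headline accuracy figure is simply the mean Euclidean distance between commanded and measured tip positions. A minimal sketch of that metric (the coordinates below are illustrative, not the paper's data):

```python
import numpy as np

def targeting_error(commanded, measured):
    """Per-target Euclidean distance (mm) between commanded and
    measured needle-tip positions, plus the mean across targets."""
    commanded = np.asarray(commanded, dtype=float)
    measured = np.asarray(measured, dtype=float)
    errors = np.linalg.norm(commanded - measured, axis=1)
    return errors, errors.mean()

# Illustrative 3-D targets in scanner coordinates (mm)
cmd = [[10.0, 0.0, 50.0], [25.0, 5.0, 60.0]]
meas = [[10.3, 0.4, 50.0], [24.8, 5.0, 60.1]]
errors, mean_err = targeting_error(cmd, meas)
```

    Reporting the per-target distribution alongside the mean makes sub-millimetre claims easier to audit.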

    Augmented Reality Ultrasound Guidance in Anesthesiology

    Real-time ultrasound has become a mainstay in many image-guided interventions and is increasingly popular in several percutaneous procedures in anesthesiology. One of the main constraints of ultrasound-guided needle interventions is identifying and distinguishing the needle tip from the needle shaft in the image. Augmented reality (AR) environments have been employed to address challenges surrounding surgical tool visualization, navigation, and positioning in many image-guided interventions. The motivation behind this work was to explore the feasibility and utility of such visualization techniques in anesthesiology to address some of the specific limitations of ultrasound-guided needle interventions. This thesis brings together the goals, guidelines, and best development practices of functional AR ultrasound image guidance (AR-UIG) systems, examines the general structure of such systems suitable for applications in anesthesiology, and provides a series of recommendations for their development. The main components of such systems, including ultrasound calibration and system interface design, as well as applications of AR-UIG systems for quantitative skill assessment, were also examined in this thesis. The effects of ultrasound image reconstruction techniques, as well as phantom material and geometry, on ultrasound calibration were investigated. Ultrasound calibration error was reduced by 10% with synthetic transmit aperture imaging compared with B-mode ultrasound. Phantom properties were shown to have a significant effect on calibration error, one that varies with the ultrasound beamforming technique. This finding has the potential to alter how calibration phantoms are designed, making them cognizant of the ultrasound imaging technique. The performance of an AR-UIG guidance system tailored to central line insertions was evaluated in novice and expert user studies.
    While the system outperformed ultrasound-only guidance with novice users, it did not significantly affect the performance of experienced operators. Although the users' extensive experience with ultrasound may have affected the results, certain aspects of the AR-UIG system contributed to the lackluster outcomes, which were analyzed via a thorough critique of the design decisions. The application of an AR-UIG system to quantitative skill assessment was investigated, and the first quantitative analysis of needle-tip localization error in ultrasound in a simulated central line procedure, performed by experienced operators, is presented. Most participants did not closely follow the needle tip in ultrasound, resulting in 42% unsuccessful needle placements and a 33% complication rate. Compared with successful trials, unsuccessful procedures featured a significantly greater (p=0.04) needle-tip to image-plane distance. Professional experience with ultrasound does not necessarily lead to expert-level performance. Along with deliberate practice, quantitative skill assessment may reinforce clinical best practices in ultrasound-guided needle insertions. Based on the development guidelines, an AR-UIG system was developed to address the challenges of ultrasound-guided epidural injections. For improved needle positioning, this system integrated an A-mode ultrasound signal obtained from a transducer housed at the tip of the needle. Improved needle navigation was achieved via enhanced visualization of the needle in an AR environment in which B-mode and A-mode ultrasound data were incorporated. The technical feasibility of the AR-UIG system was evaluated in a preliminary user study. The results suggested that the AR-UIG system has the potential to outperform ultrasound-only guidance.
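    The needle-tip to image-plane distance reported above is an unsigned point-to-plane distance. A hedged sketch, assuming the plane is given by a point and a normal in tracker coordinates (all values illustrative):

```python
import numpy as np

def tip_to_plane_distance(tip, plane_point, plane_normal):
    """Unsigned distance from the needle tip to the ultrasound
    image plane, given a point on the plane and its normal."""
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)  # normalize so the dot product is a distance
    v = np.asarray(tip, dtype=float) - np.asarray(plane_point, dtype=float)
    return float(abs(np.dot(v, n)))

# Plane through the origin with normal along +z; tip 4 mm out of plane
d = tip_to_plane_distance([3.0, 2.0, 4.0], [0.0, 0.0, 0.0], [0.0, 0.0, 2.0])
```

    Comparing this distance between successful and unsuccessful trials is exactly the kind of analysis the study reports.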

    Personalized medicine in surgical treatment combining tracking systems, augmented reality and 3D printing

    International Mention in the doctoral degree.
    In the last twenty years, a new way of practicing medicine has focused on the problems and needs of each patient as an individual, enabled by significant advances in healthcare technology: the so-called personalized medicine. In surgical treatments, personalization has been possible thanks to key technologies adapted to the specific anatomy of each patient and the needs of the physicians. Tracking systems, augmented reality (AR), three-dimensional (3D) printing, and artificial intelligence (AI) have previously supported this individualized medicine in many ways. However, their independent contributions show several limitations in terms of patient-to-image registration, lack of flexibility to adapt to the requirements of each case, long preoperative planning times, and navigation complexity. The main objective of this thesis is to increase patient personalization in surgical treatments by combining these technologies to bring surgical navigation to new complex cases: developing new patient registration methods, designing patient-specific tools, facilitating access to augmented reality for the medical community, and automating surgical workflows. In the first part of this dissertation, we present a novel framework for acral tumor resection combining intraoperative open-source navigation software, based on an optical tracking system, with desktop 3D printing. We used additive manufacturing to create a patient-specific mold that maintained the distal extremity in the same position during image-guided surgery as in the preoperative images. The feasibility of the proposed workflow was evaluated in two clinical cases (soft-tissue sarcomas in the hand and foot). We achieved an overall system accuracy of 1.88 mm, evaluated on the patient-specific 3D-printed phantoms. Surgical navigation was feasible during both surgeries, allowing surgeons to verify the tumor resection margin.
    Then, we propose an augmented reality navigation system that uses 3D-printed surgical guides with a tracking pattern, enabling automatic patient-to-image registration in orthopedic oncology. This specific tool fits on the patient only in a pre-designed location, in this case on bone tissue. The solution was developed as a software application running on Microsoft HoloLens. The workflow was validated on a 3D-printed phantom replicating the anatomy of a patient presenting an extraosseous Ewing's sarcoma, and then tested during the actual surgical intervention. The results showed that the surgical guide with the reference marker can be placed precisely, with an accuracy of 2 mm and a visualization error lower than 3 mm. The application allowed physicians to visualize the skin, bone, tumor, and medical images overlaid on the phantom and patient. To enable the use of AR and 3D printing by inexperienced users without broad technical knowledge, we designed a step-by-step methodology. The proposed protocol describes how to develop an AR smartphone application that superimposes any patient-based 3D model onto a real-world environment using a 3D-printed marker tracked by the smartphone camera. Our solution brings AR closer to the final clinical user by combining free and open-source software with an open-access protocol. The proposed guide is already helping to accelerate the adoption of these technologies by medical professionals and researchers. In the next section of the thesis, we show the benefits of combining these technologies during different stages of the surgical workflow in orthopedic oncology. We designed a novel AR-based smartphone application that can display the patient's anatomy and the tumor's location. A 3D-printed reference marker, designed to fit in a unique position on the affected bone tissue, enables automatic registration.
    The system has been evaluated in terms of visualization accuracy and usability during the whole surgical workflow on six realistic phantoms, achieving a visualization error below 3 mm. The AR system was tested in two clinical cases during surgical planning, patient communication, and surgical intervention. These results, and the positive feedback obtained from surgeons and patients, suggest that the combination of AR and 3D printing can improve efficacy, accuracy, and patients' experience. In the final section, two surgical navigation systems were developed and evaluated to guide electrode placement in sacral neurostimulation (SNS) procedures based on optical tracking and augmented reality. Our results show that both systems could minimize patient discomfort and improve surgical outcomes by reducing needle insertion time and the number of punctures. Additionally, we proposed a feasible clinical workflow for guiding SNS interventions with both navigation methodologies, including automatic creation of sacral virtual 3D models for trajectory definition using artificial intelligence, and intraoperative patient-to-image registration. To conclude, in this thesis we have demonstrated that combining technologies such as tracking systems, augmented reality, 3D printing, and artificial intelligence overcomes many current limitations in surgical treatments. Our results encourage the medical community to combine these technologies to improve surgical workflows and outcomes in more clinical scenarios.
    Programa de Doctorado en Ciencia y Tecnología Biomédica por la Universidad Carlos III de Madrid. Presidenta: María Jesús Ledesma Carbayo. Secretaria: María Arrate Muñoz Barrutia. Vocal: Csaba Pinte
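    Patient-to-image registration in these systems reduces, at its core, to estimating a rigid transform from corresponding fiducial points. A minimal sketch of the standard least-squares (Kabsch/Horn) solution, not the thesis's exact pipeline, verified here on synthetic points:

```python
import numpy as np

def rigid_register(fixed, moving):
    """Least-squares rigid transform (R, t) mapping `moving` points
    onto corresponding `fixed` points (Kabsch/Horn method)."""
    fixed = np.asarray(fixed, float)
    moving = np.asarray(moving, float)
    cf, cm = fixed.mean(axis=0), moving.mean(axis=0)
    H = (moving - cm).T @ (fixed - cf)        # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # proper rotation (det = +1)
    t = cf - R @ cm
    return R, t

# Synthetic check: rotate known fiducials 90 degrees about z, then shift
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
fixed = pts @ Rz.T + np.array([5.0, 0.0, 0.0])
R, t = rigid_register(fixed, pts)
fre = float(np.linalg.norm(pts @ R.T + t - fixed, axis=1).mean())
```

    The residual `fre` is the fiducial registration error; system accuracies such as the 1.88 mm reported above are typically assessed with a closely related target registration error.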

    Advanced tracking and image registration techniques for intraoperative radiation therapy

    International Mention in the doctoral degree.
    Intraoperative electron radiation therapy (IOERT) is a technique used to deliver radiation to the surgically opened tumor bed without irradiating healthy tissue. Treatment planning systems and mobile linear accelerators enable clinicians to optimize the procedure, minimize stress in the operating room (OR), and avoid transferring the patient to a dedicated radiation room. However, placement of the radiation collimator over the tumor bed requires a validation methodology to ensure correct delivery of the dose prescribed in the treatment planning system. In this dissertation, we address three well-known limitations of IOERT: applicator positioning over the tumor bed, docking of the mobile linear accelerator gantry with the applicator, and validation of the prescribed dose delivery. This thesis demonstrates that these limitations can be overcome by positioning the applicator appropriately with respect to the patient's anatomy. The main objective of the study was to assess technological and procedural alternatives for improving IOERT performance and resolving problems of uncertainty. Image-to-world registration, multicamera optical trackers, multimodal imaging techniques, and mobile linear accelerator docking are addressed in the context of IOERT. IOERT is carried out by a multidisciplinary team in a highly complex environment with special tracking needs owing to the characteristics of its working volume (large and prone to occlusions), in addition to its accuracy requirements. The first part of this dissertation presents the validation of a commercial multicamera optical tracker in terms of accuracy, sensitivity to miscalibration, camera occlusions, and detection of tools in a feasible surgical setup. It also proposes an automatic miscalibration detection protocol that satisfies the IOERT requirements of automaticity and speed.
    We show that the multicamera tracker is suitable for IOERT navigation and demonstrate the feasibility of the miscalibration detection protocol in clinical setups. Image-to-world registration is one of the main issues in image-guided applications where the field of interest and/or the number of possible anatomical localizations is large, as in IOERT. In the second part of this dissertation, a registration algorithm for image-guided surgery based on line-shaped fiducials (line-based registration) is proposed and validated. Line-based registration decreases acquisition time during surgery and achieves better registration accuracy than other published algorithms. In the third part of this dissertation, we integrate a commercial low-cost ultrasound transducer and a cone-beam CT C-arm with an optical tracker for image-guided interventions, enabling surgical navigation, and explore image-based registration techniques for both modalities. In the fourth part of the dissertation, a navigation system based on optical tracking for docking the mobile linear accelerator to the radiation applicator is assessed. This system improves safety and reduces procedure time. The system tracks the prescribed collimator location to compute the movements the linear accelerator should perform to reach the docking position and warns the user about potentially unachievable arrangements before the actual procedure. A software application was implemented to use this system in the OR, where it was also evaluated to assess the improvement in docking speed. Finally, in the last part of the dissertation, we present and assess the installation setup for a navigation system in a dedicated IOERT OR, determine the steps necessary for the IOERT process, identify workflow limitations, and evaluate the feasibility of integrating the system in a real OR.
    The navigation system safeguards the sterile conditions of the OR, clears the space available for surgeons, and is suitable for any similar dedicated IOERT OR.
    Programa Oficial de Doctorado en Multimedia y Comunicaciones. Presidente: Raúl San José Estépar. Secretario: María Arrate Muñoz Barrutia. Vocal: Carlos Ferrer Albiac
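    The line-based registration proposed in the second part matches measured points to line-shaped fiducials; the elementary residual behind any such method is the point-to-line distance. A hedged sketch with illustrative values:

```python
import numpy as np

def point_to_line_distance(p, line_point, line_dir):
    """Distance from point p to an infinite line defined by a point
    on the line and a (not necessarily unit) direction vector."""
    p = np.asarray(p, float)
    a = np.asarray(line_point, float)
    d = np.asarray(line_dir, float)
    d = d / np.linalg.norm(d)
    v = p - a
    # Remove the component of v along the line; the remainder is orthogonal
    return float(np.linalg.norm(v - np.dot(v, d) * d))

# A point 3 units off a line running along the x-axis
r = point_to_line_distance([7.0, 3.0, 0.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0])
```

    Summing squared residuals of this form over all digitized points and minimizing over the rigid transform is the basic shape of a line-fiducial registration cost.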

    Robotic System Development for Precision MRI-Guided Needle-Based Interventions

    This dissertation describes the development of a methodology for implementing robotic systems for interventional procedures under intraoperative Magnetic Resonance Imaging (MRI) guidance. MRI is an ideal imaging modality for surgical guidance of diagnostic and therapeutic procedures, thanks to its ability to perform high-resolution, real-time imaging with high soft-tissue contrast and without ionizing radiation. However, the strong magnetic field, sensitivity to radio-frequency signals, and tightly confined scanner bore pose great challenges to developing robotic systems within the MRI environment. Potential solutions are discussed for engineering topics related to the development of MRI-compatible electro-mechanical systems and the modeling of steerable needle interventions. A robotic framework is developed based on a modular design approach, supporting varied MRI-guided interventional procedures, with stereotactic neurosurgery and prostate cancer therapy as two driving exemplary applications. A piezoelectrically actuated electro-mechanical system is designed to provide precise needle placement in the bore of the scanner under interactive MRI guidance while overcoming the challenges inherent to MRI-guided procedures. This work presents the development of the robotic system in terms of requirements definition, clinical workflow development, mechanism optimization, control system design, and experimental evaluation. A steerable needle is beneficial for interventional procedures thanks to its capability to produce a curved path, avoiding anatomical obstacles or compensating for needle placement errors. Two kinds of steerable needle are discussed: the asymmetric-tip needle and the concentric-tube cannula. A novel Gaussian-based ContinUous Rotation and Variable-curvature (CURV) model is proposed to steer the asymmetric-tip needle, enabling variable curvature of the needle trajectory with independent control of needle rotation and insertion.
    The concentric-tube cannula, in contrast, is suitable for clinical applications where a curved trajectory is needed without relying on tissue interaction forces. This dissertation addresses fundamental challenges in developing and deploying MRI-compatible robotic systems and enables the technologies for MRI-guided needle-based interventions. This study applied and evaluated these techniques in a system for prostate biopsy that is currently in clinical trials, developed a neurosurgery robot prototype for interstitial thermal therapy of brain cancer under MRI guidance, and demonstrated needle steering using both asymmetric-tip and pre-bent concentric-tube cannula approaches on a testbed.
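    The CURV model itself is not reproduced here, but the intuition behind variable-curvature steering of an asymmetric-tip needle can be illustrated with the classic duty-cycling approximation, in which spinning the needle for a fraction of each insertion step averages its natural curvature toward zero. All parameter values below are illustrative:

```python
import numpy as np

def effective_curvature(kappa_max, duty_cycle):
    """Duty-cycling approximation for bevel-tip needles: rotating the
    needle for a fraction `duty_cycle` of each insertion step scales
    the natural curvature kappa_max down toward zero."""
    return kappa_max * (1.0 - duty_cycle)

def planar_path(kappa, step, n_steps):
    """Integrate a planar constant-curvature (unicycle) needle path."""
    x, y, heading = 0.0, 0.0, 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        x += step * np.cos(heading)
        y += step * np.sin(heading)
        heading += kappa * step  # curvature bends the heading each step
        path.append((x, y))
    return np.array(path)

# Full duty cycle -> straight path; zero duty cycle -> maximum bending
straight = planar_path(effective_curvature(0.02, 1.0), 1.0, 50)
curved = planar_path(effective_curvature(0.02, 0.0), 1.0, 50)
```

    Models like CURV refine this idea by controlling rotation and insertion independently rather than through a fixed duty cycle.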

    Evaluation of optical tracking and augmented reality for needle navigation in sacral nerve stimulation

    Background and objective: Sacral nerve stimulation (SNS) is a minimally invasive procedure in which an electrode lead is implanted through the sacral foramina to stimulate the nerve that modulates colonic and urinary functions. One of the most crucial steps in SNS procedures is the placement of the tined lead close to the sacral nerve; however, needle insertion is very challenging for surgeons. Several x-ray projections are required to interpret the needle position correctly, and in many cases multiple punctures are needed, increasing surgical time and the patient's discomfort and pain. In this work, we propose and evaluate two different navigation systems to guide electrode placement in SNS surgeries, designed to reduce surgical time, minimize patient discomfort, and improve surgical outcomes. Methods: For the first alternative, we developed open-source navigation software to guide electrode placement by real-time needle tracking with an optical tracking system (OTS). In the second method, we present a smartphone-based AR application that displays virtual guidance elements directly on the affected area, using a 3D-printed reference marker placed on the patient; this guidance facilitates needle insertion along a predefined trajectory. Both techniques were evaluated to determine whether either obtained better results than the current surgical procedure. To compare the proposals with the clinical method, we developed an x-ray software tool that calculates a digitally reconstructed radiograph, simulating the fluoroscopy acquisitions made during the procedure. Twelve physicians (inexperienced and experienced users) performed needle insertions through several specific targets to evaluate the alternative SNS guidance methods on a realistic patient-based phantom.
    Results: With each navigation solution, we observed that users took less average time to complete each insertion (36.83 s and 44.43 s for the OTS and AR methods, respectively) and needed fewer average punctures to reach the target (1.23 and 1.96 for the OTS and AR methods, respectively) than when following the standard clinical method (189.28 s and 3.65 punctures). Conclusions: We have shown two navigation alternatives that could improve surgical outcomes by significantly reducing needle insertions, surgical time, and patient pain in SNS procedures. We believe these solutions are feasible for training surgeons and may even replace current SNS clinical procedures.
    Research supported by projects PI18/01625 and AC20/00102 (Ministerio de Ciencia, Innovación y Universidades, Instituto de Salud Carlos III, Asociación Española Contra el Cáncer and European Regional Development Fund "Una manera de hacer Europa"), IND2018/TIC-9753 (Comunidad de Madrid) and project PerPlanRT (ERA PerMed). Funding for APC: Universidad Carlos III de Madrid (Read & Publish Agreement CRUE-CSIC 2022)
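    The digitally reconstructed radiograph used to simulate fluoroscopy is, at heart, a set of line integrals of attenuation through a CT volume. A toy parallel-beam sketch (the clinical tool presumably models cone-beam geometry; all values illustrative):

```python
import numpy as np

def drr_parallel(volume, axis=0):
    """Toy digitally reconstructed radiograph: integrate voxel
    attenuation along parallel rays (here, one volume axis) and map
    to x-ray-like intensity via Beer-Lambert, I = I0 * exp(-integral)."""
    line_integrals = volume.sum(axis=axis)
    return np.exp(-line_integrals)  # I0 = 1

# A dense cube inside an empty volume casts a darker "shadow"
vol = np.zeros((16, 16, 16))
vol[4:12, 4:12, 4:12] = 0.1  # attenuation coefficient per voxel
drr = drr_parallel(vol, axis=0)
```

    Pixels behind the cube receive a nonzero line integral and come out darker than the unattenuated background, which is what lets a simulated fluoroscopy view stand in for the real one during evaluation.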

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), most devices remain focused on improving end-effector dexterity and precision, as well as improving access to minimally invasive surgeries. This paper provides a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions for increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty with depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings of current optimization algorithms for surgical robots (such as YOLO and LSTM) while providing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.
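    Tool-to-organ collision detection, one of the problems the review proposes mitigations for, is often bootstrapped with a cheap broad-phase test before any mesh-level check. A hedged sketch using sphere proxies (geometry illustrative):

```python
import numpy as np

def sphere_collision(tool_centers, tool_radius, organ_centers, organ_radius):
    """Broad-phase tool-to-organ collision check: both shapes are
    approximated by spheres, and a collision is any pair of sphere
    centers closer than the sum of the radii."""
    t = np.asarray(tool_centers, float)[:, None, :]
    o = np.asarray(organ_centers, float)[None, :, :]
    dist = np.linalg.norm(t - o, axis=2)  # all pairwise center distances
    return bool((dist < tool_radius + organ_radius).any())

# Tool shaft sampled as three spheres passing near an organ surface
tool = [[0, 0, 0], [0, 0, 5], [0, 0, 10]]
organ = [[0, 3.0, 5.0]]
clear = sphere_collision(tool, 1.0, organ, 1.5)   # closest gap 3.0 > 2.5
contact = sphere_collision(tool, 1.0, organ, 2.5) # closest gap 3.0 < 3.5
```

    Only pairs flagged by a broad phase like this need the expensive narrow-phase mesh intersection test.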

    Towards Closed-loop, Robot Assisted Percutaneous Interventions under MRI Guidance

    Image-guided therapy under MRI guidance has been a focused research area over the past decade, and various MRI-guided robotic devices have been developed and used clinically for percutaneous interventions such as prostate biopsy, brachytherapy, and tissue ablation. Though MRI provides better soft-tissue contrast than Computed Tomography and Ultrasound, it poses various challenges: constrained space, less ergonomic patient access, and limited material choices due to its high magnetic field. Even with advancements in MRI-compatible actuation methods and the robotic devices that use them, most MRI-guided interventions are still open-loop in nature and rely on preoperative or intraoperative images. In this thesis, an intraoperative MRI-guided robotic system for prostate biopsy is presented, comprising an MRI-compatible 4-DOF robotic manipulator, a robot controller, and a control application with a Clinical User Interface (CUI) and surgical planning applications (3D Slicer and RadVision). This system utilizes intraoperative images acquired after each full or partial needle insertion for needle-tip localization. The presented system was approved by the Institutional Review Board at Brigham and Women's Hospital (BWH) and has been used in 30 patient trials. The successful translation of such a system utilizing intraoperative MR images motivated the development of a system architecture for closed-loop, real-time MRI-guided percutaneous interventions. Robot-assisted, closed-loop intervention could help in accurate positioning and localization of the therapy delivery instrument, improve physician and patient comfort, and allow real-time therapy monitoring. Utilizing real-time MR images could also allow correction of the surgical instrument trajectory and controlled therapy delivery.
    Two applications validating the presented architecture, closed-loop needle steering and MRI-guided brain tumor ablation, are demonstrated under real-time MRI guidance.
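    The closed-loop idea described above (image the tip, measure the error, correct the trajectory) can be caricatured as a proportional feedback loop. This is a toy sketch with illustrative gains, not the system's actual controller:

```python
import numpy as np

def closed_loop_insertion(target, start, gain=0.5, steps=20):
    """Toy closed-loop needle control: after each imaging cycle the
    measured tip error drives a proportional correction, mimicking
    image-feedback trajectory updates."""
    tip = np.asarray(start, float).copy()
    target = np.asarray(target, float)
    for _ in range(steps):
        error = target - tip   # error as measured in the intraoperative image
        tip += gain * error    # corrected advance toward the target
    return tip

final = closed_loop_insertion(target=[0.0, 0.0, 60.0], start=[5.0, -3.0, 0.0])
residual = float(np.linalg.norm(final - np.array([0.0, 0.0, 60.0])))
```

    The contrast with open-loop insertion is that the error term is re-measured from images every cycle, so disturbances such as tissue deformation are corrected rather than accumulated.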

    Evaluating Human Performance for Image-Guided Surgical Tasks

    The following work focuses on the objective evaluation of human performance for two different interventional tasks: targeted prostate biopsy using a tracked biopsy device, and external ventricular drain placement using a mobile augmented reality device for visualization and guidance. In both tasks, a human performance methodology was utilized that respects the trade-off between speed and accuracy for users conducting a series of targeting tasks with each device. This work outlines the development and application of performance evaluation methods using these devices, as well as details of the implementation of the mobile AR application. It was determined that the Fitts' Law methodology can be applied to evaluate tasks performed in each surgical scenario and was sensitive enough to differentiate performance across a range spanning experienced and novice users. This methodology is valuable for the future development of training modules for these and other medical devices, and can provide details about the underlying characteristics of the devices and how they can be optimized with respect to human performance.
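    Fitts' Law models movement time as a linear function of an index of difficulty. A minimal sketch of the Shannon formulation and the regression used in such evaluations, on synthetic data:

```python
import numpy as np

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty (bits)."""
    return np.log2(distance / width + 1.0)

def fit_fitts(ids, times):
    """Least-squares fit of movement time MT = a + b * ID."""
    b, a = np.polyfit(ids, times, 1)  # polyfit returns [slope, intercept]
    return a, b

# Illustrative targeting data: harder targets take longer
ids = np.array([index_of_difficulty(d, w)
                for d, w in [(40, 10), (80, 10), (160, 10), (160, 5)]])
times = 0.2 + 0.15 * ids  # synthetic and perfectly linear for the sketch
a, b = fit_fitts(ids, times)
```

    The fitted slope `b` (seconds per bit) and its inverse, throughput, are the quantities that let such a methodology compare novice and expert users across devices.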