
    Neurosurgical Ultrasound Pose Estimation Using Image-Based Registration and Sensor Fusion - A Feasibility Study

    Modern neurosurgical procedures often rely on computer-assisted real-time guidance using multiple medical imaging modalities. State-of-the-art commercial products enable the fusion of pre-operative with intra-operative images (e.g., magnetic resonance [MR] with ultrasound [US] images), as well as the on-screen visualization of procedures in progress. In so doing, US images can be employed as a template to which pre-operative images can be registered, to correct for anatomical changes, to provide live-image feedback, and consequently to improve confidence when making resection-margin decisions near eloquent regions during tumour surgery. In spite of the potential for tracked ultrasound to improve many neurosurgical procedures, it is not widely used. State-of-the-art systems are handicapped by optical tracking's need for a consistent line of sight, by the need to keep tracked rigid bodies clean and rigidly fixed, and by a calibration workflow. The goal of this work is to improve the value offered by co-registered ultrasound images without the workflow drawbacks of conventional systems. The novel work in this thesis includes the exploration and development of a GPU-enabled 2D-3D multi-modal registration algorithm based on the existing LC2 metric, and the use of this registration algorithm in the context of a sensor- and image-fusion algorithm. The work presented here is a motivating step towards a heterogeneous tracking framework for image-guided interventions in which knowledge from intraoperative imaging, pre-operative imaging, and (potentially disjoint) wireless sensors in the surgical field is seamlessly integrated for the benefit of the surgeon. The technology described in this thesis, inspired by advances in robot localization, demonstrates how inaccurate pose data from disjoint sources can produce a localization system greater than the sum of its parts.
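The closing claim, that inaccurate pose data from disjoint sources can combine into something better than any single source, can be illustrated with a minimal inverse-variance fusion sketch in the spirit of classic robot-localization sensor fusion. This is an illustrative assumption, not the thesis implementation; the function name and all numbers are hypothetical.

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Fuse independent 3D position estimates by inverse-variance weighting.

    estimates: (N, 3) array of position estimates from disjoint sources
               (e.g., image-based registration and a wireless sensor).
    variances: (N,) per-source isotropic variances.
    Returns the fused estimate and its (smaller) variance.
    """
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused = (weights[:, None] * estimates).sum(axis=0) / weights.sum()
    fused_var = 1.0 / weights.sum()  # always below the smallest input variance
    return fused, fused_var

# Two sources: a low-variance tracker and a noisier image registration.
fused, var = fuse_estimates([[10.0, 0.0, 0.0], [10.4, 0.2, -0.2]],
                            [1.0, 4.0])
print(fused, var)  # fused estimate lies closer to the lower-variance source
```

The fused variance is strictly smaller than either input variance, which is the precise sense in which the combination is "greater than the sum of its parts".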

    Augmented Reality Based Surgical Navigation of Complex Pelvic Osteotomies

    Augmented Reality Based Surgical Navigation of Complex Pelvic Osteotomies—A Feasibility Study on Cadavers, by Joëlle Ackermann 1,2,†, Florentin Liebmann 1,2,*,†, Armando Hoch 3, Jess G. Snedeker 2,3, Mazda Farshad 3, Stefan Rahm 3, Patrick O. Zingg 3 and Philipp Fürnstahl 1. 1 Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland; 2 Laboratory for Orthopaedic Biomechanics, ETH Zurich, 8093 Zurich, Switzerland; 3 Department of Orthopedics, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland. * Author to whom correspondence should be addressed. † These authors contributed equally to this work. Academic Editor: Jiro Tanaka. Appl. Sci. 2021, 11(3), 1228; https://doi.org/10.3390/app11031228. Received: 20 December 2020 / Revised: 13 January 2021 / Accepted: 25 January 2021 / Published: 29 January 2021. (This article belongs to the Special Issue Artificial Intelligence (AI) and Virtual Reality (VR) in Biomechanics.) Abstract: Augmented reality (AR)-based surgical navigation may offer new possibilities for safe and accurate surgical execution of complex osteotomies. In this study we investigated the feasibility of navigating the periacetabular osteotomy of Ganz (PAO), known as one of the most complex orthopedic interventions, on two cadaveric pelves under realistic operating room conditions. Preoperative planning was conducted on computed tomography (CT)-reconstructed 3D models using in-house developed software, which allowed creating cutting-plane objects for planning the osteotomies and reorienting the acetabular fragment. An AR application was developed comprising point-based registration, motion compensation, and guidance for the osteotomies as well as for fragment reorientation.
Navigation accuracy was evaluated on CT-reconstructed 3D models, resulting in an error of 10.8 mm for osteotomy starting points and 5.4° for osteotomy directions. The reorientation errors were 6.7°, 7.0° and 0.9° for the x-, y- and z-axis, respectively. The average postoperative error of the LCE angle was 4.5°. Our study demonstrated that AR-based execution of complex osteotomies is feasible. Fragment realignment navigation needs further improvement, although it is already more accurate than the state of the art in PAO surgery.
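The point-based registration step mentioned above is commonly solved with a closed-form least-squares rigid alignment (the Kabsch/Umeyama method). The sketch below is a generic version of that standard technique, not the authors' implementation; the function name is hypothetical.

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform (R, t) mapping landmark set P onto Q.

    P, Q: (N, 3) arrays of corresponding 3D points (N >= 3, non-collinear).
    Returns rotation R (3x3) and translation t (3,) minimizing ||R P + t - Q||.
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)       # centroids
    H = (P - cp).T @ (Q - cq)                     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

With exact correspondences the recovered transform reproduces the ground truth; with noisy landmark digitization it yields the least-squares best fit, whose residual is the fiducial registration error.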

    Augmented Reality-based Feedback for Technician-in-the-loop C-arm Repositioning

    Interventional C-arm imaging is crucial to percutaneous orthopedic procedures, as it enables the surgeon to monitor the progress of surgery at the anatomical level. Minimally invasive interventions require repeated acquisition of X-ray images from different anatomical views to verify tool placement. Achieving and reproducing these views often comes at the cost of increased surgical time and radiation dose to both patient and staff. This work proposes a marker-free "technician-in-the-loop" Augmented Reality (AR) solution for C-arm repositioning. The X-ray technician operating the C-arm intraoperatively is equipped with a head-mounted display capable of recording desired C-arm poses in 3D via an integrated infrared sensor. For C-arm repositioning to a particular target view, the recorded C-arm pose is restored as a virtual object and visualized in an AR environment, serving as a perceptual reference for the technician. We conduct experiments in a setting simulating orthopedic trauma surgery. Our proof-of-principle findings indicate that the proposed system can reduce the average of 2.76 X-ray images currently required per desired view to zero, suggesting a substantial reduction of radiation dose during C-arm repositioning. The proposed AR solution is a first step towards facilitating communication between the surgeon and the surgical staff, improving the quality of surgical image acquisition, and enabling context-aware guidance for the surgery rooms of the future. The concept of technician-in-the-loop design will become relevant to various interventions given the expected advances in sensing and wearable computing in the near future.
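Restoring a recorded C-arm pose amounts to computing the offset between the current pose and the stored one. A minimal sketch of that comparison follows; the function name and matrices are hypothetical illustrations, not the paper's code.

```python
import numpy as np

def pose_offset(T_current, T_target):
    """Translation and rotation needed to move from the current C-arm pose
    to the recorded target pose. Both poses are 4x4 homogeneous matrices.

    Returns (translation magnitude, rotation angle in degrees)."""
    T_rel = np.linalg.inv(T_current) @ T_target   # relative transform
    trans = float(np.linalg.norm(T_rel[:3, 3]))
    # Rotation angle from the trace of the relative rotation matrix.
    cos_angle = (np.trace(T_rel[:3, :3]) - 1.0) / 2.0
    rot_deg = float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
    return trans, rot_deg
```

When both offsets fall below a chosen tolerance, the target view has been reproduced, without acquiring any confirmatory X-ray image.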

    Optimization of craniosynostosis surgery: virtual planning, intraoperative 3D photography and surgical navigation

    International Mention in the doctoral degree. Craniosynostosis is a congenital defect defined as the premature fusion of one or more cranial sutures. This fusion leads to growth restriction and deformation of the cranium, caused by compensatory expansion parallel to the fused sutures. Surgical correction is the preferred treatment in most cases, to excise the fused sutures and normalize cranial shape. Although multiple technological advancements have arisen in the surgical management of craniosynostosis, interventional planning and surgical correction still depend heavily on the subjective assessment and artistic judgment of craniofacial surgeons. There is therefore high variability in individual surgeon performance and, thus, in surgical outcomes. The main objective of this thesis was to explore different approaches to improve the surgical management of craniosynostosis by reducing subjectivity at every stage of the process, from the preoperative virtual planning phase to the intraoperative performance. First, we developed a novel framework for automatic planning of craniosynostosis surgery that enables: calculating a patient-specific normative reference shape to target; estimating optimal bone fragments for remodeling; and computing the most appropriate configuration of fragments to achieve the desired target cranial shape. Our results showed that the automatic plans were accurate and achieved adequate overcorrection with respect to normative morphology. Surgeons' feedback indicated that integrating this technology could increase the accuracy and reduce the duration of the preoperative planning phase. Second, we validated the use of hand-held 3D photography for intraoperative evaluation of the surgical outcome. The accuracy of this technology for 3D modeling and morphology quantification was evaluated using computed tomography imaging as the gold standard.
Our results demonstrated that 3D photography can be used to perform accurate 3D reconstructions of the anatomy during surgical interventions and to measure morphological metrics that provide feedback to the surgical team. This technology is a valuable alternative to computed tomography imaging and can easily be integrated into the current surgical workflow to assist during the intervention. We also developed an intraoperative navigation system to provide real-time guidance during craniosynostosis surgeries. This system, based on optical tracking, enables recording the positions of remodeled bone fragments and comparing them with the target virtual surgical plan. Our navigation system relies on patient-specific surgical guides, which fit onto the patient's anatomy, to perform patient-to-image registration. In addition, our workflow requires neither immobilization of the patient's head nor invasive attachment of dynamic reference frames. After testing our system in five craniosynostosis surgeries, our results demonstrated high navigation accuracy and optimal surgical outcomes in all cases. Furthermore, the use of navigation did not substantially increase operative time. Finally, we investigated the use of augmented reality technology as an alternative to navigation for surgical guidance in craniosynostosis surgery. We developed an augmented reality application to visualize the virtual surgical plan overlaid on the surgical field, indicating the predefined osteotomy locations and target bone fragment positions. Our results demonstrated that augmented reality provides sub-millimetric accuracy when guiding both the osteotomy and remodeling phases during open cranial vault remodeling. Surgeons' feedback indicated that this technology could be integrated into the current surgical workflow for the treatment of craniosynostosis. To conclude, in this thesis we evaluated multiple technological advancements to improve the surgical management of craniosynostosis.
The integration of these developments into the surgical workflow of craniosynostosis treatment will positively impact surgical outcomes, increase the efficiency of surgical interventions, and reduce variability between surgeons and institutions. Programa de Doctorado en Ciencia y Tecnología Biomédica, Universidad Carlos III de Madrid. Thesis committee: President, Norberto Antonio Malpica González; Secretary, María Arrate Muñoz Barrutia; Member, Tamas Ung
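Comparing a remodeled cranial shape (from 3D photography or tracked fragments) against the planned target shape reduces, at its simplest, to a nearest-neighbour surface-distance metric. The brute-force sketch below illustrates that generic idea; it is an assumption for illustration, not the thesis software.

```python
import numpy as np

def mean_surface_distance(achieved, planned):
    """Mean nearest-neighbour distance from achieved surface points to the
    planned target surface points (brute force; fine for small point sets)."""
    a = np.asarray(achieved, float)[:, None, :]
    p = np.asarray(planned, float)[None, :, :]
    d = np.linalg.norm(a - p, axis=2)       # all pairwise distances
    return float(d.min(axis=1).mean())      # closest planned point per achieved point
```

For dense clinical meshes a k-d tree (e.g., `scipy.spatial.cKDTree`) would replace the quadratic pairwise computation, but the metric itself is unchanged.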

    Advanced tracking and image registration techniques for intraoperative radiation therapy

    International Mention in the doctoral degree. Intraoperative electron radiation therapy (IOERT) is a technique used to deliver radiation to the surgically opened tumor bed without irradiating healthy tissue. Treatment planning systems and mobile linear accelerators enable clinicians to optimize the procedure, minimize stress in the operating room (OR), and avoid transferring the patient to a dedicated radiation room. However, placement of the radiation collimator over the tumor bed requires a validation methodology to ensure correct delivery of the dose prescribed in the treatment planning system. In this dissertation, we address three well-known limitations of IOERT: applicator positioning over the tumor bed, docking of the mobile linear accelerator gantry with the applicator, and validation of the prescribed dose delivery. This thesis demonstrates that these limitations can be overcome by positioning the applicator appropriately with respect to the patient's anatomy. The main objective of the study was to assess technological and procedural alternatives for improving IOERT performance and resolving the problems of uncertainty described above. Image-to-world registration, multicamera optical trackers, multimodal imaging techniques, and mobile linear accelerator docking are addressed in the context of IOERT. IOERT is carried out by a multidisciplinary team in a highly complex environment with special tracking needs owing to the characteristics of its working volume (i.e., large and prone to occlusions), in addition to the requirements of accuracy. The first part of this dissertation presents the validation of a commercial multicamera optical tracker in terms of accuracy, sensitivity to miscalibration, camera occlusions, and detection of tools in a feasible surgical setup. It also proposes an automatic miscalibration-detection protocol that satisfies the IOERT requirements of automaticity and speed.
We show that the multicamera tracker is suitable for IOERT navigation and demonstrate the feasibility of the miscalibration-detection protocol in clinical setups. Image-to-world registration is one of the main issues in image-guided applications where the field of interest and/or the number of possible anatomical localizations is large, as in IOERT. In the second part of this dissertation, a registration algorithm for image-guided surgery based on line-shaped fiducials (line-based registration) is proposed and validated. Line-based registration decreases acquisition time during surgery and achieves better registration accuracy than other published algorithms. In the third part of this dissertation, we integrate a commercial low-cost ultrasound transducer and a cone-beam CT C-arm with an optical tracker for image-guided interventions, enabling surgical navigation, and we explore image-based registration techniques for both modalities. In the fourth part of the dissertation, a navigation system based on optical tracking for docking the mobile linear accelerator to the radiation applicator is assessed. This system improves safety and reduces procedure time. It tracks the prescribed collimator location to solve for the movements that the linear accelerator should perform to reach the docking position, and warns the user about potentially unachievable arrangements before the actual procedure. A software application was implemented to use this system in the OR, where it was also evaluated to assess the improvement in docking speed. Finally, in the last part of the dissertation, we present and assess the installation setup for a navigation system in a dedicated IOERT OR, determine the steps necessary for the IOERT process, identify workflow limitations, and evaluate the feasibility of integrating the system in a real OR.
The navigation system safeguards the sterile conditions of the OR, keeps the space available for surgeons clear, and is suitable for any similar dedicated IOERT OR. Programa Oficial de Doctorado en Multimedia y Comunicaciones. Thesis committee: President, Raúl San José Estépar; Secretary, María Arrate Muñoz Barrutia; Member, Carlos Ferrer Albiac
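A line-based registration of the kind proposed here minimizes point-to-line residuals rather than point-to-point ones. As a hedged illustration of the residual such a method evaluates (a generic sketch, not the dissertation's LBR algorithm), consider:

```python
import numpy as np

def point_to_line_residual(p, a, d):
    """Perpendicular distance from point p to the line through a with direction d."""
    p, a, d = (np.asarray(v, float) for v in (p, a, d))
    d = d / np.linalg.norm(d)           # unit direction
    v = p - a
    return float(np.linalg.norm(v - (v @ d) * d))  # reject the parallel component

def rms_line_residual(points, lines):
    """RMS of point-to-line distances over matched (point, (anchor, direction)) pairs,
    a natural cost for registering digitized points to line-shaped fiducials."""
    r = [point_to_line_residual(p, a, d) for p, (a, d) in zip(points, lines)]
    return float(np.sqrt(np.mean(np.square(r))))
```

An optimizer over the rigid transform parameters would drive this RMS residual to a minimum; points digitized anywhere along a fiducial contribute, which is what shortens the acquisition time compared with point fiducials.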

    Fusion and visualization of intraoperative cortical images with preoperative models for epilepsy surgical planning and guidance.

    OBJECTIVE: During epilepsy surgery, it is important for the surgeon to correlate the preoperative cortical morphology (from preoperative images) with the intraoperative environment. Augmented Reality (AR) provides a solution for combining the real environment with virtual models. However, AR usually requires specialized displays, and its effectiveness in surgery still needs to be evaluated. The objective of this research was to develop an alternative approach that provides enhanced visualization by fusing a direct (photographic) view of the surgical field with the 3D patient model during image-guided epilepsy surgery. MATERIALS AND METHODS: We correlated the preoperative plan with the intraoperative surgical scene, first by a manual landmark-based registration and then by an intensity-based perspective 3D-2D registration for camera pose estimation. The 2D photographic image was then texture-mapped onto the 3D preoperative model using the solved camera pose. In the proposed method, we employ direct volume rendering to obtain a perspective view of the brain image using GPU-accelerated ray casting. The algorithm was validated in a phantom study and also in the clinical environment with a neuronavigation system. RESULTS: In the phantom experiment, the 3D Mean Registration Error (MRE) was 2.43 ± 0.32 mm with a success rate of 100%. In the clinical experiment, the 3D MRE was 5.15 ± 0.49 mm, with a 2D in-plane error of 3.30 ± 1.41 mm. A clinical application of our fusion method for enhanced and augmented visualization, providing integrated image and functional guidance during neurosurgery, is also presented. CONCLUSIONS: This paper presents an alternative to a sophisticated AR environment for assisting in epilepsy surgery, whereby a real intraoperative scene is mapped onto the surface model of the brain. In contrast to the AR approach, this method needs no specialized display equipment.
Moreover, it requires minimal changes to existing systems and workflows, and is therefore well suited to the OR environment. In the phantom and in vivo clinical experiments, we demonstrate that the fusion method can achieve a level of accuracy sufficient for the requirements of epilepsy surgery.
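The reported 2D in-plane error follows from reprojecting 3D landmarks with the solved camera pose and measuring pixel residuals against the observed landmark positions. A minimal pinhole-projection sketch (illustrative names and numbers, not the paper's code):

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of (N, 3) points X into pixel coordinates,
    given intrinsics K (3x3) and camera pose (R, t)."""
    Xc = (R @ np.asarray(X, float).T).T + t   # world -> camera frame
    uv = (K @ Xc.T).T                         # homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]             # perspective divide

def inplane_error(K, R, t, X, uv_obs):
    """Mean 2D reprojection (in-plane) error in pixels."""
    return float(np.linalg.norm(project(K, R, t, X) - np.asarray(uv_obs, float),
                                axis=1).mean())
```

The intensity-based 3D-2D registration described above can be viewed as searching over (R, t) to make the rendered perspective view agree with the photograph; this reprojection error is how the resulting pose is scored against landmarks.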

    Robotic System Development for Precision MRI-Guided Needle-Based Interventions

    This dissertation describes the development of a methodology for implementing robotic systems for interventional procedures under intraoperative Magnetic Resonance Imaging (MRI) guidance. MRI is an ideal imaging modality for surgical guidance of diagnostic and therapeutic procedures, thanks to its ability to perform high-resolution, real-time imaging with high soft-tissue contrast and without ionizing radiation. However, the strong magnetic field, the sensitivity to radio-frequency signals, and the tightly confined scanner bore pose great challenges for developing robotic systems within the MRI environment. Potential solutions are discussed for engineering topics related to the development of MRI-compatible electro-mechanical systems and the modeling of steerable-needle interventions. A robotic framework is developed based on a modular design approach, supporting a variety of MRI-guided interventional procedures, with stereotactic neurosurgery and prostate cancer therapy as two driving exemplary applications. A piezoelectrically actuated electro-mechanical system is designed to provide precise needle placement in the bore of the scanner under interactive MRI guidance, while overcoming the challenges inherent to MRI-guided procedures. This work presents the development of the robotic system in terms of requirements definition, clinical workflow development, mechanism optimization, control system design, and experimental evaluation. A steerable needle is beneficial for interventional procedures because of its capability to produce curved paths, avoiding anatomical obstacles or compensating for needle placement errors. Two kinds of steerable needles are discussed: the asymmetric-tip needle and the concentric-tube cannula. A novel Gaussian-based ContinUous Rotation and Variable-curvature (CURV) model is proposed to steer asymmetric-tip needles; it enables variable curvature of the needle trajectory with independent control of needle rotation and insertion.
The concentric-tube cannula, in contrast, is suitable for clinical applications where a curved trajectory is needed without relying on tissue-interaction forces. This dissertation addresses fundamental challenges in developing and deploying MRI-compatible robotic systems and enables the technologies for MRI-guided needle-based interventions. The study applied and evaluated these techniques in a system for prostate biopsy that is currently in clinical trials, developed a neurosurgery robot prototype for interstitial thermal therapy of brain cancer under MRI guidance, and demonstrated needle steering using both the asymmetric-tip and pre-bent concentric-tube cannula approaches on a testbed.
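Bevel-tip (asymmetric-tip) needle steering is often modeled kinematically: the tip advances along its heading while the bevel bends the path with curvature κ. The planar sketch below illustrates that basic textbook model under stated assumptions; it is far simpler than the Gaussian-based CURV model proposed in the dissertation, which additionally varies curvature through continuous rotation.

```python
import numpy as np

def integrate_needle_path(kappa, ds, steps, pose=(0.0, 0.0, 0.0)):
    """Integrate a planar constant-curvature bevel-tip needle path.

    kappa: path curvature induced by the bevel (1/length units).
    ds:    insertion distance per step.
    pose:  initial (x, y, heading) of the tip.
    Returns a (steps + 1, 2) array of tip positions.
    """
    x, y, th = pose
    path = [(x, y)]
    for _ in range(steps):
        x += ds * np.cos(th)
        y += ds * np.sin(th)
        th += ds * kappa          # bevel-induced turning per unit insertion
        path.append((x, y))
    return np.array(path)
```

Setting kappa to zero recovers straight insertion; rotating the needle shaft flips the sign of the effective curvature, which is what steering schemes (duty cycling, or the CURV model's continuous-rotation profiles) exploit to modulate the trajectory.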