
    Review of robotic technology for keyhole transcranial stereotactic neurosurgery

    Research on stereotactic apparatus for guiding surgical devices began in 1908, yet a major part of today's stereotactic neurosurgeries still relies on stereotactic frames developed almost half a century ago. Robots excel at handling spatial information and are thus obvious candidates for guiding instrumentation along precisely planned trajectories. In this review, we introduce the concept of stereotaxy and describe a standard stereotactic neurosurgery. Neurosurgeons' expectations and demands regarding the role of robots as assistive tools are also addressed. We list the most successful robotic systems developed specifically for, or capable of executing, stereotactic neurosurgery. A critical review is presented for each robotic system, emphasizing the differences between them and detailing positive features and drawbacks. An analysis of the listed robotic systems' features is also undertaken in the context of robotic application in stereotactic neurosurgery. Finally, we discuss the current perspective and future directions of robotic technology in this field. All robotic systems follow a very similar and structured workflow despite the technical differences that set them apart. No system unequivocally stands out as the best. The trend of technological progress points toward miniaturized, cost-effective solutions with more intuitive interfaces. This work has been partially financed by the NETT Project (FP7-PEOPLE-2011-ITN-289146), ACTIVE Project (FP7-ICT-2009-6-270460), and an FCT PhD grant (ref. SFRH/BD/86499/2012).
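
    The core guidance task described here, whether performed by a frame or a robot, is aligning an instrument with a preplanned entry-to-target line. Below is a minimal sketch of that planning geometry in Python, using hypothetical image-space coordinates rather than any reviewed system's planning software.

import numpy as np

# Hypothetical entry and target points in image (e.g., CT) coordinates, in mm.
entry = np.array([32.0, -14.5, 60.0])
target = np.array([12.0, 5.0, 18.0])

# Insertion depth is the Euclidean distance from entry to target.
depth_mm = np.linalg.norm(target - entry)

# Unit direction the instrument must follow; a frame or robot aligns its
# tool axis with this vector before advancing to the planned depth.
direction = (target - entry) / depth_mm

print(f"insertion depth: {depth_mm:.1f} mm, direction: {direction.round(3)}")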

    Advanced tracking and image registration techniques for intraoperative radiation therapy

    Intraoperative electron radiation therapy (IOERT) is a technique used to deliver radiation to the surgically opened tumor bed without irradiating healthy tissue. Treatment planning systems and mobile linear accelerators enable clinicians to optimize the procedure, minimize stress in the operating room (OR) and avoid transferring the patient to a dedicated radiation room. However, placement of the radiation collimator over the tumor bed requires a validation methodology to ensure correct delivery of the dose prescribed in the treatment planning system. In this dissertation, we address three well-known limitations of IOERT: applicator positioning over the tumor bed, docking of the mobile linear accelerator gantry with the applicator, and validation of the prescribed dose delivery. This thesis demonstrates that these limitations can be overcome by positioning the applicator appropriately with respect to the patient's anatomy. The main objective of the study was to assess technological and procedural alternatives for improving IOERT performance and resolving problems of uncertainty. Image-to-world registration, multicamera optical trackers, multimodal imaging techniques and mobile linear accelerator docking are addressed in the context of IOERT. IOERT is carried out by a multidisciplinary team in a highly complex environment that has special tracking needs owing to the characteristics of its working volume (i.e., large and prone to occlusions), in addition to the requirements for accuracy. The first part of this dissertation presents the validation of a commercial multicamera optical tracker in terms of accuracy, sensitivity to miscalibration, camera occlusions and detection of tools using a feasible surgical setup. It also proposes an automatic miscalibration detection protocol that satisfies the IOERT requirements of automaticity and speed. We show that the multicamera tracker is suitable for IOERT navigation and demonstrate the feasibility of the miscalibration detection protocol in clinical setups. Image-to-world registration is one of the main issues in image-guided applications where the field of interest and/or the number of possible anatomical localizations is large, such as IOERT. In the second part of this dissertation, a registration algorithm for image-guided surgery based on line-shaped fiducials (line-based registration) is proposed and validated. Line-based registration decreases acquisition time during surgery and achieves better registration accuracy than other published algorithms. In the third part of this dissertation, we integrate a commercial low-cost ultrasound transducer and a cone-beam CT C-arm with an optical tracker for image-guided interventions to enable surgical navigation, and we explore image-based registration techniques for both modalities. In the fourth part of the dissertation, a navigation system based on optical tracking for docking the mobile linear accelerator to the radiation applicator is assessed. This system improves safety and reduces procedure time. The system tracks the prescribed collimator location to compute the movements that the linear accelerator should perform to reach the docking position and warns the user about potentially unachievable arrangements before the actual procedure. A software application was implemented to use this system in the OR, where it was also evaluated to assess the improvement in docking speed.
Finally, in the last part of the dissertation, we present and assess the installation setup for a navigation system in a dedicated IOERT OR, determine the steps necessary for the IOERT process, identify workflow limitations and evaluate the feasibility of integrating the system in a real OR. The navigation system safeguards the sterile conditions of the OR, keeps the space available to surgeons clear, and is suitable for any similar dedicated IOERT OR.
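
    The thesis's line-based registration algorithm is not reproduced here, but the image-to-world step it addresses can be illustrated with standard point-based rigid registration: given corresponding fiducial positions in image and tracker (world) coordinates, a least-squares rotation and translation is recovered via SVD. A minimal sketch under those assumptions, with synthetic fiducials:

import numpy as np

def rigid_register(image_pts, world_pts):
    """Least-squares rigid transform (Arun/Kabsch) mapping image_pts onto world_pts.

    Both inputs are N x 3 arrays of corresponding fiducial positions; returns a
    3 x 3 rotation R and translation t such that world ~= R @ image + t.
    """
    ci, cw = image_pts.mean(axis=0), world_pts.mean(axis=0)
    H = (image_pts - ci).T @ (world_pts - cw)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cw - R @ ci
    return R, t

# Hypothetical fiducials: image-space positions and their tracked world-space counterparts.
image_pts = np.array([[0.0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100]])
theta = np.deg2rad(20)
true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
world_pts = image_pts @ true_R.T + np.array([10.0, -5.0, 2.0])

R, t = rigid_register(image_pts, world_pts)
fre = np.linalg.norm(image_pts @ R.T + t - world_pts, axis=1).mean()
print(f"mean fiducial registration error: {fre:.2e} mm")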

    Augmented Reality and Intraoperative C-arm Cone-Beam Computed Tomography for Image-Guided Robotic Surgery

    Minimally invasive robotic-assisted surgery is a rapidly growing alternative to traditional open and laparoscopic procedures; nevertheless, challenges remain. The standard of care derives surgical strategies from preoperative volumetric data (i.e., computed tomography (CT) and magnetic resonance (MR) images) that benefit from the ability of multiple modalities to delineate different anatomical boundaries. However, preoperative images may not reflect a possibly highly deformed perioperative setup or intraoperative deformation. Additionally, in current clinical practice, the correspondence of preoperative plans to the surgical scene is established as a mental exercise; thus, the accuracy of this practice is highly dependent on the surgeon's experience and therefore subject to inconsistencies. To address these fundamental limitations in minimally invasive robotic surgery, this dissertation combines a high-end robotic C-arm imaging system and a modern robotic surgical platform into an integrated intraoperative image-guided system. We performed deformable registration of preoperative plans to a perioperative cone-beam computed tomography (CBCT) scan, acquired after the patient is positioned for intervention. From the registered surgical plans, we overlaid critical information onto the primary intraoperative visual source, the robotic endoscope, using augmented reality. Guidance afforded by this system not only uses augmented reality to fuse virtual medical information, but also provides tool localization and other dynamically updated intraoperative information in order to present enhanced depth feedback to the surgeon. These techniques in guided robotic surgery required a streamlined approach to creating intuitive and effective human-machine interfaces, especially in visualization. Our software design principles create an inherently information-driven modular architecture incorporating robotics and intraoperative imaging through augmented reality. The system's performance is evaluated using phantoms and preclinical in-vivo experiments for multiple applications, including transoral robotic surgery, robot-assisted thoracic interventions, and cochleostomy for cochlear implantation. The resulting functionality, proposed architecture, and implemented methodologies can be further generalized to other C-arm-based image guidance for additional extensions in robotic surgery.
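
    The overlay step described above ultimately comes down to projecting registered 3D plan points into the endoscopic video through the camera calibration. A minimal sketch of that projection with a hypothetical pinhole model and made-up intrinsics and extrinsics (not the calibration of any particular endoscope):

import numpy as np

# Hypothetical pinhole intrinsics for an endoscopic camera (pixels).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Assumed extrinsics: rotation and translation taking CBCT coordinates (mm)
# into the camera frame, e.g. the result of registration plus tool tracking.
R = np.eye(3)
t = np.array([0.0, 0.0, 150.0])   # camera 150 mm from the anatomy along its optical axis

def project(points_ct):
    """Project N x 3 CBCT-space points (mm) to N x 2 pixel coordinates."""
    cam = points_ct @ R.T + t     # into the camera frame
    uvw = cam @ K.T               # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide

# A registered plan point (e.g., a tumour margin vertex) overlaid on the video frame.
print(project(np.array([[5.0, -3.0, 0.0]])))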

    Augmented navigation

    Spinal fixation procedures carry the inherent risk of damaging vulnerable anatomical structures such as the spinal cord, nerve roots, and blood vessels. To prevent complications, several technological aids have been introduced. Surgical navigation is the most widely used; it guides the surgeon by providing the position of the surgical instruments and implants in relation to the patient anatomy based on radiographic images. Navigation can be extended by the addition of a robotic arm to replace the surgeon's hand and increase accuracy. Another line of surgical aids is tissue-sensing equipment that recognizes different tissue types and provides a warning system built into surgical instruments. All these technologies are under continuous development, and the optimal solution is yet to be found. The aim of this thesis was to study the use of Augmented Reality (AR), Virtual Reality (VR), Artificial Intelligence (AI), and tissue-sensing technology in spinal navigation to improve precision and prevent surgical errors. The aim of Paper I was to develop and validate an algorithm for automating the intraoperative planning of pedicle screws. An AI algorithm for automatic segmentation of the spine and screw path suggestion was developed and evaluated. In a clinical study of advanced deformity cases, the algorithm provided correct suggestions for 86% of all pedicles, or 95% when cases with extremely altered anatomy were excluded. Paper II evaluated the accuracy of pedicle screw placement using a novel augmented reality surgical navigation (ARSN) system harboring the above-developed algorithm. Twenty consecutively enrolled patients, eligible for deformity correction surgery in the thoracolumbar region, were operated on using the ARSN system. In this cohort, we found a pedicle screw placement accuracy of 94%, as measured according to the Gertzbein grading scale. The primary goal of Paper III was to validate an extension of the ARSN system for placing pedicle screws using instrument tracking and VR. In a porcine cadaver model, it was demonstrated that VR instrument tracking could successfully be integrated with the ARSN system, resulting in pedicle devices placed within 1.7 ± 1.0 mm of the planned path. Paper IV examined the feasibility of a robot-guided system for semi-automated, minimally invasive pedicle screw placement in a cadaveric model. Using the robotic arm, pedicle devices were placed within 0.94 ± 0.59 mm of the planned path. The use of a semi-automated surgical robot was feasible, providing higher technical accuracy than non-robotic solutions. Paper V investigated the use of a tissue-sensing technology, diffuse reflectance spectroscopy (DRS), for detecting the cortical bone boundary in vertebrae during pedicle screw insertion. The technology could accurately differentiate between cancellous and cortical bone and warn the surgeon before a cortical breach. Using machine learning models, the technology demonstrated a sensitivity of 98% [range: 94-100%] and a specificity of 98% [range: 91-100%]. In conclusion, several technological aids can be used to improve accuracy during spinal fixation procedures. In this thesis, the advantages of adding AR, VR, AI, and tissue-sensing technology to conventional navigation solutions were studied.
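
    The breach-warning performance reported for Paper V is summarized by sensitivity and specificity. As a reminder of how these figures follow from a classifier's confusion matrix, here is a minimal sketch with made-up counts (not the study's data):

# Hypothetical confusion-matrix counts for a cortical-breach classifier:
# "positive" = imminent cortical breach flagged by the DRS model.
tp, fn = 49, 1    # breaches correctly flagged vs. missed
tn, fp = 89, 2    # safe cancellous readings correctly passed vs. false alarms

sensitivity = tp / (tp + fn)   # proportion of true breaches that are flagged
specificity = tn / (tn + fp)   # proportion of safe readings that raise no alarm

print(f"sensitivity = {sensitivity:.2%}, specificity = {specificity:.2%}")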

    Development of a Surgical Assistance System for Guiding Transcatheter Aortic Valve Implantation

    The development of image-guided interventional systems has grown rapidly in recent years. These systems are becoming an essential part of modern minimally invasive surgical procedures, especially in cardiac surgery. Transcatheter aortic valve implantation (TAVI) is a recently developed surgical technique to treat severe aortic valve stenosis in elderly and high-risk patients. The placement of the stented aortic valve prosthesis is crucial and is typically performed under live 2D fluoroscopy guidance. To assist the placement of the prosthesis during the surgical procedure, a new fluoroscopy-based TAVI assistance system has been developed. The assistance system integrates a 3D geometrical aortic mesh model and anatomical valve landmarks with live 2D fluoroscopic images. The 3D aortic mesh model and landmarks are reconstructed from an interventional angiographic and fluoroscopic C-arm CT system, and a target area of valve implantation is automatically estimated from these aortic mesh models. Based on a template-based tracking approach, the overlay of the visualized 3D aortic mesh model, landmarks and target area of implantation onto fluoroscopic images is updated by approximating the aortic root motion from the motion of a pigtail catheter without contrast agent. A rigid intensity-based registration method is also used to continuously track the aortic root motion in the presence of contrast agent. Moreover, the aortic valve prosthesis is tracked in the fluoroscopic images to guide the surgeon toward appropriate placement of the prosthesis into the estimated target area of implantation. An interactive graphical user interface for the surgeon was developed to initialize the system algorithms, control the visualization of the guidance results, and manually correct overlay errors if needed. Retrospective experiments were carried out on several patient datasets from the clinical routine of TAVI in a hybrid operating room. The maximum displacement errors were small for both the dynamic overlay of the aortic mesh models and the tracking of the prosthesis, and lay within clinically accepted ranges. High success rates of the developed assistance system were obtained for all tested patient datasets. The results show that the developed surgical assistance system provides a helpful tool for the surgeon by automatically defining the desired placement position of the prosthesis during the TAVI procedure.
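
    The pigtail-catheter tracking used to approximate aortic root motion is, at its core, a template-matching problem between successive fluoroscopic frames. A minimal sketch of that idea with OpenCV's normalized cross-correlation, using a synthetic frame and template rather than the system's actual tracker:

import cv2
import numpy as np

# Synthetic fluoroscopic frame: noisy background with a bright blob standing in
# for the pigtail catheter tip.
rng = np.random.default_rng(0)
frame = rng.integers(0, 40, (480, 640), dtype=np.uint8)
cv2.circle(frame, (300, 220), 8, 255, -1)

# Template cut from a previous frame around the last known catheter position.
template = frame[210:231, 290:311].copy()

# Normalized cross-correlation; the peak gives the catheter's new location, and
# its frame-to-frame displacement approximates the aortic root motion used to
# shift the overlaid mesh model.
result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
center = (max_loc[0] + template.shape[1] // 2, max_loc[1] + template.shape[0] // 2)
print(f"match score {max_val:.2f} at catheter centre {center}")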

    Imaging skins: stretchable and conformable on-organ beta particle detectors for radioguided surgery

    While radioguided surgery (RGS) has traditionally relied on detecting gamma rays, direct detection of beta particles could facilitate intraoperative detection of tumour margins by reducing radiation noise emanating from distant organs, thereby improving the signal-to-noise ratio of the imaging technique. In addition, most existing beta detectors do not offer surface sensing or imaging capabilities. Therefore, we explore the concept of a stretchable scintillator for detecting beta-particle-emitting radiotracers that would be deployed directly on the targeted organ. Such detectors, which we refer to as imaging skins, would work as indirect radiation detectors made of light-emitting agents and biocompatible stretchable material. Our vision is to detect the scintillation using standard endoscopes routinely employed in minimally invasive surgery. Moreover, surgical robotic systems would ideally be used to apply the imaging skins, allowing for precise control of each component and thereby improving positioning and task repeatability. While still in the exploratory stages, this innovative approach has the potential to improve the detection of tumour margins during RGS by enabling real-time imaging, ultimately improving surgical outcomes.

    Medical Robotics

    The first generation of surgical robots is already being installed in a number of operating rooms around the world. Robotics is being introduced to medicine because it allows for unprecedented control and precision of surgical instruments in minimally invasive procedures. So far, robots have been used to position an endoscope, perform gallbladder surgery and correct gastroesophageal reflux and heartburn. The ultimate goal of the robotic surgery field is to design a robot that can be used to perform closed-chest, beating-heart surgery. The use of robotics in surgery will undoubtedly expand over the coming decades. Minimally Invasive Surgery (MIS) is a revolutionary approach in surgery. In MIS, the operation is performed with instruments and viewing equipment inserted into the body through small incisions created by the surgeon, in contrast to open surgery with large incisions. This minimizes surgical trauma and damage to healthy tissue, resulting in shorter patient recovery times. The aim of this book is to provide an overview of the state of the art and to present new ideas, original results and practical experiences in this expanding area. In addition, many chapters in the book concern advanced research in this growing area. The book provides critical analysis of clinical trials and assessment of the benefits and risks of the application of these technologies. This book is certainly a small sample of the research activity on Medical Robotics going on around the globe as you read it, but it surely covers a good deal of what has been done in the field recently, and as such it works as a valuable source for researchers interested in the involved subjects, whether they are currently “medical roboticists” or not.

    Advanced Endoscopic Navigation: Surgical Big Data, Methodology, and Applications

    Interventional endoscopy (e.g., bronchoscopy, colonoscopy, laparoscopy, cystoscopy) is a widely performed procedure that involves either diagnosis of suspicious lesions or guidance for minimally invasive surgery in a variety of organs within the body cavity. Endoscopy may also be used to guide the introduction of certain items (e.g., stents) into the body. Endoscopic navigation systems seek to integrate big data with multimodal information (e.g., computed tomography, magnetic resonance images, endoscopic video sequences, ultrasound images, external trackers) relative to the patient's anatomy, control the movement of medical endoscopes and surgical tools, and guide the surgeon's actions during endoscopic interventions. Nevertheless, it remains challenging to realize the next generation of context-aware navigated endoscopy. This review presents a broad survey of various aspects of endoscopic navigation, particularly with respect to the development of endoscopic navigation techniques. First, we investigate the big data with multimodal information involved in endoscopic navigation. Next, we focus on the numerous methodologies used for endoscopic navigation. We then review different endoscopic procedures in clinical applications. Finally, we discuss novel techniques and promising directions for the development of endoscopic navigation. X.L. acknowledges funding from the Fundamental Research Funds for the Central Universities. T.M.P. acknowledges funding from the Canadian Foundation for Innovation, the Canadian Institutes for Health Research, the Natural Sciences and Engineering Research Council of Canada, and a grant from Intuitive Surgical Inc.
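
    In any such navigation system, presenting tools relative to the patient's anatomy means chaining rigid transforms: a tracked pose is taken from the tracker frame into the patient frame and then into the preoperative image frame. A minimal sketch with hypothetical 4 x 4 homogeneous transforms (illustrative values only, not a specific tracker's API):

import numpy as np

def make_transform(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Hypothetical calibration/registration results:
T_patient_from_tracker = make_transform(np.eye(3), [5.0, -2.0, 30.0])      # tracker -> patient reference
T_image_from_patient = make_transform(np.eye(3), [-120.0, -95.0, 40.0])    # patient -> CT image

# Tool tip reported by the tracker in its own frame (homogeneous coordinates, mm).
tip_tracker = np.array([12.0, 7.5, 250.0, 1.0])

# Compose the chain image <- patient <- tracker, then apply it to the tip.
tip_image = T_image_from_patient @ T_patient_from_tracker @ tip_tracker
print("tool tip in CT image coordinates (mm):", tip_image[:3])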

    A Review on Advances in Intra-operative Imaging for Surgery and Therapy: Imagining the Operating Room of the Future

    Zaffino, Paolo; Moccia, Sara; De Momi, Elena; Spadea, Maria Francesca

    Virtual and Augmented Reality Techniques for Minimally Invasive Cardiac Interventions: Concept, Design, Evaluation and Pre-clinical Implementation

    While less invasive techniques have been employed for some procedures, most intracardiac interventions are still performed under cardiopulmonary bypass on the drained, arrested heart. Progress toward off-pump intracardiac interventions has been hampered by the lack of adequate visualization inside the beating heart. This thesis describes the development, assessment, and pre-clinical implementation of a mixed reality environment that integrates pre-operative imaging and modeling with surgical tracking technologies and real-time ultrasound imaging. The intra-operative echo images are augmented with pre-operative representations of the cardiac anatomy and with virtual models of the delivery instruments tracked in real time using magnetic tracking technologies. As a result, the otherwise context-less images can be interpreted within the anatomical context provided by the anatomical models. The virtual models assist the user with tool-to-target navigation, while real-time ultrasound ensures accurate positioning of the tool on the target, providing the surgeon with sufficient information to “see” and manipulate instruments in the absence of direct vision. Several pre-clinical acute evaluation studies were conducted in vivo on swine models to assess the feasibility of the proposed environment in a clinical context. Following direct access inside the beating heart using the UCI, the proposed mixed reality environment was used to provide the visualization and navigation needed to position a prosthetic mitral valve on the native annulus, or to place a repair patch on a created septal defect, in vivo in porcine models. Following further development and seamless integration into the clinical workflow, we hope that the proposed mixed reality guidance environment may become a significant milestone toward enabling minimally invasive therapy on the beating heart.
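
    Placing a magnetically tracked instrument into the live echo image hinges on one extra calibration: the transform from the probe's tracking sensor to the ultrasound image plane. A minimal sketch of that transform chain with hypothetical, identity-rotation poses (not the system's actual calibration):

import numpy as np

def hom(R, t):
    """4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Hypothetical poses reported by the magnetic tracker (tracker frame, mm).
T_tracker_from_tool = hom(np.eye(3), [40.0, 10.0, 200.0])    # delivery instrument sensor
T_tracker_from_probe = hom(np.eye(3), [35.0, 12.0, 195.0])   # ultrasound probe sensor

# Assumed probe calibration: maps probe-sensor coordinates to ultrasound image coordinates.
T_image_from_probe = hom(np.eye(3), [0.0, 0.0, -20.0])

# Express the instrument tip (defined in its own sensor frame) in the echo image frame.
tip_tool = np.array([0.0, 0.0, 50.0, 1.0])
tip_image = T_image_from_probe @ np.linalg.inv(T_tracker_from_probe) @ T_tracker_from_tool @ tip_tool
print("instrument tip in ultrasound image coordinates (mm):", tip_image[:3])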