
    Fusion of interventional ultrasound & X-ray

    Today, in an ageing population, the treatment of structural heart disease is becoming more and more important. Constant improvements in medical imaging technology and the introduction of new catheter devices have led to a trend of replacing conventional open-heart surgery with minimally invasive interventions. These advanced interventions need to be guided by different medical imaging modalities; the two main imaging systems here are X-ray fluoroscopy and transesophageal echocardiography (TEE). While X-ray provides a good visualization of inserted catheters, which is essential for catheter navigation, TEE can display soft tissue, especially anatomical structures such as heart valves. Both modalities provide real-time imaging and are necessary to lead minimally invasive heart surgery to success. Usually, the two systems are detached and not connected.
It is conceivable that a fusion of both worlds can create a strong benefit for the physicians: it can lead to better communication within the clinical team and may enable new surgical workflows. Because of the completely different characteristics of the image data, a direct fusion seems to be impossible. Therefore, an indirect registration of ultrasound and X-ray images is used. The TEE probe is usually visible in the X-ray image during the described minimally invasive interventions. Thereby, it becomes possible to register the TEE probe in the fluoroscopic images and to establish its 3D position. The relationship of the ultrasound image to the ultrasound probe is known by calibration. To register the TEE probe on 2D X-ray images, a 2D-3D registration approach is chosen in this thesis. Several contributions are presented that improve the common 2D-3D registration algorithm, both for the task of ultrasound and X-ray fusion and for general 2D-3D registration problems. One presented approach is the introduction of planar parameters, which increase robustness and speed during the registration of an object on two non-orthogonal views. Another approach replaces the conventional generation of digitally reconstructed radiographs, which is an integral part of 2D-3D registration but also a performance bottleneck, with fast triangular mesh rendering, resulting in a significant speed-up. It is also shown that combining fast learning-based detection algorithms with 2D-3D registration increases the accuracy and the capture range compared to employing either technique alone for the registration/detection of the TEE probe. Finally, a first clinical prototype employing the presented approaches is described and first clinical results are shown.
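The 2D-3D registration described above can be sketched, under simplifying assumptions, as a pose optimization that minimizes the discrepancy between the projection of a 3D probe model and its observed appearance in the X-ray image. The point-based cost below is only a stand-in for the intensity comparison against digitally reconstructed radiographs or rendered meshes; the probe model, poses, and focal length are all hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

# Hypothetical 3D model of the TEE probe head (here: a random point cloud).
rng = np.random.default_rng(0)
model = rng.normal(size=(50, 3))

def project(points, pose, focal=1000.0):
    """Pinhole projection of 3D points under a 6-DoF pose.
    pose = (rx, ry, rz, tx, ty, tz); rotations in radians, translation in mm."""
    R = Rotation.from_euler("xyz", pose[:3]).as_matrix()
    p = points @ R.T + pose[3:]
    return focal * p[:, :2] / p[:, 2:3]   # perspective divide

# Simulate an observed fluoroscopic detection of the probe at a "true" pose.
true_pose = np.array([0.1, -0.05, 0.2, 5.0, -3.0, 800.0])
observed = project(model, true_pose)

def cost(pose):
    # Mean squared reprojection error; an intensity-based method would
    # instead compare a rendered DRR or mesh to the X-ray image here.
    return np.mean((project(model, pose) - observed) ** 2)

init = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 780.0])
result = minimize(cost, init, method="Nelder-Mead",
                  options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-12})
```

An intensity-based variant would replace the point cost with a similarity measure (e.g. normalized cross-correlation) between the fluoroscopic image and the rendered projection, which is where the fast mesh rendering described above pays off.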

    Augmented Image-Guidance for Transcatheter Aortic Valve Implantation

    The introduction of transcatheter aortic valve implantation (TAVI), an innovative stent-based technique for delivery of a bioprosthetic valve, has resulted in a paradigm shift in treatment options for elderly patients with aortic stenosis. While there have been major advancements in valve design and access routes, TAVI still relies largely on single-plane fluoroscopy for intraoperative navigation and guidance, which provides only gross imaging of anatomical structures. Inadequate imaging leading to suboptimal valve positioning contributes to many of the early complications experienced by TAVI patients, including valve embolism, coronary ostia obstruction, paravalvular leak, heart block, and secondary nephrotoxicity from contrast use. A potential method of providing improved image-guidance for TAVI is to combine the information derived from intra-operative fluoroscopy and TEE with pre-operative CT data. This would allow the 3D anatomy of the aortic root to be visualized along with real-time information about valve and prosthesis motion. The combined information can be visualized as a 'merged' image, where the different imaging modalities are overlaid upon each other, or as an 'augmented' image, where the location of key target features identified on one image is displayed on a different imaging modality. This research develops image registration techniques to bring fluoroscopy, TEE, and CT models into a common coordinate frame with an image processing workflow that is compatible with the TAVI procedure. The techniques are designed to be fast enough to allow for real-time image fusion and visualization during the procedure, with an intra-procedural set-up requiring only a few minutes. TEE to fluoroscopy registration was achieved using a single-perspective TEE probe pose estimation technique. 
The alignment of CT and TEE images was achieved using custom-designed algorithms to extract aortic root contours from XPlane TEE images, and matching the shape of these contours to a CT-derived surface model. Registration accuracy was assessed on porcine and human images by identifying targets (such as guidewires or coronary ostia) on the different imaging modalities and measuring the correspondence of these targets after registration. The merged images demonstrated good visual alignment of aortic root structures, and quantitative assessment measured an accuracy of less than 1.5 mm error for TEE-fluoroscopy registration and less than 6 mm error for CT-TEE registration. These results suggest that the image processing techniques presented have potential for development into a clinical tool to guide TAVI. Such a tool could potentially reduce TAVI complications, reducing morbidity and mortality and allowing for a safer procedure.
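The accuracy assessment described above reduces to measuring the residual distance between corresponding targets after registration. A minimal sketch, using hypothetical landmark coordinates in millimetres:

```python
import numpy as np

# Hypothetical corresponding target landmarks (e.g., coronary ostia) identified
# in two modalities after registration, in millimetres.
targets_tee = np.array([[12.1, 30.4, 5.2],
                        [-8.3, 22.0, 1.1],
                        [4.0, -15.6, 9.9]])
targets_fluoro = targets_tee + np.array([[0.4, -0.2, 0.1],
                                         [0.1, 0.3, -0.2],
                                         [-0.3, 0.2, 0.4]])

def rms_target_error(a, b):
    """Root-mean-square distance between corresponding target points."""
    d = np.linalg.norm(a - b, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

err = rms_target_error(targets_tee, targets_fluoro)
```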

    INTERFACE DESIGN FOR A VIRTUAL REALITY-ENHANCED IMAGE-GUIDED SURGERY PLATFORM USING SURGEON-CONTROLLED VIEWING TECHNIQUES

    Initiative has been taken to develop a VR-guided cardiac interface that will display and deliver information without affecting the surgeons’ natural workflow while yielding better accuracy and task completion time than the existing setup. This paper discusses the design process, the development of comparable user interface prototypes as well as an evaluation methodology that can measure user performance and workload for each of the suggested display concepts. User-based studies and expert recommendations are used in conjunction to establish design guidelines for our VR-guided surgical platform. As a result, a better understanding of autonomous view control, depth display, and use of virtual context is attained. In addition, three proposed interfaces have been developed to allow a surgeon to control the view of the virtual environment intra-operatively. Comparative evaluation of the three implemented interface prototypes in a simulated surgical task scenario revealed performance advantages for stereoscopic and monoscopic biplanar display conditions, as well as the differences between three types of control modalities. One particular interface prototype demonstrated significant improvement in task performance. Design recommendations are made for this interface as well as the others as we prepare for prospective development iterations.

    REAL-TIME 4D ULTRASOUND RECONSTRUCTION FOR IMAGE-GUIDED INTRACARDIAC INTERVENTIONS

    Image-guided therapy addresses the lack of direct vision associated with minimally invasive interventions performed on the beating heart, but requires effective intraoperative imaging. Gated 4D ultrasound reconstruction using a tracked 2D probe generates a time-series of 3D images representing the beating heart over the cardiac cycle. These images have a relatively high spatial resolution and wide field of view, and ultrasound is easily integrated into the intraoperative environment. This thesis presents a real-time 4D ultrasound reconstruction system incorporated within an augmented reality environment for surgical guidance, whose incremental visualization reduces common acquisition errors. The resulting 4D ultrasound datasets are intended for visualization or registration to preoperative images. A human factors experiment demonstrates the advantages of real-time ultrasound reconstruction, and accuracy assessments performed both with a dynamic phantom and intraoperatively reveal RMS localization errors of 2.5-2.7 mm, and 0.8 mm, respectively. Finally, clinical applicability is demonstrated by both porcine and patient imaging.
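The gated reconstruction described above can be sketched as binning tracked 2D frames by their position in the cardiac cycle; the frame data, timestamps, cycle length, and bin count below are hypothetical placeholders (a real system would derive the phase from the ECG signal rather than from timestamps):

```python
import numpy as np

# Hypothetical stream of tracked 2D frames, each with an acquisition time;
# the cardiac phase is derived from an assumed 1 s cardiac cycle.
n_frames, n_bins, cycle = 200, 10, 1.0
times = np.sort(np.random.default_rng(1).uniform(0, 20, n_frames))
frames = [np.zeros((4, 4)) for _ in times]        # placeholder image data

phases = (times % cycle) / cycle                  # 0..1 position in the cycle
bins = np.minimum((phases * n_bins).astype(int), n_bins - 1)

# Each bin collects the frames belonging to one phase of the beating heart;
# compounding the tracked frames in a bin yields one 3D volume per phase,
# and the bins together form the 4D (3D + time) dataset.
gated = {b: [f for f, k in zip(frames, bins) if k == b] for b in range(n_bins)}
```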

    Virtual and Augmented Reality Techniques for Minimally Invasive Cardiac Interventions: Concept, Design, Evaluation and Pre-clinical Implementation

    While less invasive techniques have been employed for some procedures, most intracardiac interventions are still performed under cardiopulmonary bypass, on the drained, arrested heart. The progress toward off-pump intracardiac interventions has been hampered by the lack of adequate visualization inside the beating heart. This thesis describes the development, assessment, and pre-clinical implementation of a mixed reality environment that integrates pre-operative imaging and modeling with surgical tracking technologies and real-time ultrasound imaging. The intra-operative echo images are augmented with pre-operative representations of the cardiac anatomy and virtual models of the delivery instruments tracked in real time using magnetic tracking technologies. As a result, the otherwise context-less images can now be interpreted within the anatomical context provided by the anatomical models. The virtual models assist the user with the tool-to-target navigation, while real-time ultrasound ensures accurate positioning of the tool on target, providing the surgeon with sufficient information to "see" and manipulate instruments in absence of direct vision. Several pre-clinical acute evaluation studies have been conducted in vivo on swine models to assess the feasibility of the proposed environment in a clinical context. Following direct access inside the beating heart using the UCI, the proposed mixed reality environment was used to provide the necessary visualization and navigation to position a prosthetic mitral valve on the native annulus, or to place a repair patch on a created septal defect in vivo in porcine models. Following further development and seamless integration into the clinical workflow, we hope that the proposed mixed reality guidance environment may become a significant milestone toward enabling minimally invasive therapy on the beating heart.

    Development of a Surgical Assistance System for Guiding Transcatheter Aortic Valve Implantation

    The development of image-guided interventional systems has grown rapidly in recent years. These new systems are becoming an essential part of modern minimally invasive surgical procedures, especially in cardiac surgery. Transcatheter aortic valve implantation (TAVI) is a recently developed surgical technique to treat severe aortic valve stenosis in elderly and high-risk patients. The placement of the stented aortic valve prosthesis is crucial and typically performed under live 2D fluoroscopy guidance. To assist the placement of the prosthesis during the surgical procedure, a new fluoroscopy-based TAVI assistance system has been developed. The developed assistance system integrates a 3D geometrical aortic mesh model and anatomical valve landmarks with live 2D fluoroscopic images. The 3D aortic mesh model and landmarks are reconstructed from an interventional angiographic and fluoroscopic C-arm CT system, and a target area of valve implantation is automatically estimated using these aortic mesh models. Based on a template-based tracking approach, the overlay of the visualized 3D aortic mesh model, landmarks, and target area of implantation onto fluoroscopic images is updated by approximating the aortic root motion from the motion of a pigtail catheter without contrast agent. A rigid intensity-based registration method is also used to continuously track the aortic root motion in the presence of contrast agent. Moreover, the aortic valve prosthesis is tracked in fluoroscopic images to guide the surgeon in the appropriate placement of the prosthesis into the estimated target area of implantation. An interactive graphical user interface for the surgeon was developed to initialize the system algorithms, control the visualization view of the guidance results, and manually correct overlay errors if needed. Retrospective experiments were carried out on several patient datasets from the clinical routine of TAVI in a hybrid operating room. 
The maximum displacement errors were small for both the dynamic overlay of the aortic mesh models and the tracking of the prosthesis, and within clinically accepted ranges. High success rates of the developed assistance system were obtained for all tested patient datasets. The results show that the developed surgical assistance system provides a helpful tool for the surgeon by automatically defining the desired placement position of the prosthesis during the TAVI procedure.
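The template-based tracking of the pigtail catheter can be illustrated, under simplifying assumptions, by an exhaustive normalized cross-correlation search around the catheter's previous position; the synthetic frame and template below are stand-ins for real fluoroscopic data:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def track(frame, template, prev_pos, search=5):
    """Exhaustive template search in a window around the previous position."""
    th, tw = template.shape
    best, best_pos = -2.0, prev_pos
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = prev_pos[0] + dy, prev_pos[1] + dx
            if 0 <= y and 0 <= x and y + th <= frame.shape[0] and x + tw <= frame.shape[1]:
                score = ncc(frame[y:y + th, x:x + tw], template)
                if score > best:
                    best, best_pos = score, (y, x)
    return best_pos, best

# Synthetic frame with a bright blob as a stand-in for the catheter tip;
# the template is cut so it contains the blob plus some dark margin.
frame = np.zeros((64, 64))
frame[30:35, 40:45] = 1.0
template = frame[28:37, 38:47].copy()
pos, score = track(frame, template, prev_pos=(25, 35))
```

A clinical system would update the template over time and fall back to intensity-based registration when contrast agent obscures the catheter, as the abstract describes.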

    Enhanced Ultrasound Visualization for Procedure Guidance

    Intra-cardiac procedures often involve fast-moving anatomic structures with large spatial extent and high geometrical complexity. Real-time visualization of the moving structures and instrument-tissue contact is crucial to the success of these procedures. Real-time 3D ultrasound is a promising modality for procedure guidance as it offers improved spatial orientation information relative to 2D ultrasound. Imaging rates at 30 fps enable good visualization of instrument-tissue interactions, far faster than the volumetric imaging alternatives (MR/CT). Unlike fluoroscopy, 3D ultrasound also allows better contrast of soft tissues, and avoids the use of ionizing radiation.

    Exploiting Temporal Image Information in Minimally Invasive Surgery

    Minimally invasive procedures rely on medical imaging instead of the surgeon's direct vision. While preoperative images can be used for surgical planning and navigation, once the surgeon arrives at the target site, real-time intraoperative imaging is needed. However, acquiring and interpreting these images can be challenging and much of the rich temporal information present in these images is not visible. The goal of this thesis is to improve image guidance for minimally invasive surgery in two main areas. First, by showing how high-quality ultrasound video can be obtained by integrating an ultrasound transducer directly into delivery devices for beating heart valve surgery. Secondly, by extracting hidden temporal information through video processing methods to help the surgeon localize important anatomical structures. Prototypes of delivery tools, with integrated ultrasound imaging, were developed for both transcatheter aortic valve implantation and mitral valve repair. These tools provided an on-site view that shows the tool-tissue interactions during valve repair. Additionally, augmented reality environments were used to add more anatomical context that aids in navigation and in interpreting the on-site video. Other procedures can be improved by extracting hidden temporal information from the intraoperative video. In ultrasound-guided epidural injections, dural pulsation provides a cue in finding a clear trajectory to the epidural space. By processing the video using extended Kalman filtering, subtle pulsations were automatically detected and visualized in real-time. A statistical framework for analyzing periodicity was developed based on dynamic linear modelling. In addition to detecting dural pulsation in lumbar spine ultrasound, this approach was used to image tissue perfusion in natural video and generate ventilation maps from free-breathing magnetic resonance imaging. 
A second statistical method, based on spectral analysis of pixel intensity values, allowed blood flow to be detected directly from high-frequency B-mode ultrasound video. Finally, pulsatile cues in endoscopic video were enhanced through Eulerian video magnification to help localize critical vasculature. This approach shows particular promise in identifying the basilar artery in endoscopic third ventriculostomy and the prostatic artery in nerve-sparing prostatectomy. A real-time implementation was developed which processed full-resolution stereoscopic video on the da Vinci Surgical System.
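The Eulerian enhancement of pulsatile cues rests on temporally band-pass filtering each pixel's intensity around plausible heart rates and amplifying the filtered component. A minimal single-pixel sketch on a synthetic signal (the frame rate, frequency band, and magnification factor are assumptions for illustration, not values from the thesis):

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Synthetic pixel time series: slow drift + a 1.2 Hz "cardiac" pulsation +
# noise, sampled at an assumed 30 fps endoscopic frame rate.
fs, f_heart = 30.0, 1.2
t = np.arange(0, 10, 1 / fs)
signal = (0.5 * t
          + 0.05 * np.sin(2 * np.pi * f_heart * t)
          + 0.01 * np.random.default_rng(2).normal(size=t.size))

# Band-pass around plausible heart rates (0.8-3 Hz); amplifying the filtered
# component and adding it back is the core of Eulerian video magnification.
b, a = butter(2, [0.8 / (fs / 2), 3.0 / (fs / 2)], btype="band")
pulsation = filtfilt(b, a, signal)
alpha = 10.0                      # assumed magnification factor
magnified = signal + alpha * pulsation
```

In the full method this filter runs on every pixel (or on spatial pyramid levels) of the video, so subtle vessel pulsation becomes visible to the surgeon.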

    DYNAMIC MEASUREMENT OF THREE-DIMENSIONAL MOTION FROM SINGLE-PERSPECTIVE TWO-DIMENSIONAL RADIOGRAPHIC PROJECTIONS

    The digital evolution of the x-ray imaging modality has spurred the development of numerous clinical and research tools. This work focuses on the design, development, and validation of dynamic radiographic imaging and registration techniques to address two distinct medical applications: tracking during image-guided interventions, and the measurement of musculoskeletal joint kinematics. Fluoroscopy is widely employed to provide intra-procedural image-guidance. However, its planar images provide limited information about the location of surgical tools and targets in three-dimensional space. To address this limitation, registration techniques, which extract three-dimensional tracking and image-guidance information from planar images, were developed and validated in vitro. The ability to accurately measure joint kinematics in vivo is an important tool in studying both normal joint function and pathologies associated with injury and disease; however, it still remains a clinical challenge. A technique to measure joint kinematics from single-perspective x-ray projections was developed and validated in vitro, using clinically available radiography equipment.
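Once the pose of each bone has been recovered from the radiographic projections, joint kinematics follow from the relative transform between the bone coordinate frames. A sketch with hypothetical femur and tibia poses (the bone names, pose values, and Euler convention are illustrative assumptions):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def make_pose(euler_deg, translation):
    """Build a 4x4 rigid transform from Euler angles (deg) and a translation."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("xyz", euler_deg, degrees=True).as_matrix()
    T[:3, 3] = translation
    return T

def relative_pose(T_parent, T_child):
    """Pose of the child bone expressed in the parent bone's frame."""
    return np.linalg.inv(T_parent) @ T_child

# Hypothetical bone poses as they might be recovered by 2D-3D registration.
T_femur = make_pose([0, 0, 0], [0, 0, 500])
T_tibia = make_pose([30, 0, 0], [0, -50, 500])   # 30 deg flexion about x

T_rel = relative_pose(T_femur, T_tibia)
flexion = Rotation.from_matrix(T_rel[:3, :3]).as_euler("xyz", degrees=True)[0]
```

Repeating this for each frame of a dynamic radiographic sequence yields joint angle curves over the motion cycle.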