
    Fusion of interventional ultrasound & X-ray

    In an aging population, the treatment of structural heart disease is becoming increasingly important. Constant improvements in medical imaging technology and the introduction of new catheter devices have led to a trend of replacing conventional open-heart surgery with minimally invasive interventions. These advanced interventions need to be guided by different medical imaging modalities; the two main imaging systems here are X-ray fluoroscopy and transesophageal echocardiography (TEE). While X-ray provides a good visualization of inserted catheters, which is essential for catheter navigation, TEE can display soft tissue, especially anatomical structures such as heart valves. Both modalities provide real-time imaging and are necessary for successful minimally invasive heart surgery. Usually, the two systems are stand-alone and not connected. It is conceivable that a fusion of both worlds can create a strong benefit for physicians: it can lead to better communication within the clinical team and may enable new surgical workflows. Because of the completely different characteristics of the image data, a direct fusion appears infeasible. Therefore, an indirect registration of ultrasound and X-ray images is used. The TEE probe is usually visible in the X-ray image during these minimally invasive interventions, so it becomes possible to register the TEE probe in the fluoroscopic images and to derive its 3D position. The relationship of the ultrasound image to the ultrasound probe is known from calibration. To register the TEE probe on 2D X-ray images, a 2D-3D registration approach is chosen in this thesis. Several contributions are presented that improve the common 2D-3D registration algorithm, not only for the task of ultrasound and X-ray fusion but also for general 2D-3D registration problems. One contribution is the introduction of planar parameters, which increase robustness and speed when registering an object on two non-orthogonal views. Another is to replace the conventional generation of digitally reconstructed radiographs, an integral part of 2D-3D registration but also a performance bottleneck, with fast triangular mesh rendering, which results in a significant speed-up. It is also shown that combining fast learning-based detection algorithms with 2D-3D registration increases the accuracy and the capture range compared to employing either technique alone for the detection/registration of a TEE probe. Finally, a first clinical prototype employing the presented approaches is described, and first clinical results are shown.
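
    As a rough illustration of the 2D-3D registration loop described above, the following Python sketch optimizes a six-degree-of-freedom probe pose so that a simulated projection matches the fluoroscopy image. It is a minimal sketch under stated assumptions, not the thesis implementation: `render_drr` is a hypothetical callable that would wrap a ray caster or, as proposed above, a fast triangular-mesh renderer, and normalized cross-correlation stands in for whatever similarity measure is actually used.

```python
# Minimal sketch of intensity-based 2D-3D registration: optimize a
# 6-DoF pose so a simulated projection of the probe model matches the
# fluoroscopy image. All names are illustrative.
import numpy as np
from scipy.optimize import minimize

def normalized_cross_correlation(a, b):
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def register_pose(fluoro_image, render_drr, pose0):
    """pose = (tx, ty, tz, rx, ry, rz); render_drr(pose) -> 2D array."""
    def cost(pose):
        drr = render_drr(pose)               # simulated projection at this pose
        return -normalized_cross_correlation(drr, fluoro_image)
    result = minimize(cost, pose0, method="Powell")  # derivative-free search
    return result.x
```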

    ADVANCED MOTION MODELS FOR RIGID AND DEFORMABLE REGISTRATION IN IMAGE-GUIDED INTERVENTIONS

    Image-guided surgery (IGS) has been a major area of interest in recent decades that continues to transform surgical interventions and enable safer, less invasive procedures. In the preoperative context, diagnostic imaging, including computed tomography (CT) and magnetic resonance (MR) imaging, offers a basis for surgical planning (e.g., definition of the target, adjacent anatomy, and the surgical path or trajectory to the target). At the intraoperative stage, such preoperative images and the associated planning information are registered to intraoperative coordinates via a navigation system to enable visualization of (tracked) instrumentation relative to preoperative images. A major limitation of such an approach is that motions during surgery, whether rigid motions of bones manipulated during orthopaedic surgery or brain soft-tissue deformation in neurosurgery, are not captured, diminishing the accuracy of navigation systems. This dissertation seeks to use intraoperative images (e.g., X-ray fluoroscopy and cone-beam CT) to provide more up-to-date anatomical context that properly reflects the state of the patient during interventions and thereby improve the performance of IGS. Advanced motion models for inter-modality image registration are developed to improve the accuracy of both preoperative planning and intraoperative guidance for applications in orthopaedic pelvic trauma surgery and minimally invasive intracranial neurosurgery. Image registration algorithms are developed with increasing complexity of the motion that can be accommodated (single-body rigid, multi-body rigid, and deformable) and increasing complexity of the registration models (statistical models, physics-based models, and deep learning-based models). For orthopaedic pelvic trauma surgery, the dissertation includes: (i) a series of statistical models of the shape and pose variations of one or more pelvic bones, together with an atlas of trajectory annotations; (ii) frameworks for automatic segmentation via registration of the statistical models to preoperative CT and for planning of fixation trajectories and dislocation/fracture reduction; and (iii) 3D-2D guidance using intraoperative fluoroscopy. For intracranial neurosurgery, the dissertation includes three inter-modality deformable registration methods using physics-based Demons and deep learning models for CT-guided and CBCT-guided procedures.
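
    The statistical shape models mentioned above are commonly built as point-distribution models via PCA over corresponded training shapes. The sketch below is a generic, minimal illustration of that idea, not the dissertation's code; all names are illustrative, and the training shapes are assumed to already be in point correspondence.

```python
# Minimal point-distribution statistical shape model via PCA.
import numpy as np

def build_ssm(shapes):
    """shapes: (n_samples, n_points*3) matrix of corresponded shapes."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD-based PCA: rows of vt are the modes of shape variation
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variances = (s ** 2) / (len(shapes) - 1)   # per-mode variance
    return mean, vt, variances

def synthesize(mean, modes, variances, coeffs):
    """Generate a shape from standardized mode coefficients."""
    k = len(coeffs)
    return mean + (coeffs * np.sqrt(variances[:k])) @ modes[:k]
```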

    Augmented Reality and Artificial Intelligence in Image-Guided and Robot-Assisted Interventions

    In minimally invasive orthopedic procedures, the surgeon places wires, screws, and surgical implants through the muscles and bony structures under image guidance. These interventions require alignment of the pre- and intra-operative patient data, the intra-operative scanner, surgical instruments, and the patient. Suboptimal interaction with patient data and challenges in mastering 3D anatomy based on ill-posed 2D interventional images are essential concerns in image-guided therapies. State-of-the-art approaches often support the surgeon by using external navigation systems or ill-conditioned image-based registration methods, both of which have certain drawbacks. Augmented reality (AR) has been introduced into operating rooms in the last decade; however, in image-guided interventions it has often only been considered a visualization device improving traditional workflows. Consequently, the technology has yet to gain the maturity it requires to redefine procedures, user interfaces, and interactions. This dissertation investigates the applications of AR, artificial intelligence, and robotics in interventional medicine. Our solutions were applied to a broad spectrum of problems and tasks, namely improving imaging and acquisition, image computing and analytics for registration and image understanding, and enhancing interventional visualization. The benefits of these approaches were also demonstrated in robot-assisted interventions. We revealed how exemplary workflows can be redefined via AR by taking full advantage of head-mounted displays when they are entirely co-registered with the imaging systems and the environment at all times. The proposed AR landscape is enabled by co-localizing the users and the imaging devices via the operating room environment and exploiting all involved frustums to move spatial information between different bodies. The system's awareness of the geometric and physical characteristics of X-ray imaging allows the exploration of different human-machine interfaces. We also leveraged the principles governing image formation and combined them with deep learning and RGBD sensing to fuse images and reconstruct interventional data. We hope that our holistic approaches to improving the interface of surgery and enhancing the usability of interventional imaging not only augment the surgeon's capabilities but also improve the surgical team's experience in carrying out an effective intervention with reduced complications.
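
    Keeping a head-mounted display co-registered with the imaging system and the environment, as described above, reduces at its core to chaining rigid transforms between coordinate frames. Below is a minimal NumPy sketch, with hypothetical frame names and poses, of moving a point from a C-arm frame into an HMD frame via a shared room frame.

```python
# Minimal sketch of moving a 3D point between tracked coordinate
# frames by chaining 4x4 rigid transforms. Frame names and numeric
# poses are illustrative placeholders.
import numpy as np

def rigid(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical poses of the HMD and the C-arm in a shared room frame.
T_room_from_hmd = rigid(np.eye(3), np.array([0.5, 0.0, 1.2]))
T_room_from_carm = rigid(np.eye(3), np.array([-0.3, 0.1, 0.9]))

# Express a point detected in the C-arm frame inside the HMD frame.
T_hmd_from_carm = np.linalg.inv(T_room_from_hmd) @ T_room_from_carm
p_carm = np.array([0.0, 0.0, 0.4, 1.0])   # homogeneous point in C-arm frame
p_hmd = T_hmd_from_carm @ p_carm
```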

    ROBUST DEEP LEARNING METHODS FOR SOLVING INVERSE PROBLEMS IN MEDICAL IMAGING

    The medical imaging field has a long history of incorporating machine learning algorithms to address inverse problems in image acquisition and analysis. With the impressive successes of deep neural networks on natural images, we seek to answer the obvious question: do these successes also transfer to the medical image domain? The answer may seem straightforward on the surface. Tasks like image-to-image transformation, segmentation, and detection have direct applications to medical images. For example, metal artifact reduction for Computed Tomography (CT) and reconstruction from undersampled k-space signals for Magnetic Resonance (MR) imaging can be formulated as image-to-image transformations; lesion/tumor detection and segmentation are obvious applications of higher-level vision tasks. While these tasks may be similar in formulation, many practical constraints and requirements exist in solving them for medical images. Patient data is highly sensitive and usually only accessible from individual institutions, which constrains the available ground truth, dataset size, and computational resources these institutions have to train performant models. Due to the mission-critical nature of healthcare applications, requirements such as performance robustness and speed are also stringent. As such, the big-data, dense-computation, supervised learning paradigm of mainstream deep learning is often insufficient in these situations. In this dissertation, we investigate ways to benefit from the powerful representational capacity of deep neural networks while still satisfying the above-mentioned constraints and requirements. The first part of this dissertation focuses on adapting supervised learning to account for variations such as different medical image modalities, image quality, architecture designs, and tasks. The second part focuses on improving model robustness on unseen data through domain adaptation, which ameliorates performance degradation due to distribution shifts. The last part focuses on self-supervised learning and learning from synthetic data, with a focus on tomographic imaging; this is essential in many situations where the desired ground truth may not be accessible.
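
    As a toy illustration of the image-to-image formulation mentioned above (e.g., metal artifact reduction), the following PyTorch sketch trains a tiny convolutional network on paired corrupted/clean images. It is purely illustrative: the random tensors are stand-ins for actual scans, and real systems need far more capacity, data, and the robustness measures discussed in the abstract.

```python
# Minimal supervised image-to-image regression sketch (illustrative).
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

corrupted = torch.randn(4, 1, 64, 64)   # stand-in for artifact images
clean = torch.randn(4, 1, 64, 64)       # stand-in for ground truth

for step in range(10):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(corrupted), clean)
    loss.backward()
    opt.step()
```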

    A Survey on Deep Learning in Medical Image Registration: New Technologies, Uncertainty, Evaluation Metrics, and Beyond

    Over the past decade, deep learning technologies have greatly advanced the field of medical image registration. The initial developments, such as ResNet-based and U-Net-based networks, laid the groundwork for deep learning-driven image registration. Subsequent progress has been made in various aspects of deep learning-based registration, including similarity measures, deformation regularizations, and uncertainty estimation. These advancements have not only enriched the field of deformable image registration but have also facilitated its application in a wide range of tasks, including atlas construction, multi-atlas segmentation, motion estimation, and 2D-3D registration. In this paper, we present a comprehensive overview of the most recent advancements in deep learning-based image registration. We begin with a concise introduction to the core concepts of deep learning-based image registration. Then, we delve into innovative network architectures, loss functions specific to registration, and methods for estimating registration uncertainty. Additionally, this paper explores appropriate evaluation metrics for assessing the performance of deep learning models in registration tasks. Finally, we highlight the practical applications of these novel techniques in medical imaging and discuss the future prospects of deep learning-based image registration.
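
    A typical unsupervised deformable-registration objective of the kind such surveys cover combines an image-similarity term on the warped moving image with a smoothness regularizer on the displacement field. Below is a minimal NumPy/SciPy sketch with illustrative choices (mean squared error for similarity, gradient-magnitude regularization); real methods use the richer measures and regularizers discussed in the paper.

```python
# Minimal sketch of an unsupervised registration loss:
#   L = || M(x + u(x)) - F(x) ||^2 + lambda * || grad u ||^2
import numpy as np
from scipy.ndimage import map_coordinates

def warp(moving, disp):
    """moving: (H, W) image; disp: (2, H, W) displacement in pixels."""
    h, w = moving.shape
    grid = np.mgrid[0:h, 0:w].astype(float)
    return map_coordinates(moving, grid + disp, order=1, mode="nearest")

def registration_loss(fixed, moving, disp, lam=0.1):
    sim = np.mean((warp(moving, disp) - fixed) ** 2)   # similarity term
    grads = np.gradient(disp, axis=(1, 2))             # smoothness term
    reg = sum(np.mean(g ** 2) for g in grads)
    return sim + lam * reg
```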

    Characterising pattern asymmetry in pigmented skin lesions

    In the clinical diagnosis of pigmented skin lesions, asymmetric pigmentation is often indicative of melanoma. This paper describes a method and measures for characterizing lesion symmetry. An estimate of mirror symmetry is first computed for a number of axes at different degrees of rotation with respect to the lesion centre. The statistics of these estimates are then used to assess the overall symmetry. The method is applied to three different lesion representations showing the overall pigmentation, the pigmentation pattern, and the pattern of dermal melanin. The best measure is a 100% sensitive and 96% specific indicator of melanoma on a test set of 33 lesions, with a separate training set consisting of 66 lesions.
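
    The measure described above can be sketched as scoring mirror symmetry about axes through the lesion centre at several rotations and then summarizing the per-axis scores. The following is a minimal illustration of that idea, not the paper's implementation; the difference statistic and the number of axes are assumptions.

```python
# Minimal sketch: score mirror symmetry about rotated axes through
# the lesion centre, then summarize the per-axis scores.
import numpy as np
from scipy.ndimage import rotate

def mirror_symmetry_scores(lesion, n_axes=8):
    """lesion: 2D array centred on the lesion; returns per-axis scores."""
    scores = []
    for k in range(n_axes):
        angle = 180.0 * k / n_axes
        r = rotate(lesion, angle, reshape=False, order=1)
        flipped = np.flipud(r)             # mirror about the horizontal axis
        diff = np.abs(r - flipped).mean()  # lower = more symmetric
        scores.append(diff)
    return np.array(scores)

# Overall asymmetry statistics, e.g. mean and worst-axis score:
#   scores = mirror_symmetry_scores(img); scores.mean(), scores.max()
```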

    Differential geometry methods for biomedical image processing: from segmentation to 2D/3D registration

    This thesis establishes a biomedical image analysis framework for the advanced visualization of biological structures. It consists of two important parts: 1) the segmentation of structures of interest in 3D medical scans, and 2) the registration of patient-specific 3D models with 2D interventional images. Segmenting biological structures results in 3D computational models that are simple to visualize and that can be analyzed quantitatively. Registering a 3D model with interventional images makes it possible to position the 3D model within the physical world. By combining the information from a 3D model and 2D interventional images, the proposed framework can improve the guidance of surgical interventions by reducing the ambiguities inherent in the interpretation of 2D images. Two specific segmentation problems are considered: 1) the segmentation of large structures with low-frequency intensity nonuniformity, and 2) the detection of fine curvilinear structures. First, we directed our attention toward the segmentation of relatively large structures with low-frequency intensity nonuniformity. Such structures are important in medical imaging since they are commonly encountered in MRI. Also, the nonuniform diffusion of the contrast agent in some other modalities, such as CTA, leads to structures of nonuniform appearance. A level-set method that uses a local-linear region model is defined and applied to the challenging problem of segmenting brain tissues in MRI. The unique characteristics of the proposed method make it possible to account for important image nonuniformity implicitly. To the best of our knowledge, this is the first time a region-based level-set model has been used to segment real-world MRI brain scans with convincing results. The second segmentation problem considered is the detection of fine curvilinear structures in 3D medical images. Detecting these structures is crucial since they can represent veins, arteries, bronchi, or other important tissues. Unfortunately, most currently available curvilinear structure detection filters incur significant signal loss at bifurcations of two structures. This peculiarity limits the performance of all subsequent processing, whether it be understanding an angiography acquisition, computing an accurate tractography, or automatically classifying the image voxels. This thesis presents a new curvilinear structure detection filter that is robust to the presence of X- and Y-junctions. At the same time, it is conceptually simple and deterministic, and allows for an intuitive representation of the structure's principal directions. Once a 3D computational model is available, it can be used to enhance surgical guidance. A 2D/3D non-rigid method is proposed that brings a 3D centerline model of the coronary arteries into correspondence with bi-plane fluoroscopic angiograms. The registered model is overlaid on top of the interventional angiograms to provide surgical assistance during image-guided chronic total occlusion procedures, which reduces the uncertainty inherent in 2D interventional images. A fully non-rigid registration model is proposed and used to compensate for any local shape discrepancy. This method is based on a variational framework and uses a simultaneous matching and reconstruction process. With a typical run time of less than 3 seconds, the algorithms are fast enough for interactive applications.
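
    For context on the curvilinear-structure detection discussed above, the sketch below computes a crude, conventional Hessian-eigenvalue line measure of the kind the proposed junction-robust filter improves upon: along a bright tube, one eigenvalue is near zero while the other two are strongly negative. It is a baseline illustration under those assumptions, not the thesis filter.

```python
# Minimal Hessian-based curvilinear-structure measure on a 3D volume.
import numpy as np
from scipy.ndimage import gaussian_filter

def tubeness(volume, sigma=1.5):
    """Crude per-voxel line measure on a 3D array; illustrative only."""
    v = gaussian_filter(volume, sigma)     # smooth at the scale of interest
    grads = np.gradient(v)
    # Assemble the 3x3 Hessian at every voxel from second derivatives
    H = np.empty(v.shape + (3, 3))
    for i in range(3):
        gi = np.gradient(grads[i])
        for j in range(3):
            H[..., i, j] = gi[j]
    eig = np.linalg.eigvalsh(H)            # eigenvalues in ascending order
    l1, l2 = eig[..., 0], eig[..., 1]      # two most negative for bright tubes
    return np.where((l1 < 0) & (l2 < 0), np.sqrt(l1 * l2), 0.0)
```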