
    Combining Shape and Learning for Medical Image Analysis

    Automatic methods with the ability to make accurate, fast and robust assessments of medical images are in high demand in medical research and clinical care. Excellent automatic algorithms are characterized by speed, allowing for scalability, and an accuracy comparable to that of an expert radiologist. They should produce morphologically and physiologically plausible results while generalizing well to unseen and rare anatomies. Still, there are few, if any, applications where today's automatic methods succeed in meeting these requirements. The focus of this thesis is two tasks essential for enabling automatic medical image assessment: medical image segmentation and medical image registration. Medical image registration, i.e. aligning two separate medical images, is used as an important sub-routine in many image analysis tools as well as in image fusion, disease progress tracking and population statistics. Medical image segmentation, i.e. delineating anatomically or physiologically meaningful boundaries, is used for both diagnostic and visualization purposes in a wide range of applications, e.g. in computer-aided diagnosis and surgery. The thesis comprises five papers addressing medical image registration and/or segmentation for a diverse set of applications and modalities, i.e. pericardium segmentation in cardiac CTA, brain region parcellation in MRI, multi-organ segmentation in CT, heart ventricle segmentation in cardiac ultrasound and tau PET registration. The five papers propose competitive registration and segmentation methods enabled by machine learning techniques, e.g. random decision forests and convolutional neural networks, as well as by shape modelling, e.g. multi-atlas segmentation and conditional random fields.

    Open-source software for ultrasound-based guidance in spinal fusion surgery.

    Spinal instrumentation and surgical manipulations may cause loss of navigation accuracy requiring an efficient re-alignment of the patient anatomy with pre-operative images during surgery. While intra-operative ultrasound (iUS) guidance has shown clear potential to reduce surgery time, compared with clinical computed tomography (CT) guidance, rapid registration aiming to correct for patient misalignment has not been addressed. In this article, we present an open-source platform for pedicle screw navigation using iUS imaging. The alignment method is based on rigid registration of CT to iUS vertebral images and has been designed for fast and fully automatic patient re-alignment in the operating room. Two steps are involved: first, we use the iUS probe's trajectory to achieve an initial coarse registration; then, the registration transform is refined by simultaneously optimizing gradient orientation alignment and the mean of iUS intensities passing through the CT-defined posterior surface of the vertebra. We evaluated our approach on a lumbosacral section of a porcine cadaver with seven vertebral levels. We achieved a median target registration error of 1.47 mm (100% success rate, defined by a target registration error <2 mm) when applying the initial alignment from the probe's trajectory. The approach exhibited high robustness to partial visibility of the vertebra, with success rates of 89.86% and 88.57% when missing either the left or right part of the vertebra, and robustness to initial misalignments, with a success rate of 83.14% for random starts within ±20° rotation and ±20 mm translation. Our graphics processing unit implementation achieves an efficient registration time under 8 s, which makes the approach suitable for clinical application.
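The paper's headline numbers (median TRE of 1.47 mm, success defined as TRE < 2 mm) rest on the standard target registration error computation, which can be sketched as below. The point coordinates, identity transform, and function name are hypothetical illustrations, not data or code from the study:

```python
import numpy as np

def target_registration_error(T, fixed_pts, moving_pts):
    """Per-landmark distance between fixed-space targets and moving-space
    targets mapped through the 4x4 homogeneous rigid transform T."""
    moved = (T[:3, :3] @ moving_pts.T).T + T[:3, 3]
    return np.linalg.norm(moved - fixed_pts, axis=1)

# Hypothetical landmarks: the registration (identity here) leaves a
# residual 1 mm offset along x for every target.
fixed = np.array([[10.0, 20.0, 30.0], [15.0, 25.0, 35.0]])
moving = fixed + np.array([1.0, 0.0, 0.0])

tre = target_registration_error(np.eye(4), fixed, moving)
success_rate = float(np.mean(tre < 2.0))  # the paper's <2 mm criterion
```

With real data, `fixed` and `moving` would be corresponding anatomical landmarks in CT and iUS space and `T` the estimated rigid registration.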

    Deep learning in medical image registration: introduction and survey

    Image registration (IR) is a process that deforms images to align them with respect to a reference space, making it easier for medical practitioners to examine various medical images in a standardized reference frame, for example one with the same rotation and scale. This document introduces image registration using a simple numeric example. It provides a definition of image registration along with a space-oriented symbolic representation. The review covers various aspects of image transformations, including affine, deformable, invertible, and bidirectional transformations, as well as medical image registration algorithms such as VoxelMorph, Demons, SyN, Iterative Closest Point, and SynthMorph. It also explores atlas-based registration and multistage image registration techniques, including coarse-fine and pyramid approaches. Furthermore, the survey discusses medical image registration taxonomies, datasets, and evaluation measures such as correlation-based metrics, segmentation-based metrics, processing time, and model size. It also explores applications in image-guided surgery, motion tracking, and tumor diagnosis. Finally, the document addresses future research directions, including the further development of transformers.
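As a concrete illustration of the affine transformations the survey discusses, here is a minimal nearest-neighbour affine warp in NumPy. The function name and the inverse-mapping formulation are one possible sketch under assumed conventions (row/column coordinates, forward map x' = A x + t), not code from the survey:

```python
import numpy as np

def affine_warp(image, A, t):
    """Warp a 2D image under x' = A x + t by inverse mapping each output
    pixel back to the input and sampling with nearest-neighbour rounding."""
    H, W = image.shape
    out = np.zeros_like(image)
    A_inv = np.linalg.inv(A)
    ys, xs = np.mgrid[0:H, 0:W]
    dst = np.stack([ys.ravel(), xs.ravel()], axis=1)      # output pixels
    src = np.rint((dst - t) @ A_inv.T).astype(int)        # inverse map
    valid = ((src[:, 0] >= 0) & (src[:, 0] < H)
             & (src[:, 1] >= 0) & (src[:, 1] < W))        # inside input?
    out[dst[valid, 0], dst[valid, 1]] = image[src[valid, 0], src[valid, 1]]
    return out

# A pure translation by one pixel along the column axis.
img = np.zeros((5, 5))
img[2, 2] = 1.0
warped = affine_warp(img, np.eye(2), np.array([0.0, 1.0]))
```

A rotation or scaling is obtained the same way by changing `A`; deformable transforms generalize this by replacing the single global `A, t` with a per-pixel displacement field.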

    Neurosurgical Ultrasound Pose Estimation Using Image-Based Registration and Sensor Fusion - A Feasibility Study

    Modern neurosurgical procedures often rely on computer-assisted real-time guidance using multiple medical imaging modalities. State-of-the-art commercial products enable the fusion of pre-operative with intra-operative images (e.g., magnetic resonance [MR] with ultrasound [US] images), as well as the on-screen visualization of procedures in progress. In so doing, US images can be employed as a template to which pre-operative images can be registered, to correct for anatomical changes, to provide live-image feedback, and consequently to improve confidence when making resection margin decisions near eloquent regions during tumour surgery. In spite of the potential for tracked ultrasound to improve many neurosurgical procedures, it is not widely used. State-of-the-art systems are handicapped by optical tracking's need for a consistent line-of-sight, for tracked rigid bodies that are kept clean and rigidly fixed, and for a calibration workflow. The goal of this work is to improve the value offered by co-registered ultrasound images without the workflow drawbacks of conventional systems. The novel work in this thesis includes: the exploration and development of a GPU-enabled 2D-3D multi-modal registration algorithm based on the existing LC2 metric; and the use of this registration algorithm in the context of a sensor and image-fusion algorithm. The work presented here is a motivating step in a vision towards a heterogeneous tracking framework for image-guided interventions where the knowledge from intraoperative imaging, pre-operative imaging, and (potentially disjoint) wireless sensors in the surgical field are seamlessly integrated for the benefit of the surgeon. The technology described in this thesis, inspired by advances in robot localization, demonstrates how inaccurate pose data from disjoint sources can produce a localization system greater than the sum of its parts.
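The LC2 metric referenced here is, in its original formulation, a patch-wise measure of how well ultrasound intensities are explained by a linear combination of MR intensity and MR gradient magnitude. A deliberately simplified global (non-patch-wise) version, written as an illustrative sketch rather than the thesis's implementation, might look like this:

```python
import numpy as np

def lc2_global(us, mr):
    """Simplified global LC2: least-squares fit of US intensities as
    a * MR + b * |grad MR| + c, scored as explained variance in [0, 1].
    (The original metric evaluates this locally over patches and
    aggregates; this global variant is only an illustration.)"""
    mr = mr.astype(float)
    grad = np.linalg.norm(np.gradient(mr), axis=0)   # gradient magnitude
    X = np.stack([mr.ravel(), grad.ravel(), np.ones(mr.size)], axis=1)
    y = us.ravel().astype(float)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1.0 - resid.var() / y.var() if y.var() > 0 else 0.0

# A US image that is exactly a linear function of MR is perfectly
# explained, so the score reaches 1.
mr = np.arange(16.0).reshape(4, 4)
us = 2.0 * mr + 3.0
score = lc2_global(us, mr)
```

The appeal of this family of metrics for US-to-MR registration is that no single intensity mapping is assumed: the linear fit is re-estimated at every candidate pose, so only the quality of the fit drives the optimization.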

    Computer Vision in the Surgical Operating Room

    Background: Multiple types of surgical cameras are used in modern surgical practice and provide a rich visual signal that is used by surgeons to visualize the clinical site and make clinical decisions. This signal can also be used by artificial intelligence (AI) methods to provide support in identifying instruments, structures, or activities both in real-time during procedures and postoperatively for analytics and understanding of surgical processes. Summary: In this paper, we provide a succinct perspective on the use of AI and especially computer vision to power solutions for the surgical operating room (OR). The synergy between data availability and technical advances in computational power and AI methodology has led to rapid developments in the field and promising advances. Key Messages: With the increasing availability of surgical video sources and the convergence of technologies around video storage, processing, and understanding, we believe clinical solutions and products leveraging vision are going to become an important component of modern surgical capabilities. However, both technical and clinical challenges remain to be overcome to efficiently translate vision-based approaches into the clinic.

    The state-of-the-art in ultrasound-guided spine interventions.

    During the last two decades, intra-operative ultrasound (iUS) imaging has been employed for various surgical procedures of the spine, including spinal fusion and needle injections. Accurate and efficient registration of pre-operative computed tomography or magnetic resonance images with iUS images is a key element in the success of iUS-based spine navigation. While widely investigated in research, iUS-based spine navigation has not yet been established in the clinic. This is due to several factors, including the lack of a standard methodology for the assessment of accuracy, robustness, reliability, and usability of the registration method. To address these issues, we present a systematic review of the state-of-the-art techniques for iUS-guided registration in spinal image-guided surgery (IGS). The review follows a new taxonomy based on the four steps involved in the surgical workflow: pre-processing, registration initialization, estimation of the required patient-to-image transformation, and a visualization process. We provide a detailed analysis of the measurements in terms of accuracy, robustness, reliability, and usability that need to be met during the evaluation of a spinal IGS framework. Although this review is focused on spinal navigation, we expect similar evaluation criteria to be relevant for other IGS applications.

    Ultrasound-based navigated pedicle screw insertion without intraoperative radiation: feasibility study on porcine cadavers

    BACKGROUND Navigation systems for spinal fusion surgery rely on intraoperative computed tomography (CT) or fluoroscopy imaging. Both expose the patient, surgeons, and operating room staff to significant amounts of radiation. Alternative methods involving intraoperative ultrasound (iUS) imaging have recently shown promise for image-to-patient registration. Yet, the feasibility and safety of iUS navigation in spinal fusion have not been demonstrated. PURPOSE To evaluate the accuracy of pedicle screw insertion in lumbar and thoracolumbar spinal fusion using a fully automated iUS navigation system. STUDY DESIGN Prospective porcine cadaver study. METHODS Five porcine cadavers were used to instrument the lumbar and thoracolumbar spine using posterior open surgery. During the procedure, iUS images were acquired and used to establish automatic registration between the anatomy and preoperative CT images. Navigation was performed with the preoperative CT using tracked instruments. The accuracy of the system was measured as the distance of manually collected points to the preoperative CT vertebral surface and compared against fiducial-based registration. A postoperative CT was acquired, and screw placements were manually verified. We report breach rates, as well as axial and sagittal screw deviations. RESULTS A total of 56 screws were inserted (5.50 mm diameter n=50, and 6.50 mm diameter n=6). Fifty-two screws were inserted safely without breach. Four screws (7.14%) presented a medial breach with an average deviation of 1.35±0.37 mm (all <2 mm). Two breaches were caused by 6.50 mm diameter screws, and two by 5.50 mm screws. For vertebrae instrumented with 5.50 mm screws, the average axial diameter of the pedicle was 9.29 mm, leaving a 1.89 mm margin in the left and right pedicle. For vertebrae instrumented with 6.50 mm screws, the average axial diameter of the pedicle was 8.99 mm, leaving a 1.24 mm margin in the left and right pedicle. The average distance to the vertebral surface was 0.96 mm using iUS registration and 0.97 mm using fiducial-based registration. CONCLUSIONS We successfully implanted all pedicle screws in the thoracolumbar spine using the ultrasound-based navigation system. All breaches recorded were minor (<2 mm) and the breach rate (7.14%) was comparable to existing literature. More investigation is needed to evaluate consistency, reproducibility, and performance in a surgical context. CLINICAL SIGNIFICANCE Intraoperative US-based navigation is feasible and practical for pedicle screw insertion in a porcine model. It might be used as a low-cost and radiation-free alternative to intraoperative CT and fluoroscopy in the future.

    A Survey on Deep Learning in Medical Image Registration: New Technologies, Uncertainty, Evaluation Metrics, and Beyond

    Over the past decade, deep learning technologies have greatly advanced the field of medical image registration. The initial developments, such as ResNet-based and U-Net-based networks, laid the groundwork for deep learning-driven image registration. Subsequent progress has been made in various aspects of deep learning-based registration, including similarity measures, deformation regularizations, and uncertainty estimation. These advancements have not only enriched the field of deformable image registration but have also facilitated its application in a wide range of tasks, including atlas construction, multi-atlas segmentation, motion estimation, and 2D-3D registration. In this paper, we present a comprehensive overview of the most recent advancements in deep learning-based image registration. We begin with a concise introduction to the core concepts of deep learning-based image registration. Then, we delve into innovative network architectures, loss functions specific to registration, and methods for estimating registration uncertainty. Additionally, this paper explores appropriate evaluation metrics for assessing the performance of deep learning models in registration tasks. Finally, we highlight the practical applications of these novel techniques in medical imaging and discuss the future prospects of deep learning-based image registration.
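Among the segmentation-based evaluation metrics such surveys discuss, the Dice overlap of warped versus fixed label maps is the most widely reported; a minimal sketch (an illustration, not the survey's code) is:

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice overlap of two binary masks: 2|A n B| / (|A| + |B|).
    Returns 1.0 for two empty masks by convention."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two masks sharing one of their two foreground voxels overlap at 0.5.
a = np.array([1, 1, 0, 0])
b = np.array([1, 0, 1, 0])
score = dice(a, b)
```

In registration evaluation, `seg_a` would be a fixed-image segmentation and `seg_b` a moving-image segmentation warped by the predicted transform; complementary metrics (target registration error, Jacobian-determinant folding counts, runtime, model size) cover the aspects Dice cannot.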