
    Medical Image Registration Using Deep Neural Networks

    Registration is a fundamental problem in medical image analysis wherein images are transformed spatially to align corresponding anatomical structures. Recently, the development of learning-based methods, which exploit deep neural networks and can outperform classical iterative methods, has received considerable interest from the research community. This interest is due in part to the substantially reduced computational requirements of learning-based methods during inference, which makes them particularly well suited to real-time registration applications. Despite these successes, learning-based methods can perform poorly when applied to images from different modalities whose intensity characteristics vary greatly, such as magnetic resonance and ultrasound imaging. Moreover, registration performance is often demonstrated on well-curated datasets that closely match the distribution of the training data, making it difficult to determine whether the demonstrated performance reflects the generalization and robustness required for clinical use. This thesis presents learning-based methods that address these difficulties by utilizing intuitive point-set-based representations, user interaction, and meta-learning-based training strategies. Primarily, this is demonstrated with a focus on the non-rigid registration of 3D magnetic resonance imaging to sparse 2D transrectal ultrasound images to assist in the delivery of targeted prostate biopsies. While conventional systematic prostate biopsy methods can require many samples to be taken to confidently produce a diagnosis, tumor-targeted approaches have shown improved patient, diagnostic, and disease management outcomes with fewer samples. However, the available intraoperative transrectal ultrasound imaging alone is insufficient for accurate targeted guidance. As such, this exemplar application is used to illustrate the effectiveness of sparse, interactively acquired ultrasound imaging for real-time, interventional registration. The presented methods are found to improve registration accuracy relative to the state of the art, with substantially lower computation time, while requiring only a fraction of the data at inference. As a result, these methods are particularly attractive given their potential for real-time registration in interventional applications.
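    As a rough, hedged illustration of the final spatial-transform step that learning-based registration methods share (not the specific point-set-based models presented in this thesis), the sketch below warps a 3D moving image with a dense displacement field using SciPy; the array names, shapes, and zero placeholder field are assumptions.

```python
# Minimal sketch of applying a dense displacement field to a 3D moving image,
# the final step of most learning-based deformable registration pipelines.
# The displacement field would normally be predicted by a trained network;
# here it is a placeholder array. Names and shapes are illustrative only.
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(moving, displacement):
    """Warp a 3D image by a voxel-space displacement field of shape (3, D, H, W)."""
    grid = np.indices(moving.shape, dtype=np.float32)    # identity sampling grid
    sample_coords = grid + displacement                  # displaced coordinates
    # Trilinear interpolation of the moving image at the displaced coordinates.
    return map_coordinates(moving, sample_coords, order=1, mode='nearest')

# Example with placeholder data standing in for an MR volume and a predicted field.
moving = np.random.rand(32, 32, 32).astype(np.float32)
displacement = np.zeros((3, 32, 32, 32), dtype=np.float32)  # zero field = identity warp
warped = warp_image(moving, displacement)
assert np.allclose(warped, moving, atol=1e-5)
```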

    A Novel System and Image Processing for Improving 3D Ultrasound-guided Interventional Cancer Procedures

    Image-guided medical interventions are diagnostic and therapeutic procedures that focus on minimizing surgical incisions for improving disease management and reducing patient burden relative to conventional techniques. Interventional approaches, such as biopsy, brachytherapy, and ablation procedures, have been used in the management of cancer for many anatomical regions, including the prostate and liver. Needles and needle-like tools are often used for achieving planned clinical outcomes, but the increased dependency on accurate targeting, guidance, and verification can limit the widespread adoption and clinical scope of these procedures. Image-guided interventions that incorporate 3D information intraoperatively have been shown to improve the accuracy and feasibility of these procedures, but clinical needs still exist for improving workflow and reducing physician variability with widely applicable, cost-conscious approaches. The objective of this thesis was to incorporate 3D ultrasound (US) imaging and image processing methods during image-guided cancer interventions in the prostate and liver to provide accessible, fast, and accurate approaches for clinical improvements. An automatic 2D-3D transrectal ultrasound (TRUS) registration algorithm was optimized and implemented in a 3D TRUS-guided system to provide continuous prostate motion corrections with sub-millimeter and sub-degree error in 36 ± 4 ms. An automatic and generalizable 3D TRUS prostate segmentation method was developed on a diverse clinical dataset of patient images from biopsy and brachytherapy procedures, resulting in errors comparable to gold-standard accuracy with a computation time of 0.62 s. After validation of mechanical and image reconstruction accuracy, a novel 3D US system for focal liver tumor therapy was developed to guide therapy applicators with 4.27 ± 2.47 mm error. The need to verify applicator placement post-insertion motivated the development of a 3D US applicator segmentation approach, which was demonstrated to provide clinically feasible assessments in 0.246 ± 0.007 s. Lastly, a general needle and applicator tool segmentation algorithm was developed to provide accurate intraoperative and real-time insertion feedback for multiple anatomical locations during a variety of clinical interventional procedures. Clinical translation of these developed approaches has the potential to improve overall patient quality of life and outcomes by improving detection rates and reducing local cancer recurrence in patients with prostate and liver cancer.
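    As a hedged illustration of how sub-millimeter registration error of this kind is commonly quantified (not the thesis's actual validation code), the sketch below computes a target registration error from corresponding fiducial points; the function name, point sets, and transform are assumptions.

```python
# Hedged sketch: quantifying registration error as target registration error (TRE),
# the mean Euclidean distance between transformed fiducials and their known targets.
# Generic illustration, not the thesis's validation code; names are assumed.
import numpy as np

def target_registration_error(fiducials_mm, targets_mm, transform_4x4):
    """fiducials_mm, targets_mm: (N, 3) points in mm; transform_4x4: rigid transform."""
    homogeneous = np.hstack([fiducials_mm, np.ones((len(fiducials_mm), 1))])
    mapped = (transform_4x4 @ homogeneous.T).T[:, :3]          # apply the registration
    return np.linalg.norm(mapped - targets_mm, axis=1).mean()  # mean error in mm

# Example: a 1 mm translation error along x yields a TRE of exactly 1 mm.
fiducials = np.array([[10.0, 20.0, 30.0], [15.0, 25.0, 35.0]])
transform = np.eye(4)
transform[0, 3] = 1.0
print(target_registration_error(fiducials, fiducials, transform))  # -> 1.0
```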

    Vascular Segmentation Algorithms for Generating 3D Atherosclerotic Measurements

    Atherosclerosis manifests as plaques within large arteries of the body and remains a leading cause of mortality and morbidity in the world. Major cardiovascular events may occur in patients without known preexisting symptoms; thus, it is important to monitor progression and regression of the plaque burden in the arteries for evaluating a patient's response to therapy. In this dissertation, our main focus is quantification of plaque burden from the carotid and femoral arteries, which are major sites for plaque formation and are straightforward to image noninvasively due to their superficial location. Recently, 3D measurements of plaque burden have been shown to be more sensitive to changes in plaque burden than one-/two-dimensional measurements. However, despite the advancements of 3D noninvasive imaging technology with rapid acquisition capabilities, and the high sensitivity of 3D measurements of plaque burden, such measurements are still not widely used due to the inordinate amount of time and effort required to delineate the artery wall and plaque boundaries in the images. Therefore, the objective of this dissertation is to develop novel semi-automated segmentation methods that alleviate the measurement burden on the observer for segmentation of the outer wall and lumen boundaries from: (1) 3D carotid ultrasound (US) images, (2) 3D carotid black-blood magnetic resonance (MR) images, and (3) 3D femoral black-blood MR images. Segmentation of the carotid lumen and outer wall from 3D US images is a challenging task due to low image contrast, for which no method has been previously reported. Initially, we developed a 2D slice-wise segmentation algorithm based on the level set method, which was then extended to 3D. The 3D algorithm required fewer user interactions than manual delineation and the 2D method. The algorithm reduced user time by ≈79% (1.72 vs. 8.3 min) compared to manual segmentation for generating 3D-based measurements with high accuracy (Dice similarity coefficient (DSC) > 90%). Secondly, we developed a novel 3D multi-region segmentation algorithm, which simultaneously delineates both the carotid lumen and outer wall surfaces from MR images by evolving two coupled surfaces using a convex max-flow-based technique. The algorithm required user interaction only on a single transverse slice of the 3D image to generate 3D surfaces of the lumen and outer wall. The algorithm was parallelized using graphics processing units (GPUs) to increase computational speed, reducing user time by 93% (0.78 vs. 12 min) compared to manual segmentation. Moreover, the algorithm yielded high accuracy (DSC > 90%) and high precision (intra-observer CV < 5.6% and inter-observer CV < 6.6%). Finally, we developed and validated an algorithm based on a convex max-flow formulation to segment the femoral arteries that enforces a tubular shape prior and an inter-surface consistency constraint between the outer wall and lumen to maintain a minimum separation distance between the two surfaces. The algorithm required the observer to choose only about 11 points on the medial axis of the artery to yield the 3D surfaces of the lumen and outer wall, which reduced operator time by 97% (1.8 vs. 70-80 min) compared to manual segmentation. Furthermore, the proposed algorithm reported DSC greater than 85% and small intra-observer variability (CV ≈ 6.69%).
In conclusion, the development of robust semi-automated algorithms for generating 3D measurements of plaque burden may accelerate translation of 3D measurements to clinical trials and subsequently to clinical care.
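    Accuracy in this work is summarized with the Dice similarity coefficient (DSC). For reference, a minimal NumPy sketch of how this overlap metric is typically computed between an algorithm's mask and a manual one follows; the mask names and example geometry are illustrative assumptions.

```python
# Minimal sketch of the Dice similarity coefficient (DSC) between two binary
# 3D segmentation masks, the overlap metric cited in the abstract. Illustrative only.
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """DSC = 2|A ∩ B| / (|A| + |B|), in [0, 1]; 1 means perfect overlap."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    denominator = a.sum() + b.sum()
    return 2.0 * intersection / denominator if denominator > 0 else 1.0

# Example: two spheres offset by a few voxels overlap with DSC well below 1.
grid = np.indices((64, 64, 64))
algorithm_mask = ((grid - 32) ** 2).sum(axis=0) < 15 ** 2
manual_mask = ((grid - np.array([32, 32, 36]).reshape(3, 1, 1, 1)) ** 2).sum(axis=0) < 15 ** 2
print(f"DSC = {dice_coefficient(algorithm_mask, manual_mask):.3f}")
```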

    Targeted prostate biopsy using statistical image analysis

    In this paper, a method for maximizing the probability of prostate cancer detection via biopsy is presented, combining image analysis and optimization techniques. This method consists of three major steps. First, a statistical atlas of the spatial distribution of prostate cancer is constructed from histological images obtained from radical prostatectomy specimens. Second, a probabilistic optimization framework is employed to optimize the biopsy strategy so that the probability of cancer detection is maximized under needle placement uncertainties. Finally, the optimized biopsy strategy generated in the atlas space is mapped to a specific patient space using an automated segmentation and elastic registration method. Cross-validation experiments showed that the predictive power of the optimized biopsy strategy for cancer detection reached the 94%-96% levels for 6-7 biopsy cores, which is significantly better than standard random-systematic biopsy protocols, thereby encouraging further investigation of optimized biopsy strategies in prospective clinical studies. Index Terms: biopsy optimization, prostate cancer, spatial normalization, statistical image analysis.
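    As a loose, hedged illustration of the idea of optimizing core placement against a cancer-probability atlas (greatly simplified relative to the paper's probabilistic framework, and ignoring needle placement uncertainty), here is a greedy selection sketch; all names, sizes, and the spherical core footprint are assumptions.

```python
# Hedged sketch: a much-simplified stand-in for the paper's biopsy optimization.
# It greedily picks core locations that maximize the summed cancer probability
# covered by a spherical footprint approximating the tissue sampled by a core,
# in a voxelized atlas. It ignores the paper's needle placement uncertainty model;
# all names and sizes are assumptions.
import numpy as np

def greedy_core_selection(atlas, n_cores=6, core_radius=2):
    """atlas: 3D array of per-voxel cancer probabilities. Returns chosen voxel centers."""
    remaining = atlas.copy()
    centers = []
    zz, yy, xx = np.indices(atlas.shape)
    for _ in range(n_cores):
        center = np.unravel_index(np.argmax(remaining), remaining.shape)
        centers.append(tuple(int(c) for c in center))
        # Zero out the probability already sampled by this core's footprint so the
        # next core is driven toward uncovered high-probability regions.
        footprint = ((zz - center[0]) ** 2 + (yy - center[1]) ** 2
                     + (xx - center[2]) ** 2) <= core_radius ** 2
        remaining[footprint] = 0.0
    return centers

# Example with a random atlas standing in for the statistical cancer atlas.
rng = np.random.default_rng(0)
atlas = rng.random((20, 20, 20))
print(greedy_core_selection(atlas, n_cores=3))
```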

    Software and Hardware-based Tools for Improving Ultrasound Guided Prostate Brachytherapy

    Minimally invasive procedures for prostate cancer diagnosis and treatment, including biopsy and brachytherapy, rely on medical imaging such as two-dimensional (2D) and three-dimensional (3D) transrectal ultrasound (TRUS) and magnetic resonance imaging (MRI) for critical tasks such as target definition and diagnosis, treatment guidance, and treatment planning. Use of these imaging modalities introduces challenges including time-consuming manual prostate segmentation, poor needle tip visualization, and variable MR-US cognitive fusion. The objective of this thesis was to develop, validate, and implement software- and hardware-based tools specifically designed for minimally invasive prostate cancer procedures to overcome these challenges. First, a deep learning-based automatic 3D TRUS prostate segmentation algorithm was developed and evaluated using a diverse dataset of clinical images acquired during prostate biopsy and brachytherapy procedures. The algorithm significantly outperformed state-of-the-art fully 3D CNNs trained using the same dataset, while a segmentation time of 0.62 s demonstrated a significant reduction compared to manual segmentation. Next, the impact of dataset size, image quality, and image type on segmentation performance using this algorithm was examined. Segmentation accuracy was shown to plateau with as few as 1000 training images, supporting the use of deep learning approaches even when data are scarce. The development of an image quality grading scale specific to 3D TRUS images will allow for easier comparison between algorithms trained using different datasets. Third, a power Doppler (PD) US-based needle tip localization method was developed and validated in both phantom and clinical cases, demonstrating reduced tip error and variation for obstructed needles compared to conventional US. Finally, a surface-based MRI-3D TRUS deformable image registration algorithm was developed and implemented clinically, demonstrating improved registration accuracy compared to manual rigid registration and reduced variation compared to the current clinical standard of physician cognitive fusion. These generalizable and easy-to-implement tools have the potential to improve workflow efficiency and accuracy for minimally invasive prostate procedures.
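    As a hedged illustration only (not the thesis's PD-based method), one generic way to localize a needle tip from a power Doppler volume is sketched below: threshold the Doppler signal generated along the vibrating needle and take the most distal above-threshold voxel along the insertion direction; the threshold, axis convention, and names are assumptions.

```python
# Hedged sketch (not the thesis's algorithm): localize a needle tip in a power
# Doppler (PD) volume by thresholding the Doppler signal and taking the most
# distal signal voxel along the known insertion axis. Threshold, axis convention,
# and array names are assumptions.
import numpy as np

def localize_tip_from_pd(pd_volume, insertion_axis=2, threshold_fraction=0.5):
    """Return the voxel index of the most distal above-threshold PD voxel."""
    threshold = threshold_fraction * pd_volume.max()
    candidates = np.argwhere(pd_volume >= threshold)
    if candidates.size == 0:
        return None
    # The tip is taken as the candidate farthest along the insertion direction.
    best = candidates[np.argmax(candidates[:, insertion_axis])]
    return tuple(int(i) for i in best)

# Example: a synthetic PD volume with signal along a simulated needle track.
pd = np.zeros((32, 32, 32))
pd[16, 16, 5:25] = 1.0           # bright Doppler response along the shaft and tip
print(localize_tip_from_pd(pd))  # -> (16, 16, 24)
```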

    Rapid Segmentation Techniques for Cardiac and Neuroimage Analysis

    Recent technological advances in medical imaging have allowed for the quick acquisition of highly resolved data to aid in diagnosis and characterization of diseases or to guide interventions. In order to be integrated into a clinical workflow, accurate and robust methods of analysis must be developed which manage this increase in data. Recent improvements in inexpensive commercially available graphics hardware and General-Purpose Programming on Graphics Processing Units (GPGPU) have allowed many large-scale data analysis problems to be addressed in meaningful time, and will continue to do so as parallel computing technology improves. In this thesis we propose methods to tackle two clinically relevant image segmentation problems: a user-guided segmentation of myocardial scar from Late-Enhancement Magnetic Resonance Images (LE-MRI) and a multi-atlas segmentation pipeline to automatically segment and partition brain tissue from multi-channel MRI. Both methods are based on recent advances in computer vision, in particular max-flow optimization, which aims at solving the segmentation problem in continuous space. This allows (approximately) globally optimal solvers to be employed in multi-region segmentation problems without the particular drawbacks of their discrete counterparts, graph cuts, which typically present with metrication artefacts. Max-flow solvers are generally able to produce robust results, but are known for being computationally expensive, especially with large datasets such as volume images. Additionally, we propose two new deformable registration methods based on Gauss-Newton optimization and smooth the resulting deformation fields via total-variation regularization to guarantee the problem is mathematically well-posed. We compare the performance of these two methods against four highly ranked and well-known deformable registration methods on four publicly available databases and are able to demonstrate highly accurate performance with low run times. The best performing variant is subsequently used in a multi-atlas segmentation pipeline for the segmentation of brain tissue and facilitates fast run times for this computationally expensive approach. All proposed methods are implemented using GPGPU for a substantial increase in computational performance, facilitating deployment into clinical workflows. We evaluate all proposed algorithms in terms of run times, accuracy, repeatability, and errors arising from user interactions, and we demonstrate that these methods are able to outperform established methods. The presented approaches demonstrate high performance in comparison with established methods in terms of accuracy and repeatability while largely reducing run times due to the employment of GPU hardware.
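    As a hedged sketch of one step of a multi-atlas segmentation pipeline, the label-fusion stage, the example below applies simple per-voxel majority voting to atlas label maps that are assumed to be already registered to the target; it is a generic illustration rather than the pipeline described here, and all names are assumptions.

```python
# Hedged sketch of the label-fusion step in a multi-atlas segmentation pipeline:
# once each atlas has been deformably registered to the target image, its warped
# label map "votes" per voxel and the majority label wins. Generic illustration,
# not the thesis's pipeline; names and label counts are assumptions.
import numpy as np

def majority_vote_fusion(warped_label_maps, n_labels):
    """warped_label_maps: list of integer 3D label volumes aligned to the target."""
    votes = np.zeros((n_labels,) + warped_label_maps[0].shape, dtype=np.int32)
    for labels in warped_label_maps:
        for label in range(n_labels):
            votes[label] += (labels == label)
    return np.argmax(votes, axis=0)   # per-voxel label with the most atlas votes

# Example: three toy "atlases" voting over a 2-label problem.
rng = np.random.default_rng(1)
atlases = [rng.integers(0, 2, size=(16, 16, 16)) for _ in range(3)]
fused = majority_vote_fusion(atlases, n_labels=2)
print(fused.shape, fused.min(), fused.max())
```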

    Validation Strategies Supporting Clinical Integration of Prostate Segmentation Algorithms for Magnetic Resonance Imaging

    Segmentation of the prostate in medical images is useful for prostate cancer diagnosis and therapy guidance. However, manual segmentation of the prostate is laborious and time-consuming, and subject to inter-observer variability. The focus of this thesis was on accuracy, reproducibility, and procedure time measurement for prostate segmentation on T2-weighted endorectal magnetic resonance imaging, and on assessment of the potential of a computer-assisted segmentation technique to be translated to clinical practice for prostate cancer management. We collected an image data set from prostate cancer patients with manually delineated prostate borders by one observer on all the images and by two other observers on a subset of images. We used a complementary set of error metrics to measure the different types of observed segmentation errors. We compared expert manual segmentation as well as semi-automatic and automatic segmentation approaches before and after manual editing by expert physicians. We recorded the time needed for user interaction to initialize the semi-automatic algorithm, for algorithm execution, and for manual editing as necessary. The measured errors for the algorithms compared favourably with the observed differences between manual segmentations. The measured average editing times for computer-assisted segmentation were lower than the fully manual segmentation time, and the algorithms reduced inter-observer variability compared to manual segmentation. The accuracy of the computer-assisted approaches was near to or within the range of observed variability in manual segmentation. The recorded procedure time for prostate segmentation was reduced using computer-assisted segmentation followed by manual editing, compared to the time required for fully manual segmentation.
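    As a hedged example of a boundary-based error metric that commonly complements overlap measures in such validation studies (the thesis's exact metric set may differ), the sketch below computes a symmetric mean absolute surface distance between two binary masks with SciPy; spacing values and mask names are assumptions.

```python
# Hedged sketch of a boundary-based error metric often used alongside overlap
# metrics in prostate segmentation validation: the mean absolute surface distance
# (in mm) between two binary masks. Generic illustration; the thesis's exact
# implementation may differ, and the variable names are assumptions.
import numpy as np
from scipy import ndimage

def surface_voxels(mask):
    """Boundary voxels of a binary mask: foreground voxels with a background neighbour."""
    return mask & ~ndimage.binary_erosion(mask)

def mean_absolute_surface_distance(mask_a, mask_b, spacing_mm=(1.0, 1.0, 1.0)):
    surf_a, surf_b = surface_voxels(mask_a), surface_voxels(mask_b)
    # Distance (in mm) from every voxel to the nearest surface voxel of each mask.
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing_mm)
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing_mm)
    # Symmetric average of the two directed mean surface distances.
    return 0.5 * (dist_to_a[surf_b].mean() + dist_to_b[surf_a].mean())

# Example: two offset boxes standing in for algorithm and manual prostate masks.
a = np.zeros((40, 40, 40), dtype=bool); a[10:30, 10:30, 10:30] = True
b = np.zeros((40, 40, 40), dtype=bool); b[12:32, 10:30, 10:30] = True
print(f"MAD = {mean_absolute_surface_distance(a, b, (0.5, 0.5, 0.5)):.2f} mm")
```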

    Integrated Segmentation and Interpolation of Sparse Data

    This paper addresses the two inherently related problems of segmentation and interpolation of 3D and 4D sparse data by integrating these stages in a level set framework. The method supports any spatial configuration of sets of 2D slices with arbitrary positions and orientations. We introduce a new level set scheme based on the interpolation of the level set function by radial basis functions. The proposed method is validated quantitatively and/or subjectively on artificial data and on MRI and CT scans, and is compared against the traditional sequential approach.
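    As a hedged illustration of the underlying idea only (not the paper's level set scheme), the sketch below reconstructs a dense implicit function from sparse, scattered samples by radial basis function interpolation with SciPy; the 2D toy geometry, kernel choice, and names are assumptions.

```python
# Hedged sketch of the underlying idea only: reconstructing a dense implicit
# (level set) function from values sampled at sparse, arbitrarily placed points
# via radial basis function interpolation. Not the paper's scheme, just a generic
# SciPy illustration; point placement and kernel choice are assumptions.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Sparse samples of a signed distance to a circle of radius 0.5 (2D for brevity).
rng = np.random.default_rng(0)
sample_points = rng.uniform(-1.0, 1.0, size=(200, 2))
sample_values = np.linalg.norm(sample_points, axis=1) - 0.5

# Fit an RBF interpolant to the scattered level set samples.
interpolant = RBFInterpolator(sample_points, sample_values, kernel='thin_plate_spline')

# Evaluate on a dense grid; the zero level set approximates the circle.
grid_x, grid_y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
dense_values = interpolant(np.column_stack([grid_x.ravel(), grid_y.ravel()]))
dense_values = dense_values.reshape(grid_x.shape)
print("zero-crossing voxels:", int((np.abs(dense_values) < 0.02).sum()))
```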

    Video-based infant discomfort detection
