
    Automatic segmentation of wall structures from cardiac images

    One important topic in medical image analysis is segmenting wall structures from different cardiac medical imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI). This task is typically done by radiologists either manually or semi-automatically, which is a very time-consuming process. To reduce this laborious human effort, automatic methods have become popular in this research area. In this thesis, features insensitive to data variations are explored to segment the ventricles from CT images and extract the left atrium from MR images. As applications, the segmentation results are used to facilitate cardiac disease analysis. Specifically:
    1. An automatic method is proposed to extract the ventricles from CT images by integrating surface decomposition with contour evolution techniques. The ventricles are first identified on a surface extracted from patient-specific image data, and contour evolution is then employed to refine them. The proposed method is robust to variations in ventricle shape, volume coverage, and image quality.
    2. A variational region-growing method is proposed to segment the left atrium from MR images. Because of the localized property of this formulation, the proposed method is insensitive to data variabilities that are hard to handle with globalized methods.
    3. In applications, a geometrical computational framework is proposed to estimate the myocardial mass at risk caused by stenoses. In addition, the segmentation of the left atrium is used to identify scars in post-ablation MR images.
    PhD thesis. Committee Chair: Yezzi, Anthony; Committee Co-Chair: Tannenbaum, Allen; Committee Members: Egerstedt, Magnus; Fedele, Francesco; Stillman, Arthur; Vela, Patrici
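
    A minimal sketch of the region-growing idea behind contribution 2, written as a plain intensity-based region grower on a synthetic 2D image. The thesis's variational formulation, its energy functional, and the surface-decomposition step of contribution 1 are not reproduced here; the seed location, tolerance, and synthetic image are illustrative assumptions.

```python
# Simplified, non-variational region growing from a seed point.
from collections import deque

import numpy as np


def region_grow(image, seed, tol=0.15):
    """Grow a region from `seed`, accepting 4-neighbours whose intensity
    stays within `tol` of the running region mean."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    region_sum, region_n = float(image[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(image[ny, nx] - region_sum / region_n) <= tol:
                    mask[ny, nx] = True
                    region_sum += float(image[ny, nx])
                    region_n += 1
                    queue.append((ny, nx))
    return mask


if __name__ == "__main__":
    # Synthetic "chamber": a bright disc on a darker, slightly noisy background.
    yy, xx = np.mgrid[:128, :128]
    img = 0.2 + 0.6 * ((yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2)
    img = img + 0.02 * np.random.default_rng(0).standard_normal(img.shape)
    seg = region_grow(img, seed=(64, 64), tol=0.15)
    print("segmented pixels:", int(seg.sum()))
```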

    Coronary motion modelling for CTA to X-ray angiography registration

    Deep Learning Cross-Phase Style Transfer for Motion Artifact Correction in Coronary Computed Tomography Angiography

    Motion artifacts may occur in coronary computed tomography angiography (CCTA) due to the heartbeat and impede the clinician's diagnosis of coronary arterial disease. Motion artifact correction of the coronary artery is therefore required to quantify the risk of disease more accurately. We present a novel method based on deep learning for motion artifact correction in CCTA. Because an image of the coronary artery without motion (the ground-truth data required in supervised deep learning) is medically unattainable, we apply a style transfer method to 2D image patches cropped from full-phase 4D computed tomography (CT) to synthesize these images. We then train a convolutional neural network (CNN) for motion artifact correction using this synthetic ground truth (SynGT). During testing, the motion-corrected 2D image patches output by the trained network are reinserted into the 3D CT volume with volumetric interpolation. The proposed method is evaluated using both phantom and clinical data. A phantom study demonstrates quantitative performance comparable to other methods while outperforming them in computation time. For clinical data, a quantitative analysis based on metric measurements confirms the correction of motion artifacts. Moreover, an observer study finds that with the proposed method, motion artifacts are markedly reduced and boundaries of the coronary artery are much sharper, with strong inter-observer agreement (κ = 0.78). Finally, evaluations using commercial software on the original and resulting CT volumes reveal a considerable increase in tracked coronary artery length.
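
    A minimal sketch of the patch-based correction workflow described above: crop 2D patches around candidate coronary points from a 3D CT volume, pass them through a placeholder correction model, and write them back with overlap averaging. The `correct_patch` stub, the patch size, and the centreline samples are illustrative assumptions and stand in for the trained CNN and the volumetric interpolation used in the paper.

```python
import numpy as np


def correct_patch(patch):
    # Placeholder for the trained motion-correction CNN (identity here).
    return patch


def correct_volume(volume, points, half=16):
    """Correct 2D patches centred at (z, y, x) points and reinsert them,
    averaging wherever patches overlap."""
    corrected = volume.astype(np.float32).copy()
    acc = np.zeros_like(corrected)
    weight = np.zeros_like(corrected)
    for z, y, x in points:
        ys, xs = slice(y - half, y + half), slice(x - half, x + half)
        patch = volume[z, ys, xs].astype(np.float32)
        acc[z, ys, xs] += correct_patch(patch)
        weight[z, ys, xs] += 1.0
    touched = weight > 0
    corrected[touched] = acc[touched] / weight[touched]
    return corrected


if __name__ == "__main__":
    vol = np.random.default_rng(0).normal(size=(32, 128, 128)).astype(np.float32)
    pts = [(16, 64, 64), (16, 70, 66)]  # hypothetical centreline samples
    out = correct_volume(vol, pts)
    print(out.shape, out.dtype)
```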

    Medical image analysis methods in MR/CT-imaged acute-subacute ischemic stroke lesion: Segmentation, prediction and insights into dynamic evolution simulation models. A critical appraisal

    Over the last 15 years, basic thresholding techniques in combination with standard statistical correlation-based data analysis tools have been widely used to investigate different aspects of the evolution of acute or subacute to late-stage ischemic stroke in both human and animal data. Yet, a wave of biology-dependent and imaging-dependent issues remains untackled, pointing towards the key question: "how does an ischemic stroke evolve?" Paving the way for potential answers to this question, both magnetic resonance imaging (MRI) and computed tomography (CT) images have been used to visualize the lesion extent, either with or without spatial distinction between dead and salvageable tissue. Combining diffusion and perfusion imaging modalities may provide the possibility of predicting further tissue recovery or eventual necrosis. Going beyond these basic thresholding techniques, in this critical appraisal we explore different semi-automatic or fully automatic 2D/3D medical image analysis methods and mathematical models applied to human, animal (rats/rodents) and/or synthetic ischemic stroke data to tackle one of the following three problems: (1) segmentation of infarcted and/or salvageable (also called penumbral) tissue, (2) prediction of final ischemic tissue fate (death or recovery) and (3) dynamic simulation of the evolution of the lesion core and/or penumbra. To highlight the key features of the reviewed segmentation and prediction methods, we propose a common categorization pattern. We also emphasize key aspects of the methods, such as the imaging modalities required to build and test the presented approach, the number of patients/animals or synthetic samples, the use of external user interaction, and the methods of assessment (clinical or imaging-based). Furthermore, we investigate how key difficulties posed by the evolution of stroke, such as swelling or reperfusion, were detected (or not) by each method. In the absence of any imaging-based macroscopic dynamic model applied to ischemic stroke, we offer insights into relevant microscopic dynamic models simulating the evolution of brain ischemia, in the hope of furthering promising and challenging 4D imaging-based dynamic models. By depicting the major pitfalls and the advanced aspects of the different reviewed methods, we present an overall critique of their performance and conclude our discussion by suggesting recommendations for future research work focusing on one or more of the three addressed problems.
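
    A minimal sketch of the basic thresholding baseline that the appraisal contrasts against: an ischemic-core mask from an ADC map and a penumbra mask from a Tmax map. The thresholds (ADC below 620 x 10^-6 mm^2/s, Tmax above 6 s) are commonly reported literature values used purely for illustration; the reviewed segmentation, prediction and simulation methods go well beyond such rules.

```python
import numpy as np


def core_and_penumbra(adc, tmax, adc_thr=620.0, tmax_thr=6.0):
    """Return (core, penumbra) boolean masks from ADC and Tmax maps."""
    core = adc < adc_thr
    hypoperfused = tmax > tmax_thr
    penumbra = hypoperfused & ~core  # mismatch region
    return core, penumbra


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    adc = rng.normal(800.0, 150.0, size=(64, 64))   # synthetic ADC map
    tmax = rng.gamma(2.0, 2.0, size=(64, 64))       # synthetic Tmax map (s)
    core, penumbra = core_and_penumbra(adc, tmax)
    print("core voxels:", int(core.sum()), "penumbra voxels:", int(penumbra.sum()))
```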

    Segmentation of pelvic structures from preoperative images for surgical planning and guidance

    Prostate cancer is one of the most frequently diagnosed malignancies globally and the second leading cause of cancer-related mortality in males in the developed world. In recent decades, many techniques have been proposed for prostate cancer diagnosis and treatment. With the development of imaging technologies such as CT and MRI, image-guided procedures have become increasingly important as a means to improve clinical outcomes. Analysis of the preoperative images and construction of 3D models prior to treatment would help doctors to better localize and visualize the structures of interest, plan the procedure, diagnose disease and guide the surgery or therapy. This requires efficient and robust medical image analysis and segmentation technologies to be developed. The thesis mainly focuses on the development of segmentation techniques in pelvic MRI for image-guided robotic-assisted laparoscopic radical prostatectomy and external-beam radiation therapy. A fully automated multi-atlas framework is proposed for bony pelvis segmentation in MRI, using the guidance of an MRI AE-SDM. With the guidance of the AE-SDM, a multi-atlas segmentation algorithm is used to delineate the bony pelvis in a new MRI where no CT is available. The proposed technique outperforms state-of-the-art algorithms for MRI bony pelvis segmentation. Using the pelvis SDM and its segmented surface, an accurate 3D pelvimetry system is designed and implemented to measure a comprehensive set of pelvic geometric parameters for examining the relationship between these parameters and the difficulty of robotic-assisted laparoscopic radical prostatectomy. This system can be used in both a manual and an automated manner through a user-friendly interface. A fully automated and robust multi-atlas based segmentation has also been developed to delineate the prostate in diagnostic MR scans, which show large variation in both the intensity and the shape of the prostate. Two image analysis techniques are proposed: patch-based label fusion with local appearance-specific atlases, and multi-atlas propagation via a manifold graph on a database of both labeled and unlabeled images when limited labeled atlases are available. The proposed techniques achieve more robust and accurate segmentation results than other multi-atlas based methods. The seminal vesicles are also an interesting structure for therapy planning, particularly for external-beam radiation therapy. As existing methods fail for the very onerous task of segmenting the seminal vesicles, a multi-atlas learning framework via random decision forests with graph cuts refinement has further been proposed to solve this difficult problem. Motivated by the performance of this technique, I further extend the multi-atlas learning to segment the prostate fully automatically using multispectral (T1- and T2-weighted) MR images via hybrid random forest (RF) classifiers and a multi-image graph cuts technique. The proposed method compares favorably to the previously proposed multi-atlas based prostate segmentation. The work in this thesis covers different techniques for pelvic image segmentation in MRI. These techniques have been continually developed and refined, and their application to different specific problems shows ever more promising results.
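
    A minimal sketch of the basic multi-atlas fusion step, assuming the atlas label maps have already been registered (propagated) into the target image space: fuse them by majority voting. The thesis's patch-based, appearance-specific label fusion and manifold-graph propagation are more elaborate; this only illustrates the fusion idea on synthetic labels.

```python
import numpy as np


def majority_vote(propagated_labels):
    """Fuse a list of integer label volumes (already in target space)
    by per-voxel majority voting."""
    stacked = np.stack(propagated_labels, axis=0)           # (n_atlases, ...)
    n_classes = int(stacked.max()) + 1
    votes = np.zeros((n_classes,) + stacked.shape[1:], dtype=np.int32)
    for c in range(n_classes):
        votes[c] = (stacked == c).sum(axis=0)
    return votes.argmax(axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    atlases = [rng.integers(0, 3, size=(8, 8)) for _ in range(5)]
    fused = majority_vote(atlases)
    print(fused.shape, np.unique(fused))
```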

    Automated recognition of lung diseases in CT images based on the optimum-path forest classifier

    The World Health Organization estimated that around 300 million people have asthma and 210 million people are affected by Chronic Obstructive Pulmonary Disease (COPD). It is also estimated that the number of deaths from COPD increased by 30% in 2015 and that COPD will become the third major cause of death worldwide by 2030. These statistics on lung diseases get worse when one considers fibrosis, calcifications and other diseases. For the public health system, early and accurate diagnosis of any pulmonary disease is mandatory for effective treatment and the prevention of further deaths. In this sense, this work consists of using information from lung images to identify and classify lung diseases. Two steps are required to achieve these goals: automatic extraction of representative image features of the lungs and recognition of the possible disease using a computational classifier. For the first step, this work proposes an approach that combines the Spatial Interdependence Matrix (SIM) and Visual Information Fidelity (VIF). Concerning the second step, we propose to employ a Gaussian-based distance together with the optimum-path forest (OPF) classifier to classify the lungs under study as normal, with fibrosis, or affected by COPD. Moreover, to confirm the robustness of OPF in this classification problem, we also considered Support Vector Machines and a Multilayer Perceptron Neural Network for comparison purposes. Overall, the results confirmed the good performance of the OPF configured with the Gaussian distance when applied to SIM- and VIF-based features. The performance scores achieved by the OPF classifier were as follows: average accuracy of 98.2%, total processing time of 117 microseconds on a common personal laptop, and F-score of 95.2% for the three classification classes. These results show that OPF is a very competitive classifier and is suitable for lung disease classification.
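
    A minimal sketch of a Gaussian-based distance of the kind mentioned above, plugged into a simple nearest-prototype rule. The exact distance used in the paper and the full optimum-path forest training and classification procedure are not reproduced; the distance form, the sigma value, and the synthetic prototypes are assumptions that only illustrate how such a distance can replace a Euclidean one in a prototype-based classifier.

```python
import numpy as np


def gaussian_distance(a, b, sigma=1.0):
    """One plausible Gaussian-based distance: 1 - exp(-||a - b||^2 / (2*sigma^2))."""
    d2 = np.sum((a - b) ** 2, axis=-1)
    return 1.0 - np.exp(-d2 / (2.0 * sigma ** 2))


def nearest_prototype(x, prototypes, labels, sigma=1.0):
    """Assign the label of the prototype closest to x under the Gaussian distance."""
    dists = gaussian_distance(prototypes, x, sigma=sigma)
    return labels[int(np.argmin(dists))]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    protos = np.vstack([rng.normal(0, 1, (10, 4)), rng.normal(3, 1, (10, 4))])
    labels = np.array([0] * 10 + [1] * 10)   # e.g. normal vs. fibrosis
    query = rng.normal(3, 1, 4)
    print("predicted class:", nearest_prototype(query, protos, labels))
```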

    Rapid Segmentation Techniques for Cardiac and Neuroimage Analysis

    Recent technological advances in medical imaging have allowed for the quick acquisition of highly resolved data to aid in the diagnosis and characterization of diseases or to guide interventions. In order to be integrated into a clinical workflow, accurate and robust methods of analysis must be developed which manage this increase in data. Recent improvements in inexpensive commercially available graphics hardware and General-Purpose Programming on Graphics Processing Units (GPGPU) have allowed many large-scale data analysis problems to be addressed in meaningful time and will continue to do so as parallel computing technology improves. In this thesis we propose methods to tackle two clinically relevant image segmentation problems: a user-guided segmentation of myocardial scar from Late-Enhancement Magnetic Resonance Images (LE-MRI) and a multi-atlas segmentation pipeline to automatically segment and partition brain tissue from multi-channel MRI. Both methods are based on recent advances in computer vision, in particular max-flow optimization that aims to solve the segmentation problem in continuous space. This allows (approximately) globally optimal solvers to be employed in multi-region segmentation problems, without the particular drawbacks of their discrete counterparts, graph cuts, which typically present metrication artefacts. Max-flow solvers are generally able to produce robust results, but are known for being computationally expensive, especially with large datasets such as volume images. Additionally, we propose two new deformable registration methods based on Gauss-Newton optimization and smooth the resulting deformation fields via total-variation regularization to guarantee that the problem is mathematically well-posed. We compare the performance of these two methods against four highly ranked and well-known deformable registration methods on four publicly available databases and demonstrate highly accurate performance with low run times. The best performing variant is subsequently used in a multi-atlas segmentation pipeline for the segmentation of brain tissue and facilitates fast run times for this computationally expensive approach. All proposed methods are implemented using GPGPU for a substantial increase in computational performance and thus facilitate deployment into clinical workflows. We evaluate all proposed algorithms in terms of run times, accuracy, repeatability and errors arising from user interactions, and we demonstrate that these methods are able to outperform established methods. The presented approaches demonstrate high performance compared with established methods in terms of accuracy and repeatability while largely reducing run times due to the use of GPU hardware.
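
    A minimal sketch of total-variation (TV) regularization, here applied to a noisy 2D field via gradient descent on a smoothed ROF-type energy. It illustrates the kind of TV term used to regularize the deformation fields mentioned above; the continuous max-flow solvers, Gauss-Newton registration, and GPGPU implementation of the thesis are not reproduced, and the step size, weight, and iteration count are illustrative assumptions.

```python
import numpy as np


def tv_denoise(f, lam=0.1, step=0.1, iters=200, eps=1e-3):
    """Minimize 0.5*||u - f||^2 + lam * TV_eps(u) by gradient descent,
    using a smoothed TV term to keep the gradient well defined."""
    u = f.copy()
    for _ in range(iters):
        gx = np.diff(u, axis=1, append=u[:, -1:])        # forward differences
        gy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
        px, py = gx / mag, gy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - f) - lam * div)                # descent step
    return u


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.zeros((64, 64))
    clean[16:48, 16:48] = 1.0
    noisy = clean + 0.3 * rng.standard_normal(clean.shape)
    print("mean abs error after TV smoothing:",
          float(np.abs(tv_denoise(noisy) - clean).mean()))
```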

    Machine Learning towards General Medical Image Segmentation

    The quality of patient care associated with diagnostic radiology is proportionate to a physician's workload. Segmentation is a fundamental limiting precursor to diagnostic and therapeutic procedures. Advances in machine learning aim to increase diagnostic efficiency and to replace single applications with generalized algorithms. We approached segmentation as a multitask shape regression problem, simultaneously predicting coordinates on an object's contour while jointly capturing global shape information. Shape regression models the inherent correlations between contour points to recover ambiguous boundaries that are not supported by clear edges or region homogeneity. Its capabilities were investigated using multi-output support vector regression (MSVR) on head and neck (HaN) CT images. Subsequently, we incorporated multiplane and multimodality spinal images and presented the first deep learning multiapplication framework for shape regression, the holistic multitask regression network (HMR-Net). The performance of MSVR and HMR-Net was comparable or superior to state-of-the-art algorithms. Multiapplication frameworks bridge technical knowledge gaps and increase workflow efficiency.
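
    A minimal sketch of framing segmentation as shape regression: predict the (x, y) coordinates of landmark points on a contour from image-derived features. scikit-learn's MultiOutputRegressor fits one SVR per output and therefore does not capture the joint output correlations that true MSVR or HMR-Net exploit; the synthetic features and linear ground truth are assumptions that only illustrate the problem setup.

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_samples, n_features, n_landmarks = 200, 32, 10

# Synthetic "image features" and contour coordinates (x1, y1, ..., xK, yK).
X = rng.standard_normal((n_samples, n_features))
W = rng.standard_normal((n_features, 2 * n_landmarks))
Y = X @ W + 0.1 * rng.standard_normal((n_samples, 2 * n_landmarks))

# One independent RBF-kernel SVR per output coordinate.
model = MultiOutputRegressor(SVR(kernel="rbf", C=1.0))
model.fit(X[:150], Y[:150])
pred = model.predict(X[150:])
print("predicted shape matrix:", pred.shape)  # (50, 2 * n_landmarks)
```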