10 research outputs found

    Automatic Heart Region Segmentation from Cardiac Computed Tomography Images Using a Gradient-Assisted Localized Active Contour Model

    Doctoral dissertation, Graduate School of Seoul National University, Department of Electrical and Computer Engineering, February 2015 (advisor: Yeong-Gil Shin). The heart is one of the most important human organs and is composed of complex structures. Computed tomography angiography (CTA), magnetic resonance imaging (MRI), and single photon emission computed tomography are widely used, non-invasive cardiac imaging modalities. Compared with other modalities, CTA can provide more detailed anatomical information about the heart chambers, vessels, and coronary arteries owing to its higher spatial resolution. To obtain important morphological information about the heart, whole-heart segmentation is necessary, and it can be used for clinical diagnosis. In this dissertation, we propose a novel framework to segment the four chambers of the heart automatically. First, the whole heart is coarsely extracted. It is then separated into left and right parts using a geometric analysis based on anatomical information and a subsequent power watershed. Next, the proposed gradient-assisted localized active contour model (GLACM) accurately refines the segmentation of the left and right sides of the heart. Finally, each side is separated into atrium and ventricle by minimizing the proposed split energy function, which determines the boundary between atrium and ventricle from the shape and intensity of the heart. The main challenge of heart segmentation is to extract the four chambers from cardiac CTA, in which the edges or separators between chambers are weak. To enhance accuracy, we combine region-based and edge-based information, which makes the segmentation robust in heterogeneous regions. Model-based methods, which require a large amount of training data and a proper template model, have been widely used for heart segmentation. Such data are difficult to build, since the training data must describe heart regions precisely and must be numerous to produce accurate segmentation results. Moreover, the training data must be represented by salient features, which are generated manually and must correspond with one another. In the proposed method, however, neither training data nor a template model is necessary. Instead, we use edge, intensity, and shape information from the cardiac CTA for each chamber segmentation; the intensity information of the CTA substitutes for the shape information of a template model. In addition, we devised an adaptive radius function and a Gaussian-pyramid edge map for GLACM to exploit edge information effectively and improve segmentation accuracy compared with the original localizing region-based active contour model (LACM); a brief sketch of the edge map follows the contents listing below. Since the radius used by LACM affects overall segmentation performance, we propose an energy function that adapts the radius depending on whether the region is homogeneous or heterogeneous. We also propose a split energy function that segments the four chambers of the heart in cardiac CT images and detects the valve plane between atrium and ventricle. In experiments using twenty clinical datasets, the proposed method identified the four chambers accurately and efficiently.
We also demonstrated that this approach can assist cardiologists in clinical investigation and functional analysis.
Contents:
Chapter 1 Introduction: Background and Motivation; Dissertation Goal; Main Contributions; Organization of the Dissertation.
Chapter 2 Related Works: Medical Image Segmentation (Classic Methods; Variational Methods; Image Features of the Curve; Combinatorial Methods; Difficulty of Segmentation); Heart Segmentation (Non-Model-Based Segmentation; Unstatistical Model-Based Segmentation; Statistical Model-Based Segmentation).
Chapter 3 Gradient-Assisted Localized Active Contour Model: LACM; Gaussian-Pyramid Edge Map; Adaptive Radius Function; LACM with Gaussian-Pyramid Edge Map and Adaptive Radius Function.
Chapter 4 Segmentation of Four Chambers of Heart: Overview; Segmentation of Whole Heart; Separation of Left and Right Sides of Heart (Extraction of Candidate Regions of LV and RV; Detection of Left and Right Sides of Heart); Segmentation of Left and Right Sides of Heart; Separation of Atrium and Ventricle from Heart (Calculation of Principal Axes of Left and Right Sides of Heart; Detection of Separation Plane Using Split Energy Function).
Chapter 5 Experiments: Performance Evaluation; Comparison with Conventional Method; Parametric Study; Computational Performance.
Chapter 6 Conclusion.
Bibliography.
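    A minimal sketch of one ingredient described in the abstract above, the Gaussian-pyramid edge map: gradient magnitudes computed at several pyramid scales are upsampled and combined so that coarse edges reinforce fine ones. The function name, number of levels and smoothing sigma are illustrative assumptions, not the dissertation's implementation.

        import numpy as np
        from scipy import ndimage

        def gaussian_pyramid_edge_map(image, levels=3, sigma=1.0):
            # Combine gradient magnitudes from several Gaussian-pyramid scales
            # into one edge map at the original resolution (illustrative only).
            edge_map = np.zeros(image.shape, dtype=float)
            current = image.astype(float)
            for _ in range(levels):
                gx = ndimage.sobel(current, axis=0)
                gy = ndimage.sobel(current, axis=1)
                magnitude = np.hypot(gx, gy)
                # upsample the coarse edge response back to the input resolution
                zoom = [s / c for s, c in zip(image.shape, current.shape)]
                edge_map += ndimage.zoom(magnitude, zoom, order=1)
                # smooth and downsample for the next (coarser) pyramid level
                current = ndimage.gaussian_filter(current, sigma)[::2, ::2]
            return edge_map / levels

    The contour evolution itself would then weight the localized region statistics by such an edge map; that part is not reproduced here.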

    Segmentation of heart chambers in 2-D heart ultrasounds with deep learning

    Echocardiography is a non-invasive diagnostic imaging technique in which ultrasound waves are used to obtain an image or image sequence of the structure and function of the heart. The segmentation of the heart chambers on ultrasound images is a task usually performed by experienced cardiologists, who delineate and extract the shape of both atria and ventricles to obtain important indices of a patient's heart condition. However, this task is hard to perform accurately because of the poor image quality caused by the equipment and acquisition techniques and because of the variability across patients and pathologies. Medical image processing is therefore needed in this setting to avoid inaccuracy and obtain reliable results. Over the last decade, several studies have shown that deep learning techniques are a possible solution to this problem, obtaining good results in automatic segmentation. The major problem with deep learning in medical image processing is the lack of available data to train and test these architectures. In this work we trained, validated, and tested a convolutional neural network based on the U-Net architecture for 2D echocardiogram chamber segmentation. The network was trained on the B-mode 4-chamber apical view Echogan dataset with data augmentation techniques applied. The novelty of this work lies in the hyperparameter and architecture optimizations that reduce computation time while maintaining significant training and testing accuracies.
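    As a rough illustration of the kind of architecture this work builds on, the sketch below defines a deliberately small U-Net-style encoder/decoder in Keras. The depth, filter counts, input size and number of output classes are placeholders, not the paper's optimized hyperparameters.

        import tensorflow as tf
        from tensorflow.keras import layers

        def tiny_unet(input_shape=(256, 256, 1), n_classes=4, base=16):
            # Two-level encoder/decoder with skip connections (U-Net pattern).
            inp = layers.Input(input_shape)
            c1 = layers.Conv2D(base, 3, padding="same", activation="relu")(inp)
            p1 = layers.MaxPooling2D()(c1)
            c2 = layers.Conv2D(base * 2, 3, padding="same", activation="relu")(p1)
            p2 = layers.MaxPooling2D()(c2)
            b = layers.Conv2D(base * 4, 3, padding="same", activation="relu")(p2)
            u2 = layers.Conv2DTranspose(base * 2, 2, strides=2, padding="same")(b)
            c3 = layers.Conv2D(base * 2, 3, padding="same", activation="relu")(
                layers.concatenate([u2, c2]))
            u1 = layers.Conv2DTranspose(base, 2, strides=2, padding="same")(c3)
            c4 = layers.Conv2D(base, 3, padding="same", activation="relu")(
                layers.concatenate([u1, c1]))
            out = layers.Conv2D(n_classes, 1, activation="softmax")(c4)
            return tf.keras.Model(inp, out)

    A real pipeline would train such a network with a pixel-wise cross-entropy or Dice loss on the augmented dataset; those details are omitted.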

    PMNet: a multi-branch and multi-scale semantic segmentation approach to water extraction from high-resolution remote sensing images with edge-cloud computing

    In the field of remote sensing image interpretation, automatically extracting water body information from high-resolution images is a key task. However, faced with the complex multi-scale features of high-resolution remote sensing images, traditional methods and basic deep convolutional neural networks struggle to capture the global spatial relationships of the target objects, so the extracted water bodies are often incomplete, with rough shapes and blurred edges. Meanwhile, processing massive image data usually leads to computational resource overload and inefficiency. Fortunately, the local data processing capability of edge computing, combined with the powerful computational resources of cloud centres, can provide timely and efficient computation and storage for high-resolution remote sensing image segmentation. In this regard, this paper proposes PMNet, a lightweight deep learning network for edge-cloud collaboration, which utilises a pipelined multi-step aggregation method to capture image information at different scales and to model the relationships between distant pixels along the horizontal and vertical spatial dimensions. It also adopts multiple decoding branches in the decoding stage instead of the traditional single branch, improving accuracy while reducing the consumption of system resources. The model obtained F1-scores of 90.22 and 88.57 on the Landsat-8 and GID remote sensing image datasets with low model complexity, outperforming other semantic segmentation models and highlighting the potential of mobile edge computing for processing massive high-resolution remote sensing image data.
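    The abstract reports pixel-wise F1-scores on the two datasets; a minimal sketch of that metric for a binary water mask is shown below (the PMNet architecture itself is not reproduced, and the function name is illustrative).

        import numpy as np

        def f1_water(pred_mask, truth_mask, eps=1e-8):
            # Pixel-wise F1 for a binary water mask; for binary segmentation
            # this is equivalent to the Dice coefficient.
            pred, truth = pred_mask.astype(bool), truth_mask.astype(bool)
            tp = np.logical_and(pred, truth).sum()
            fp = np.logical_and(pred, ~truth).sum()
            fn = np.logical_and(~pred, truth).sum()
            precision = tp / (tp + fp + eps)
            recall = tp / (tp + fn + eps)
            return 100.0 * 2.0 * precision * recall / (precision + recall + eps)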

    Characterising Shape Variation in the Human Right Ventricle Using Statistical Shape Analysis: Preliminary Outcomes and Potential for Predicting Hypertension in a Clinical Setting

    Variations in the shape of the human right ventricle (RV) have previously been shown to be predictive of heart function and long-term prognosis in Pulmonary Hypertension (PH), a deadly disease characterised by high blood pressure in the pulmonary arteries. The extent to which ventricular shape is also affected by non-pathological features such as sex, body mass index (BMI) and age is explored in this thesis. If fundamental differences in the shape of a structurally normal RV exist, these might also impact the success of a predictive model. This thesis evaluates the extent to which non-pathological features affect the shape of the RV and determines the best ways, in terms of procedure and analysis, to adapt the model to consistently predict PH. It also identifies areas where the statistical shape analysis procedure is robust, and considers the extent to which specific non-pathological characteristics impact the diagnostic potential of the statistical shape model. Finally, recommendations are made on the next steps in developing a classification procedure for PH. The dataset was composed of clinically obtained cardiovascular magnetic resonance (CMR) images from two independent sources: the University of Pittsburgh Medical Center and Newcastle University. Shape change is assessed using a 3D statistical shape analysis technique, which topologically maps heart meshes through a harmonic mapping approach to create a unique shape function for each shape. Proper Orthogonal Decomposition (POD) was applied to the complete set of shape functions in order to determine and rank a set of shape features (i.e. modes and corresponding coefficients from the decomposition). The MRI scanning protocol produced the most significant difference in shape: a shape mode associated with detail at the RV apex and with ventricular length from apex to base correlated strongly with the MRI sequence used to record each subject. Qualitatively, a protocol which skipped slices produced a shorter RV with less detail at the apex. Decomposition by sex, age and BMI also yields distinct RV shape descriptors that correspond to anatomically meaningful features. The shape features are shown to predict the presence of PH. The predictive model can be improved by including BMI as a factor, but these improvements are mainly concentrated in the identification of healthy subjects.
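    A minimal sketch of the decomposition step: given one flattened shape function per subject, POD of the mean-centred data via the SVD yields orthonormal shape modes and per-subject coefficients. The function name and the number of retained modes are illustrative; the harmonic mapping that produces the shape functions is not shown.

        import numpy as np

        def pod_modes(shape_functions, n_modes=10):
            # shape_functions: one flattened shape function per row (subjects x features).
            X = np.asarray(shape_functions, dtype=float)
            mean_shape = X.mean(axis=0)
            # SVD of the mean-centred data gives the POD (equivalently, PCA) basis.
            U, S, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
            modes = Vt[:n_modes]                     # orthonormal shape modes
            coeffs = U[:, :n_modes] * S[:n_modes]    # per-subject mode coefficients
            return mean_shape, modes, coeffs

    The coefficients can then be correlated with covariates such as sex, age, BMI or scanning protocol, or fed to a classifier for PH prediction.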

    Artificial Intelligence in Cardiac Magnetic Resonance Imaging to Predict Prognosis and Treatment Response

    Background: Pulmonary arterial hypertension (PAH) is a serious disease of the heart and lungs. Its impact on patients can be severe, including limitation of day-to-day activities and high mortality. The diagnosis, treatment and monitoring of PAH are challenging, and there is a need for tools that can aid clinical decision-making to optimise patient outcomes. Cardiac MRI (CMR) provides both qualitative and quantitative information about cardiac function and is an important method for evaluating the severity of PAH. The application of machine learning (ML) tools is of growing interest in medical imaging. ML has the potential to automate complex and repetitive tasks, including the rapid segmentation of anatomical structures on images and the extraction of clinically useful information.
Aims: This thesis proposes the combination of CMR with two different ML tools to predict prognosis and treatment response in PAH. The first ML tool involves the automated measurement of different cardiac parameters and assesses their utility in predicting prognosis and treatment response. The second ML tool involves the extraction of imaging features directly, without the need for segmentation, to predict the risk of mortality.
My Contribution: The ML models in this thesis were developed at the University of Sheffield in collaboration with Leiden University. Sheffield is a centre of excellence in PAH treatment thanks to the Sheffield Pulmonary Vascular Disease Unit, which is one of the largest internationally. Each year, more than 700 PAH patients undergo CMR for diagnosis and monitoring. Additionally, each newly diagnosed patient has accompanying in-depth clinical phenotypic data, including right heart catheterisation, exercise and pulmonary function tests, and quality of life assessment. During my research, I created and curated a dataset combining imaging and time-matched clinical data. I identified eligible CMR scans, landmarked and contoured cardiac chambers on multiple sequences and organised the collaboration with computer scientists at Leiden and Sheffield. I arranged image anonymisation, storage and transfer and advised computer scientists on the clinical relevance of CMR images. I performed quality control on ML analyses, collated their results, and analysed the data within the clinical context. I have written all chapters in this thesis and clarified the roles of my co-authors at the end of each chapter.
Thesis Outline: Chapter 1 provided an overview of the growing role of CMR in the diagnosis and evaluation of PAH. Chapter 2 summarised the prognostic value of CMR measurements in the prediction of clinical worsening and mortality in PAH patients. Chapter 3 illustrated the rapid expansion of research using AI approaches to automate CMR measurements; the quality of the existing literature was reviewed, significant shortcomings in the transparency of studies were identified and solutions were recommended. Chapter 4 showed our experience in developing, validating and testing a fully automatic CMR segmentation tool. Our tool was developed on one of the largest multi-vendor, multi-centre and multi-pathology reported datasets, and included a large group of patients with right heart disease. We implemented the lessons learned in Chapter 3 and provided extensive descriptions of our datasets, ML model and performance. Our model showed excellent reliability, generalisability, agreement with CMR experts and correlation with invasive haemodynamics. Chapter 5 demonstrated that the automatic CMR measurements allowed assessment of patient-orientated outcomes and prediction of mortality. Thresholds of changes in CMR metrics were identified that could inform clinical decisions in the monitoring of PAH patients. Chapter 6 showed promising results of an ML tool to extract prognostic CMR features with incremental value compared with clinical risk scores and volumetric CMR measurements. Finally, Chapter 7 showed that myocardial T1 mapping could potentially add diagnostic and prognostic value in PAH.
Impact and Future Direction: In addition to the known advantages of ML for providing rapid results with minimal human involvement, the ML tools developed in this thesis allow visualisation of outcomes and are transparent to the human assessor. ML applications to automate the measurement of CMR metrics and extract prognostic imaging features have the potential to add clinical value by (i) streamlining prognostication, (ii) informing treatment selection, (iii) assisting the monitoring of treatment response and (iv) ultimately improving clinical decision-making and patient outcomes. Additionally, these tools could point to new CMR end-points for clinical trials, accelerating the development of new treatments for PAH. ML will likely elevate the role of CMR as a powerful prognostic modality in the years to come. Looking ahead, I hope to combine multi-source clinical, imaging and patient-orientated data from several ML tools into a single package to facilitate the assessment of cardiovascular disease.

    Generalizable fully automated multi-label segmentation of four-chamber view echocardiograms based on deep convolutional adversarial networks.

    A major issue in translating artificial intelligence platforms for automatic segmentation of echocardiograms to the clinic is their generalizability. The present study introduces and verifies a novel, generalizable and efficient fully automatic multi-label segmentation method for four-chamber view echocardiograms based on deep fully convolutional networks (FCNs) and adversarial training. For the first time, we used generative adversarial networks for pixel classification training, a method not currently used for cardiac imaging, to overcome the generalization problem. The method's performance was validated against manual segmentations as the ground truth. Furthermore, to verify our method's generalizability in comparison with other existing techniques, we compared its performance with a state-of-the-art method on our dataset and on an independent dataset of 450 patients from the CAMUS (cardiac acquisitions for multi-structure ultrasound segmentation) challenge. On our test dataset, automatic segmentation of all four chambers achieved Dice metrics of 92.1%, 86.3%, 89.6% and 91.4% for LV, RV, LA and RA, respectively. Correlations between automatic and manual LV volumes were 0.94 and 0.93 for end-diastolic and end-systolic volume, respectively. Excellent agreement with the chambers' reference contours and significant improvement over previous FCN-based methods suggest that generative adversarial networks for pixel classification training can effectively produce generalizable fully automatic FCN-based networks for four-chamber segmentation of echocardiograms, even with a limited amount of training data.
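    The reported per-chamber Dice metrics can be computed as in the short sketch below; the label encoding of the multi-label mask is an assumption.

        import numpy as np

        def dice_per_chamber(pred, truth, labels=(1, 2, 3, 4)):
            # Dice similarity (%) for each chamber label in a multi-label mask,
            # e.g. 1=LV, 2=RV, 3=LA, 4=RA (label assignment is hypothetical).
            scores = {}
            for label in labels:
                p, t = (pred == label), (truth == label)
                intersection = np.logical_and(p, t).sum()
                scores[label] = 100.0 * 2.0 * intersection / (p.sum() + t.sum() + 1e-8)
            return scores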

    Fully automated quantification of cardiac chamber and function assessment in 2-D echocardiography: clinical feasibility of deep learning-based algorithms

    We aimed to compare the segmentation performance of the current prominent deep learning (DL) algorithms with ground-truth segmentations and to validate the reproducibility of the manually created 2D echocardiographic four-cardiac-chamber ground-truth annotation. Recently emerged DL-based fully automated chamber segmentation and function assessment methods have shown great potential for future application in aiding image acquisition, quantification, and diagnostic suggestion. However, the performances of current DL algorithms have not previously been compared with one another. In addition, the reproducibility of the ground-truth annotations on which these algorithms are based has not yet been fully validated. We retrospectively enrolled 500 consecutive patients who underwent transthoracic echocardiography (TTE) from December 2019 to December 2020. Simple U-net, Res-U-net, and Dense-U-net algorithms were compared for segmentation performance, and clinical indices such as left atrial volume (LAV), left ventricular end-diastolic volume (LVEDV), left ventricular end-systolic volume (LVESV), LV mass, and ejection fraction (EF) were evaluated. Inter- and intra-observer variability analyses were performed by two expert sonographers on a randomly selected echocardiographic view (apical 2-chamber, apical 4-chamber, or parasternal short-axis) in 100 patients. The overall performance of all DL methods was excellent (average Dice similarity coefficient (DSC) 0.91 to 0.95 and average intersection over union (IoU) 0.83 to 0.90), with the exception of the LV wall area on the PSAX view (average DSC 0.83, IoU 0.72). In addition, there were no significant differences in clinical indices between ground-truth and automated DL measurements. In the variability analysis, overall intra-observer reproducibility was excellent: LAV (ICC = 0.995), LVEDV (ICC = 0.996), LVESV (ICC = 0.997), LV mass (ICC = 0.991) and EF (ICC = 0.984). Inter-observer reproducibility was slightly lower than intra-observer agreement: LAV (ICC = 0.976), LVEDV (ICC = 0.982), LVESV (ICC = 0.970), LV mass (ICC = 0.971), and EF (ICC = 0.899). The three current prominent DL-based fully automated methods are able to reliably perform four-chamber segmentation and quantification of clinical indices. Furthermore, we were able to validate the four-cardiac-chamber ground-truth annotation and demonstrate overall excellent reproducibility, albeit with some degree of inter-observer variability.
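    Two of the quantities in this abstract are directly related: ejection fraction follows from the end-diastolic and end-systolic volumes, and for any single structure the Dice coefficient and IoU are interchangeable via DSC = 2·IoU/(1 + IoU). The sketch below shows both; the volumes are hypothetical, and note that converting the reported IoU range 0.83 to 0.90 reproduces the reported DSC range of roughly 0.91 to 0.95.

        def ejection_fraction(lvedv_ml, lvesv_ml):
            # LV ejection fraction (%) from end-diastolic and end-systolic volumes.
            return 100.0 * (lvedv_ml - lvesv_ml) / lvedv_ml

        def dsc_from_iou(iou):
            # Dice similarity coefficient from intersection-over-union.
            return 2.0 * iou / (1.0 + iou)

        print(ejection_fraction(120.0, 48.0))          # 60.0 (hypothetical volumes)
        print(dsc_from_iou(0.83), dsc_from_iou(0.90))  # ~0.907 and ~0.947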

    Automatic and interactive segmentations using deformable and graphical models

    Image segmentation, i.e. dividing an image into regions and categories, is a classic yet still challenging problem. The key to success is to use or develop the right method for the right application. In this dissertation, we aim to develop automatic and interactive segmentation methods for different types of tissues that are acquired at different scales and resolutions from different medical imaging modalities such as Magnetic Resonance (MR), Computed Tomography (CT) and Electron Microscopy (EM) imaging. First, we developed an automated method for segmenting multiple organs simultaneously from MR and CT images. We propose a hybrid method that takes advantage of two well-known energy-minimization-based approaches combined in a unified framework. We validate the proposed method on cardiac four-chamber segmentation from CT and knee joint bone segmentation from MR images, compare it with other existing techniques, and show certain improvements and advantages. Second, we developed a graph partitioning algorithm for characterizing neuronal tissue structurally and contextually from EM images. We propose a multistage decision mechanism that utilizes differential geometric properties of objects in a cellular processing context. Our results indicate that this approach can successfully partition images into structured segments with minimal expert supervision and can potentially form a basis for larger-scale volumetric data interpretation. We compare our method with other proposed methods in a workshop challenge and show promising results. Third, we developed an efficient learning-based method for segmentation of neuron structures from 2D and 3D EM images. We propose a graphical-model-based framework to perform inference on a hierarchical merge tree of image regions. In particular, we extract the hierarchy of regions at the low level, design 2D and 3D discriminative features to extract higher-level information, and apply Conditional Random Field based parameter learning on top of it. The effectiveness of the proposed method in 2D is demonstrated by comparing it with other methods in a workshop challenge, where it outperforms all participant methods except one. In 3D, we compare our method to existing methods and show that the accuracy of our results is comparable to the state of the art while being much more efficient. Finally, we extended our inference algorithm to a proofreading framework for manual correction of automatic segmentation results. We propose a very efficient and easy-to-use interface for high-resolution 3D EM images. In particular, we utilize the probabilistic confidence level of the graphical model to guide the user during interaction. We validate the effectiveness of this framework with robot simulations and demonstrate certain advantages compared with baseline methods.
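    As a toy illustration of the merge-tree idea in the third contribution, the sketch below greedily merges the most similar adjacent regions and records the merge order with a union-find structure. The dissertation scores candidate merges with learned CRF potentials over discriminative 2D/3D features; the mean-intensity similarity used here is only a stand-in, and the region ids and adjacencies are hypothetical.

        import heapq

        def greedy_merge_tree(region_means, adjacency):
            # Build a merge sequence by repeatedly joining the adjacent pair of
            # regions with the most similar mean intensity (toy criterion).
            parent = {r: r for r in region_means}

            def find(r):
                while parent[r] != r:
                    parent[r] = parent[parent[r]]    # path compression
                    r = parent[r]
                return r

            heap = [(abs(region_means[a] - region_means[b]), a, b) for a, b in adjacency]
            heapq.heapify(heap)
            merges = []
            while heap:
                cost, a, b = heapq.heappop(heap)
                ra, rb = find(a), find(b)
                if ra != rb:
                    parent[rb] = ra
                    merges.append((ra, rb, cost))
            return merges

        # Hypothetical regions (id -> mean intensity) and adjacency pairs.
        means = {0: 0.10, 1: 0.12, 2: 0.80, 3: 0.78}
        print(greedy_merge_tree(means, [(0, 1), (1, 2), (2, 3), (0, 3)]))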