Pulmonary Lobe Segmentation with Probabilistic Segmentation of the Fissures and a Groupwise Fissure Prior
A fully automated, unsupervised lobe segmentation algorithm is presented, based on a probabilistic segmentation of the fissures and the simultaneous construction of a population model of the fissures. A two-class probabilistic segmentation divides the lung into candidate fissure voxels and the surrounding parenchyma. This was combined with anatomical information and a groupwise fissure prior to drive non-parametric surface fitting and obtain the final segmentation. The performance of our fissure segmentation was validated on 30 patients from the COPDGene cohort, achieving a high median F1-score of 0.90 and showing general insensitivity to filter parameters. We evaluated our lobe segmentation algorithm on the LOLA11 dataset, which contains 55 cases at varying levels of pathology, and achieved the highest score (0.884) among the automated algorithms. Our method was further tested quantitatively and qualitatively on 80 patients from the COPDGene study at varying levels of functional impairment. Accurate segmentation of the lobes is shown at various degrees of fissure incompleteness for 96% of all cases. We also show the utility of including a groupwise prior when segmenting the lobes in regions of grossly incomplete fissures.
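The two-class probabilistic step can be illustrated with a minimal sketch: a Bayes-rule posterior over Gaussian intensity models for fissure and parenchyma voxels. The class means, spreads, and prior probability below are illustrative assumptions, not the thesis's actual filter.

```python
import math

def gaussian(x, mu, sigma):
    """Gaussian probability density evaluated at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fissure_posterior(intensity, prior_fissure=0.05,
                      mu_f=-700.0, sigma_f=80.0,   # hypothetical fissure class model (HU)
                      mu_p=-880.0, sigma_p=60.0):  # hypothetical parenchyma class model (HU)
    """Two-class posterior probability that a voxel belongs to a fissure."""
    p_f = prior_fissure * gaussian(intensity, mu_f, sigma_f)
    p_p = (1.0 - prior_fissure) * gaussian(intensity, mu_p, sigma_p)
    return p_f / (p_f + p_p)
```

A voxel near the assumed fissure intensity receives a high posterior despite the low prior, while a typical parenchyma voxel does not.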
Quantitative lung CT analysis for the study and diagnosis of Chronic Obstructive Pulmonary Disease
The importance of medical imaging in the research of Chronic Obstructive Pulmonary Disease (COPD) has risen over the last decades. COPD affects the pulmonary system through two competing mechanisms: emphysema and small airways disease. The relative contribution of each component varies widely across patients, and the two can also evolve regionally in the lung. Patients can also be susceptible to exacerbations, which can dramatically accelerate lung function decline. Diagnosis of COPD is based on lung function tests, which measure airflow limitation. There is a growing consensus that this is inadequate in view of the complexities of COPD. Computed Tomography (CT) facilitates direct quantification of the pathological changes that lead to airflow limitation and can add to our understanding of the disease progression of COPD. There is a need to better capture lung pathophysiology whilst understanding regional aspects of disease progression. This has motivated the work presented in this thesis. Two novel methods are proposed to quantify the severity of COPD from CT by analysing the global distribution of features sampled locally in the lung. They can be exploited in the classification of lung CT images or to uncover potential trajectories of disease progression. A novel lobe segmentation algorithm is presented that is based on a probabilistic segmentation of the fissures whilst also constructing a groupwise fissure prior. In combination with the local sampling methods, a pipeline of analysis was developed that permits a regional analysis of lung disease. This was applied to study exacerbation-susceptible COPD. Lastly, the applicability of performing disease progression modelling to study COPD has been shown. Two main subgroups of COPD were found, which are consistent with current clinical knowledge of COPD subtypes.
This research may facilitate precise phenotypic characterisation of COPD from CT, which will increase our understanding of its natural history and associated heterogeneities. This will be instrumental in the precision medicine of COPD.
Automated Segmentation of Pulmonary Lobes using Coordination-Guided Deep Neural Networks
The identification of pulmonary lobes is of great importance in disease
diagnosis and treatment, and several lung diseases manifest as regional
disorders at the lobar level. An accurate segmentation of the pulmonary
lobes is therefore necessary. In this work, we propose an automated
segmentation of pulmonary lobes from chest CT images using
coordination-guided deep neural networks. We first employ an automated
lung segmentation to extract the lung area from the CT image, then exploit
a volumetric convolutional neural network (V-Net) to segment the pulmonary
lobes. To reduce misclassification between lobes, we adopt
coordination-guided convolutional layers (CoordConvs) that generate
additional feature maps encoding the positional information of the
pulmonary lobes. The proposed model is trained and evaluated on several
publicly available datasets and achieves state-of-the-art accuracy, with a
mean Dice coefficient of 0.947 ± 0.044. (Comment: ISBI 2019, Oral)
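The CoordConv idea can be sketched independently of any deep learning framework: before a convolution, normalized coordinate maps are appended as extra input channels so the network can condition on voxel position. A minimal, framework-free sketch (the channel layout and the [-1, 1] normalization are assumptions for illustration):

```python
def coord_channels(depth, height, width):
    """Three coordinate maps (z, y, x), each normalized to [-1, 1]."""
    def norm(i, n):
        # Map index 0..n-1 linearly onto [-1, 1]; degenerate axes map to 0.
        return 2.0 * i / (n - 1) - 1.0 if n > 1 else 0.0
    z = [[[norm(k, depth) for _ in range(width)]
          for _ in range(height)] for k in range(depth)]
    y = [[[norm(j, height) for _ in range(width)]
          for j in range(height)] for _ in range(depth)]
    x = [[[norm(i, width) for i in range(width)]
          for _ in range(height)] for _ in range(depth)]
    return z, y, x

def with_coord_channels(feature_channels, depth, height, width):
    """Concatenate the coordinate maps to the existing feature channels."""
    z, y, x = coord_channels(depth, height, width)
    return feature_channels + [z, y, x]
```

In a real pipeline the same concatenation would be applied to the framework's tensors before selected convolutional layers, giving each filter direct access to where in the lung it is looking.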
Relational Modeling for Robust and Efficient Pulmonary Lobe Segmentation in CT Scans
Pulmonary lobe segmentation in computed tomography scans is essential for
regional assessment of pulmonary diseases. Recent works based on
convolutional neural networks have achieved good performance on this task.
However, they are still limited in capturing structured relationships due
to the nature of convolution. The shapes of the pulmonary lobes affect
each other, and their
borders relate to the appearance of other structures, such as vessels, airways,
and the pleural wall. We argue that such structural relationships play a
critical role in the accurate delineation of pulmonary lobes when the lungs are
affected by diseases such as COVID-19 or COPD.
In this paper, we propose a relational approach (RTSU-Net) that leverages
structured relationships by introducing a novel non-local neural network
module. The proposed module learns both visual and geometric relationships
among all convolution features to produce self-attention weights.
With a limited amount of training data available from COVID-19 subjects, we
initially train and validate RTSU-Net on a cohort of 5000 subjects from the
COPDGene study (4000 for training and 1000 for evaluation). Using models
pre-trained on COPDGene, we apply transfer learning to retrain and evaluate
RTSU-Net on 470 COVID-19 suspects (370 for retraining and 100 for evaluation).
Experimental results show that RTSU-Net outperforms three baselines and
performs robustly on cases with severe lung infection due to COVID-19.
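The non-local module's core operation, self-attention weights computed from both visual similarity and geometric proximity, can be sketched in plain Python. The additive way of combining the two terms and the squared-distance penalty below are assumptions for illustration; the paper's exact formulation may differ.

```python
import math

def self_attention(features, positions, geom_weight=1.0):
    """Non-local attention over feature vectors: the weight between sites i and j
    combines visual similarity (dot product) with geometric proximity
    (negative squared distance), normalized by a softmax."""
    n = len(features)
    out = []
    for i in range(n):
        logits = []
        for j in range(n):
            visual = sum(a * b for a, b in zip(features[i], features[j]))
            dist2 = sum((p - q) ** 2 for p, q in zip(positions[i], positions[j]))
            logits.append(visual - geom_weight * dist2)
        m = max(logits)                      # subtract max for numerical stability
        w = [math.exp(l - m) for l in logits]
        s = sum(w)
        w = [v / s for v in w]
        out.append([sum(w[j] * features[j][d] for j in range(n))
                    for d in range(len(features[i]))])
    return out
```

Each output vector is a convex combination of all input vectors, so every site can aggregate evidence from distant structures such as vessels and the pleural wall rather than only its convolutional neighbourhood.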
Deep Learning of Unified Region, Edge, and Contour Models for Automated Image Segmentation
Image segmentation is a fundamental and challenging problem in computer
vision with applications spanning multiple areas, such as medical imaging,
remote sensing, and autonomous vehicles. Recently, convolutional neural
networks (CNNs) have gained traction in the design of automated segmentation
pipelines. Although CNN-based models are adept at learning abstract features
from raw image data, their performance is dependent on the availability and
size of suitable training datasets. Additionally, these models are often unable
to capture the details of object boundaries and generalize poorly to unseen
classes. In this thesis, we devise novel methodologies that address these
issues and establish robust representation learning frameworks for
fully-automatic semantic segmentation in medical imaging and mainstream
computer vision. In particular, our contributions include (1) state-of-the-art
2D and 3D image segmentation networks for computer vision and medical image
analysis, (2) an end-to-end trainable image segmentation framework that unifies
CNNs and active contour models with learnable parameters for fast and robust
object delineation, (3) a novel approach for disentangling edge and texture
processing in segmentation networks, and (4) a novel few-shot learning model in
both supervised settings and semi-supervised settings where synergies between
latent and image spaces are leveraged to learn to segment images given limited
training data. Comment: PhD dissertation, UCLA, 202
From Fully-Supervised, Single-Task to Scarcely-Supervised, Multi-Task Deep Learning for Medical Image Analysis
Image analysis based on machine learning has gained prominence with the advent of deep learning, particularly in medical imaging. To be effective in addressing challenging image analysis tasks, however, conventional deep neural networks require large corpora of annotated training data, which are unfortunately scarce in the medical domain, thus often rendering fully-supervised learning strategies ineffective. This thesis devises a series of novel deep learning methods for use in a variety of medical image analysis applications, ranging from fully-supervised, single-task learning to scarcely-supervised, multi-task learning that makes efficient use of annotated training data. Specifically, its main contributions include (1) fully-supervised, single-task learning for the segmentation of pulmonary lobes from chest CT scans and the analysis of scoliosis from spine X-ray images; (2) supervised, single-task, domain-generalized pulmonary segmentation in chest X-ray images and retinal vasculature segmentation in fundoscopic images; (3) largely-unsupervised, multiple-task learning via deep generative modeling for the joint synthesis and classification of medical image data; and (4) partly-supervised, multiple-task learning for the combined segmentation and classification of chest and spine X-ray images.
Extended Quantitative Computed Tomography Analysis of Lung Structure and Function
Computed tomography (CT) imaging and quantitative CT (QCT) analysis for the study of lung health and disease have advanced rapidly during the past decades, along with the employment of CT-based computational fluid dynamics (CFD) and machine learning approaches. The work presented in this thesis was devoted to extending the QCT analysis framework from three different perspectives. First, to extend the advanced QCT analysis to more data with undesirably protocolized CT scans, we developed a new deep learning-based automated segmentation of pulmonary lobes, incorporating z-axis information into the conventional UNet segmentation. The proposed deep learning segmentation, named ZUNet, was successfully applied for QCT analysis of silicosis patients with thick (5 or 10 mm) slices, who used to be excluded from QCT analysis because three-dimensional (3D) volumetric segmentation of the lungs and lobes was rarely successful or not automated. ZUNet outperformed UNet in lobe segmentation of human lungs. In addition, we extended the application of the QCT framework by combining CFD simulations for all subjects of the QCT analysis. One-dimensional (1D) CFD simulations of tidal breathing were added to the inspiratory-expiratory CT image matching analysis of 66 asthma patients (M:F=23:43, age=64.4±10.7) for pre- and post-bronchodilator comparison. We aimed to characterize the comprehensive airway and lung structure-function relationship in both the entire-group response and the patient-specific response to the bronchodilator. Along with evidence of large airway dilatation across the entire asthma group, the CFD analysis revealed that improvements in regional flow rate fraction, particularly in the right lower lobe (RLL), airway pressure drop, airway resistance, and workload of breathing were significantly associated with the degree of large airway dilatation.
Finally, we extended the approach using machine learning analysis to integrate numerous QCT variables with clinical features and additional information such as environmental exposure. To investigate the effects of particulate matter (PM) exposure on alterations of human lung structure and function, principal component analysis (PCA) and k-means clustering identified low, mid, and high exposure groups from directly measured air pollution exposure data of 270 healthy (age=68±10, M:F=15:51), asthma (age=60±12, M:F=39:56), chronic obstructive pulmonary disease (COPD) (age=69±7, M:F=66:10), and idiopathic pulmonary fibrosis (IPF) (age=72±7, M:F=43:10) subjects. Based on the exposure clusters, RLL segmental airway narrowing was observed in the high exposure group. Various associations were found between the exposure data and about 200 multiscale lung features derived from quantitative inspiratory and expiratory CT image matching and 1D CFD tidal breathing simulations. To highlight, small PM increases small airways disease in asthma, while PM at all sizes decreases the inspiratory low attenuation area in COPD and decreases the luminal diameter of the RLL segmental airways in IPF.
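The exposure-group clustering step can be sketched with a plain k-means implementation. Standardization and the PCA preprocessing are omitted here; the cluster count k=3 mirrors the low/mid/high grouping, but the farthest-point initialization and iteration count are illustrative assumptions, not the thesis's exact procedure.

```python
def kmeans(points, k, iters=50):
    """Minimal k-means over lists of equal-length feature vectors."""
    # Farthest-point initialization: deterministic and spreads the centers.
    centers = [points[0]]
    while len(centers) < k:
        far = max(points, key=lambda p: min(
            sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers))
        centers.append(far)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[idx].append(p)
        # Move each center to its cluster mean; keep empty clusters in place.
        centers = [
            [sum(p[d] for p in cl) / len(cl) for d in range(len(points[0]))]
            if cl else centers[c]
            for c, cl in enumerate(clusters)]
    labels = [min(range(k), key=lambda c: sum(
        (a - b) ** 2 for a, b in zip(p, centers[c]))) for p in points]
    return labels, centers
```

Applied to (PCA-reduced) exposure measurements per subject, the three resulting labels would correspond to the low, mid, and high exposure groups.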
Efficient dense non-rigid registration using the free-form deformation framework
Medical image registration consists of finding spatial correspondences between two or more images. It
is a powerful tool which is commonly used in various medical image processing tasks. Even though
medical image registration has been an active topic of research for the last two decades, significant
challenges in the field remain to be solved. This thesis addresses some of these challenges through
extensions to the Free-Form Deformation (FFD) registration framework, which is one of the most widely
used and well-established non-rigid registration algorithms.
Medical image registration is a computationally expensive task because of the large number of degrees
of freedom of non-rigid transformations. In this work, the FFD algorithm has been re-factored to enable
fast processing, while maintaining the accuracy of the results. In addition, parallel computing paradigms
have been employed to provide near real-time image registration capabilities. Further modifications have
been performed to improve the registration robustness to artifacts such as tissue non-uniformity. The
plausibility of the generated deformation field has been improved through the use of regularization
based on biomechanical models. Additionally, diffeomorphic extensions to the algorithm were also developed.
The work presented in this thesis has been extensively validated using brain magnetic resonance
imaging of patients diagnosed with dementia or patients undergoing brain resection. It has also been
applied to lung X-ray computed tomography and imaging of small animals.
Alongside this thesis, an open-source package, NiftyReg, has been developed to release the
presented work to the medical imaging community.
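The core of the FFD framework, cubic B-spline interpolation of a sparse control-point grid into a dense deformation, can be sketched in one dimension. The grid spacing and boundary handling here are simplified assumptions; NiftyReg's actual implementation is three-dimensional and heavily optimized.

```python
def bspline_basis(u):
    """Cubic B-spline basis functions for a fractional offset u in [0, 1)."""
    return [
        (1.0 - u) ** 3 / 6.0,
        (3.0 * u ** 3 - 6.0 * u ** 2 + 4.0) / 6.0,
        (-3.0 * u ** 3 + 3.0 * u ** 2 + 3.0 * u + 1.0) / 6.0,
        u ** 3 / 6.0,
    ]

def ffd_displacement_1d(x, control_points, spacing):
    """Dense displacement at coordinate x, blended from the 4 nearest
    control points of a regular 1-D grid (out-of-range points are skipped)."""
    i = int(x // spacing)          # index of the control cell containing x
    u = (x / spacing) - i          # fractional position within the cell
    b = bspline_basis(u)
    d = 0.0
    for m in range(4):
        idx = i + m - 1
        if 0 <= idx < len(control_points):
            d += b[m] * control_points[idx]
    return d
```

Because the basis functions sum to one, a flat control grid reproduces a constant displacement in the grid interior, and moving a single control point deforms the dense field only locally, which is what keeps FFD optimization tractable.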