Deep Learning of Unified Region, Edge, and Contour Models for Automated Image Segmentation
Image segmentation is a fundamental and challenging problem in computer
vision with applications spanning multiple areas, such as medical imaging,
remote sensing, and autonomous vehicles. Recently, convolutional neural
networks (CNNs) have gained traction in the design of automated segmentation
pipelines. Although CNN-based models are adept at learning abstract features
from raw image data, their performance is dependent on the availability and
size of suitable training datasets. Additionally, these models are often unable
to capture the details of object boundaries and generalize poorly to unseen
classes. In this thesis, we devise novel methodologies that address these
issues and establish robust representation learning frameworks for
fully-automatic semantic segmentation in medical imaging and mainstream
computer vision. In particular, our contributions include (1) state-of-the-art
2D and 3D image segmentation networks for computer vision and medical image
analysis, (2) an end-to-end trainable image segmentation framework that unifies
CNNs and active contour models with learnable parameters for fast and robust
object delineation, (3) a novel approach for disentangling edge and texture
processing in segmentation networks, and (4) a novel few-shot learning model in
both supervised settings and semi-supervised settings where synergies between
latent and image spaces are leveraged to learn to segment images given limited
training data.
Comment: PhD dissertation, UCLA, 202
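The abstract above only names contribution (2); purely as an illustrative, assumed sketch of how a CNN and an active contour model with learnable parameters might be coupled, the snippet below evaluates a Chan-Vese-style energy on a network's soft segmentation map, with per-pixel weight maps (lambda1, lambda2) predicted by the network itself. The function name, the energy form, and all tensor shapes are assumptions for illustration, not the dissertation's actual formulation.

```python
# Minimal sketch (assumed formulation, not the dissertation's): a Chan-Vese-style
# active-contour energy computed on a CNN's soft mask, with per-pixel weight maps
# that are predicted by the network and learned end to end.
import torch

def active_contour_energy(soft_mask, image, lambda1, lambda2, mu=1.0):
    """soft_mask, lambda1, lambda2, image: tensors of shape (B, 1, H, W)."""
    eps = 1e-6
    # Region terms: mean intensity inside and outside the soft mask.
    c1 = (soft_mask * image).sum(dim=(2, 3), keepdim=True) / (soft_mask.sum(dim=(2, 3), keepdim=True) + eps)
    c2 = ((1 - soft_mask) * image).sum(dim=(2, 3), keepdim=True) / ((1 - soft_mask).sum(dim=(2, 3), keepdim=True) + eps)
    region_in = (lambda1 * soft_mask * (image - c1) ** 2).mean()
    region_out = (lambda2 * (1 - soft_mask) * (image - c2) ** 2).mean()
    # Contour-length (smoothness) term from finite differences of the mask.
    dx = soft_mask[:, :, 1:, :] - soft_mask[:, :, :-1, :]
    dy = soft_mask[:, :, :, 1:] - soft_mask[:, :, :, :-1]
    length = mu * (dx.abs().mean() + dy.abs().mean())
    return region_in + region_out + length
```

In such a setup, a segmentation network with three output heads could produce the soft mask and both weight maps, and this energy would be added to the usual supervised loss so that the contour behaviour is trained end to end.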
Relational Modeling for Robust and Efficient Pulmonary Lobe Segmentation in CT Scans
Pulmonary lobe segmentation in computed tomography scans is essential for
regional assessment of pulmonary diseases. Recent works based on convolutional
neural networks have achieved good performance for this task. However, they are
still limited in capturing structured relationships due to the nature of
convolution. The shapes of the pulmonary lobes affect one another, and their
borders relate to the appearance of other structures, such as vessels, airways,
and the pleural wall. We argue that such structural relationships play a
critical role in the accurate delineation of pulmonary lobes when the lungs are
affected by diseases such as COVID-19 or COPD.
In this paper, we propose a relational approach (RTSU-Net) that leverages
structured relationships by introducing a novel non-local neural network
module. The proposed module learns both visual and geometric relationships
among all convolution features to produce self-attention weights.
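The abstract does not spell out the module, so the following is only an assumed sketch of a non-local block whose attention logits combine visual similarity with a pairwise geometric (relative-position) term, in the spirit of the description above; the class name, shapes, and the way geometry enters the weights are illustrative guesses, not RTSU-Net's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationalNonLocal2d(nn.Module):
    """Assumed sketch: self-attention over all spatial positions, where the
    attention logits mix appearance similarity with pairwise geometry."""

    def __init__(self, channels, inter_channels=None):
        super().__init__()
        inter = inter_channels or channels // 2
        self.query = nn.Conv2d(channels, inter, 1)
        self.key = nn.Conv2d(channels, inter, 1)
        self.value = nn.Conv2d(channels, inter, 1)
        self.out = nn.Conv2d(inter, channels, 1)
        self.geom = nn.Linear(2, 1)  # maps relative (dy, dx) offsets to a logit

    def forward(self, x):
        b, c, h, w = x.shape
        n = h * w
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, n, inter)
        k = self.key(x).flatten(2)                    # (b, inter, n)
        v = self.value(x).flatten(2).transpose(1, 2)  # (b, n, inter)
        visual = torch.bmm(q, k)                      # (b, n, n) appearance term
        # Pairwise relative coordinates between all spatial positions.
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        coords = torch.stack([ys, xs], dim=-1).reshape(n, 2).float().to(x.device)
        rel = (coords[:, None, :] - coords[None, :, :]) / max(h, w)  # (n, n, 2)
        geometric = self.geom(rel).squeeze(-1).unsqueeze(0)          # (1, n, n)
        attn = F.softmax(visual + geometric, dim=-1)  # combined self-attention weights
        out = torch.bmm(attn, v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(out)                      # residual connection
```

Full pairwise attention is quadratic in the number of positions, so a block like this would normally be applied to downsampled feature maps.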
With a limited amount of training data available from COVID-19 subjects, we
initially train and validate RTSU-Net on a cohort of 5000 subjects from the
COPDGene study (4000 for training and 1000 for evaluation). Using models
pre-trained on COPDGene, we apply transfer learning to retrain and evaluate
RTSU-Net on 470 COVID-19 suspects (370 for retraining and 100 for evaluation).
Experimental results show that RTSU-Net outperforms three baselines and
performs robustly on cases with severe lung infection due to COVID-19.
From Fully-Supervised, Single-Task to Scarcely-Supervised, Multi-Task Deep Learning for Medical Image Analysis
Image analysis based on machine learning has gained prominence with the advent of deep learning, particularly in medical imaging. To be effective in addressing challenging image analysis tasks, however, conventional deep neural networks require large corpora of annotated training data, which are unfortunately scarce in the medical domain, thus often rendering fully-supervised learning strategies ineffective.
This thesis devises, for use in a variety of medical image analysis applications, a series of novel deep learning methods, ranging from fully-supervised, single-task learning to scarcely-supervised, multi-task learning that makes efficient use of annotated training data. Specifically, its main contributions include (1) fully-supervised, single-task learning for the segmentation of pulmonary lobes from chest CT scans and the analysis of scoliosis from spine X-ray images; (2) supervised, single-task, domain-generalized pulmonary segmentation in chest X-ray images and retinal vasculature segmentation in fundoscopic images; (3) largely-unsupervised, multiple-task learning via deep generative modeling for the joint synthesis and classification of medical image data; and (4) partly-supervised, multiple-task learning for the combined segmentation and classification of chest and spine X-ray images.
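Contribution (4) above is only named; one plausible (assumed) way to train a single network when each sample carries only some of the annotations is to mask out the loss terms that sample lacks, as in the sketch below. The function name, losses, and tensor shapes are illustrative assumptions rather than the thesis's actual design.

```python
import torch
import torch.nn.functional as F

def partly_supervised_loss(seg_logits, cls_logits, seg_target, cls_target,
                           has_seg, has_cls):
    """Assumed sketch of a masked multi-task loss: each sample contributes only
    the terms for which it actually has annotations.

    seg_logits: (B, C, H, W)   cls_logits: (B, K)
    seg_target: (B, H, W) long (use zeros as placeholders when absent)
    cls_target: (B,) long      has_seg, has_cls: (B,) float {0, 1} masks
    """
    seg_loss = F.cross_entropy(seg_logits, seg_target, reduction="none").mean(dim=(1, 2))
    cls_loss = F.cross_entropy(cls_logits, cls_target, reduction="none")
    # Zero out the terms for samples missing that annotation, average over the rest.
    seg_term = (seg_loss * has_seg).sum() / has_seg.sum().clamp(min=1)
    cls_term = (cls_loss * has_cls).sum() / has_cls.sum().clamp(min=1)
    return seg_term + cls_term
```

Batches can then freely mix segmentation-only, classification-only, and fully labeled samples during training.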
Deep Learning for Automated Medical Image Analysis
Medical imaging is an essential tool in many areas of medical applications,
used for both diagnosis and treatment. However, reading medical images and
making diagnoses or treatment recommendations require specially trained medical
specialists. The current practice of reading medical images is labor-intensive,
time-consuming, costly, and error-prone. It would be desirable to have a
computer-aided system that can automatically make diagnosis and treatment
recommendations. Recent advances in deep learning enable us to rethink the way
clinicians make diagnoses from medical images. In this thesis, we will
consider 1) mammograms for detecting breast cancer, the most frequently
diagnosed solid cancer among U.S. women, 2) lung CT images for detecting lung
cancer, the most frequently diagnosed malignant cancer, and 3) head and neck
CT images for automated delineation of organs at risk in radiotherapy. First,
we will show how to employ an adversarial scheme to generate hard examples
that improve mammogram mass segmentation. Second, we will demonstrate how to
use weakly labeled data for mammogram-based breast cancer diagnosis by
efficiently designing deep multi-instance learning models. Third, the thesis
will walk through the DeepLung system, which combines deep 3D ConvNets and
gradient boosting machines (GBMs) for automated lung nodule detection and
classification. Fourth, we will show how to use weakly labeled data to improve
an existing lung nodule detection system by integrating deep learning with a
probabilistic graphical model. Lastly, we will demonstrate AnatomyNet, which is
thousands of times faster and more accurate than previous methods for automated
anatomy segmentation.
Comment: PhD Thesis
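DeepLung's exact architecture is not given in the abstract above; as a generic, assumed illustration of coupling a 3D ConvNet with a gradient boosting machine, the sketch below pools a small 3D encoder's features into one vector per nodule-candidate patch and classifies those vectors with scikit-learn's GradientBoostingClassifier. The network, shapes, and dummy data are placeholders, not the thesis's model.

```python
# Assumed sketch: a small 3D ConvNet summarizes each nodule-candidate patch into
# a feature vector; a gradient boosting machine then classifies the vectors.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import GradientBoostingClassifier

class Small3DEncoder(nn.Module):
    def __init__(self, features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, features, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # global pooling -> one vector per patch
        )

    def forward(self, x):                     # x: (B, 1, D, H, W)
        return self.net(x).flatten(1)         # (B, features)

encoder = Small3DEncoder().eval()
patches = torch.randn(8, 1, 32, 32, 32)       # dummy candidate patches
labels = np.array([0, 1, 0, 1, 0, 1, 0, 1])   # dummy benign/malignant labels
with torch.no_grad():
    feats = encoder(patches).numpy()          # CNN features for the GBM
gbm = GradientBoostingClassifier().fit(feats, labels)
print(gbm.predict(feats))
```

In a real pipeline the encoder would first be trained (or taken from the detection network) and the GBM fit on features from many candidates.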
Semi-Supervised Segmentation of Radiation-Induced Pulmonary Fibrosis from Lung CT Scans with Multi-Scale Guided Dense Attention
Computed Tomography (CT) plays an important role in monitoring
radiation-induced Pulmonary Fibrosis (PF), where accurate segmentation of the
PF lesions is highly desired for diagnosis and treatment follow-up. However,
the task is challenged by the ambiguous boundaries, irregular shapes, and varying
positions and sizes of the lesions, as well as the difficulty of acquiring a large set of
annotated volumetric images for training. To overcome these problems, we
propose a novel convolutional neural network called PF-Net and incorporate it
into a semi-supervised learning framework based on Iterative Confidence-based
Refinement And Weighting of pseudo Labels (I-CRAWL). Our PF-Net combines 2D and
3D convolutions to deal with CT volumes with large inter-slice spacing, and
uses multi-scale guided dense attention to segment complex PF lesions. For
semi-supervised learning, our I-CRAWL employs pixel-level uncertainty-based
confidence-aware refinement to improve the accuracy of pseudo labels of
unannotated images, and uses image-level uncertainty for confidence-based image
weighting to suppress low-quality pseudo labels in an iterative training
process. Extensive experiments with CT scans of Rhesus Macaques with
radiation-induced PF showed that: 1) PF-Net achieved higher segmentation
accuracy than existing 2D, 3D and 2.5D neural networks, and 2) I-CRAWL
outperformed state-of-the-art semi-supervised learning methods for the PF
lesion segmentation task. Our method has the potential to improve the diagnosis
of PF and the clinical assessment of side effects of radiotherapy for lung cancers.
Comment: 12 pages, 9 figures. Submitted to IEEE TM
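The abstract does not give I-CRAWL's formulas, so the sketch below only illustrates, under assumed definitions, the two ingredients named above: the entropy of the softmax output as a pixel-level uncertainty used to drop unreliable pseudo-label pixels, and the mean uncertainty of an image used to down-weight low-confidence pseudo-labeled images. All names and thresholds are placeholders.

```python
import torch
import torch.nn.functional as F

def refine_and_weight_pseudo_labels(logits, pixel_thresh=0.5):
    """Assumed sketch of confidence-based pseudo-label refinement and weighting.

    logits: (B, C, H, W) network predictions on unannotated images.
    Returns hard pseudo labels, a per-pixel validity mask, and per-image weights.
    """
    probs = F.softmax(logits, dim=1)
    # Pixel-level uncertainty: entropy of the class distribution, normalized to [0, 1].
    entropy = -(probs * torch.log(probs.clamp(min=1e-8))).sum(dim=1)
    entropy = entropy / torch.log(torch.tensor(float(logits.shape[1])))
    pseudo = probs.argmax(dim=1)               # hard pseudo labels, (B, H, W)
    keep = entropy < pixel_thresh              # refinement: discard uncertain pixels
    # Image-level weight: images with low mean uncertainty count more.
    weights = 1.0 - entropy.mean(dim=(1, 2))   # (B,)
    return pseudo, keep, weights
```

In an iterative scheme, the pseudo labels would enter the unsupervised loss only where `keep` is true, scaled per image by `weights`, and then be recomputed after each round of retraining.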