Deep Learning of Unified Region, Edge, and Contour Models for Automated Image Segmentation
Image segmentation is a fundamental and challenging problem in computer
vision with applications spanning multiple areas, such as medical imaging,
remote sensing, and autonomous vehicles. Recently, convolutional neural
networks (CNNs) have gained traction in the design of automated segmentation
pipelines. Although CNN-based models are adept at learning abstract features
from raw image data, their performance is dependent on the availability and
size of suitable training datasets. Additionally, these models are often unable
to capture the details of object boundaries and generalize poorly to unseen
classes. In this thesis, we devise novel methodologies that address these
issues and establish robust representation learning frameworks for
fully-automatic semantic segmentation in medical imaging and mainstream
computer vision. In particular, our contributions include (1) state-of-the-art
2D and 3D image segmentation networks for computer vision and medical image
analysis, (2) an end-to-end trainable image segmentation framework that unifies
CNNs and active contour models with learnable parameters for fast and robust
object delineation, (3) a novel approach for disentangling edge and texture
processing in segmentation networks, and (4) a novel few-shot learning model in
both supervised settings and semi-supervised settings where synergies between
latent and image spaces are leveraged to learn to segment images given limited
training data.
Comment: PhD dissertation, UCLA, 202
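Contribution (2) above couples a CNN with an active contour model whose parameters are learned end to end. As a rough illustration of the contour side only, here is a minimal numpy sketch of a single snake (active contour) update step, where the external energy map stands in for a CNN-predicted boundary energy; the function name, weights, and step size are illustrative assumptions, not details from the thesis:

```python
import numpy as np

def snake_step(points, energy, alpha=0.1, beta=0.1, step=1.0):
    """One gradient-descent update of a closed active contour (snake).

    points : (N, 2) array of (row, col) contour coordinates.
    energy : 2D external energy map (e.g. a CNN-predicted boundary
             energy); the contour descends its gradient.
    alpha  : elasticity weight (penalizes stretching).
    beta   : rigidity weight (penalizes bending).
    """
    # External force: negative energy gradient, sampled at the points.
    gy, gx = np.gradient(energy)
    r = np.clip(points[:, 0].astype(int), 0, energy.shape[0] - 1)
    c = np.clip(points[:, 1].astype(int), 0, energy.shape[1] - 1)
    ext = -np.stack([gy[r, c], gx[r, c]], axis=1)

    # Internal forces via finite differences along the closed contour.
    prev, nxt = np.roll(points, 1, axis=0), np.roll(points, -1, axis=0)
    elastic = prev - 2 * points + nxt                 # ~ x'' (2nd diff)
    rigid = (np.roll(points, 2, axis=0) - 4 * prev    # ~ x'''' (4th diff)
             + 6 * points - 4 * nxt + np.roll(points, -2, axis=0))

    return points + step * (alpha * elastic - beta * rigid + ext)
```

In the learned setting described by the abstract, `alpha`, `beta`, and the energy map would be produced by the network rather than fixed by hand; here they are constants so the update can be run standalone.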
Exemplar Learning for Medical Image Segmentation
Medical image annotation typically requires expert knowledge and hence incurs
time-consuming and expensive data annotation costs. To reduce this burden, we
propose a novel learning scenario, Exemplar Learning (EL), to explore automated
learning processes for medical image segmentation from a single annotated image
example. This learning task is particularly well suited to medical
image segmentation, where all organ categories can appear in a single
image and thus be annotated at once. To address this challenging EL task,
we propose an Exemplar Learning-based Synthesis Net (ELSNet) framework for
medical image segmentation that enables innovative exemplar-based data
synthesis, pixel-prototype based contrastive embedding learning, and
pseudo-label based exploitation of the unlabeled data. Specifically, ELSNet
introduces two new modules for image segmentation: an exemplar-guided synthesis
module, which enriches and diversifies the training set by synthesizing
annotated samples from the given exemplar, and a pixel-prototype based
contrastive embedding module, which enhances the discriminative capacity of the
base segmentation model via contrastive self-supervised learning. Moreover, we
deploy a two-stage process for segmentation model training, which exploits the
unlabeled data with predicted pseudo segmentation labels. To evaluate this new
learning framework, we conduct extensive experiments on several organ
segmentation datasets and present an in-depth analysis. The empirical results
show that the proposed exemplar learning framework produces effective
segmentation results.
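The ELSNet pipeline described above uses class prototypes for both contrastive embedding learning and pseudo-labeling of unlabeled data. As a toy illustration of the prototype idea only, the sketch below pseudo-labels pixels by their nearest class prototype computed from the single annotated exemplar; the function name, cosine similarity, and softmax temperature are assumptions for illustration, not details from the paper:

```python
import numpy as np

def prototype_pseudo_labels(feats, exemplar_feats, exemplar_mask, tau=0.1):
    """Pseudo-label pixels by nearest class prototype (toy sketch).

    feats          : (M, D) embeddings of an unlabeled image's pixels.
    exemplar_feats : (N, D) embeddings of the exemplar's pixels.
    exemplar_mask  : (N,) integer class labels for those pixels.
    tau            : softmax temperature for the confidence scores.
    Returns (pseudo_labels, confidences).
    """
    # Prototypes: mean embedding of the exemplar's pixels per class.
    classes = np.unique(exemplar_mask)
    protos = np.stack([exemplar_feats[exemplar_mask == c].mean(axis=0)
                       for c in classes])

    # Cosine similarity between every pixel and every prototype.
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    sim = f @ p.T

    # Softmax over prototypes gives a confidence usable for thresholding
    # which pseudo-labels to trust in the second training stage.
    probs = np.exp(sim / tau)
    probs /= probs.sum(axis=1, keepdims=True)
    idx = probs.argmax(axis=1)
    return classes[idx], probs.max(axis=1)
```

A two-stage scheme like the one in the abstract would train on the exemplar plus synthesized samples first, then keep only high-confidence pseudo-labels from this step for further training.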