27 research outputs found
Anatomical Priors in Convolutional Networks for Unsupervised Biomedical Segmentation
We consider the problem of segmenting a biomedical image into anatomical
regions of interest. We specifically address the frequent scenario where we
have no paired training data that contains images and their manual
segmentations. Instead, we employ unpaired segmentation images to build an
anatomical prior. Critically, these segmentations can be derived from imaging
data from a different dataset and imaging modality than the current task. We
introduce a generative probabilistic model that employs the learned prior
through a convolutional neural network to compute segmentations in an
unsupervised setting. We conducted an empirical analysis of the proposed
approach in the context of structural brain MRI segmentation, using a
multi-study dataset of more than 14,000 scans. Our results show that an
anatomical prior can enable fast unsupervised segmentation, which is typically
not possible using standard convolutional networks. The integration of
anatomical priors can facilitate CNN-based anatomical segmentation in a range
of novel clinical problems, where few or no annotations are available and thus
standard networks are not trainable. The code is freely available at
http://github.com/adalca/neuron.
Comment: Presented at CVPR 2018. IEEE CVPR proceedings pp. 9290-929
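A minimal sketch of the generative idea behind the abstract above (toy numpy code, not the authors' implementation): the learned anatomical prior acts as a per-pixel label distribution that is multiplied with the network's per-pixel likelihoods and renormalized, giving a posterior from which a MAP segmentation is read off.

```python
import numpy as np

# Toy illustration only: combine a per-pixel anatomical prior p(label)
# with CNN-style likelihoods q(label | image) to get a posterior
# segmentation, p(label | image) ∝ q(label | image) * p(label).
rng = np.random.default_rng(0)

H, W, K = 4, 4, 3                                    # image size, label count
prior = rng.dirichlet(np.ones(K), size=(H, W))       # anatomical prior maps
likelihood = rng.dirichlet(np.ones(K), size=(H, W))  # stand-in network outputs

posterior = prior * likelihood
posterior /= posterior.sum(axis=-1, keepdims=True)   # renormalize per pixel

segmentation = posterior.argmax(axis=-1)             # MAP label per pixel
print(segmentation.shape)  # (4, 4)
```

With a flat prior this reduces to the network's own prediction; an informative prior pulls pixels toward anatomically plausible labels, which is what enables segmentation without paired training data.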
Task-driven Prompt Evolution for Foundation Models
Promptable foundation models, particularly Segment Anything Model (SAM), have
emerged as a promising alternative to the traditional task-specific supervised
learning for image segmentation. However, many evaluation studies have found
their performance on medical imaging modalities to be underwhelming
compared to conventional deep learning methods. In the world of large
pre-trained language and vision-language models, learning prompts from
downstream tasks has achieved considerable success in improving performance. In
this work, we propose a plug-and-play Prompt Optimization Technique for
foundation models like SAM (SAMPOT) that utilizes the downstream segmentation
task to optimize the human-provided prompt to obtain improved performance. We
demonstrate the utility of SAMPOT on lung segmentation in chest X-ray images
and obtain an improvement on a significant number of cases over
human-provided initial prompts. We hope this work will lead to further
investigations in the nascent field of automatic visual prompt-tuning.
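The prompt-optimization loop described above can be sketched as a local search over prompt locations that maximizes a downstream segmentation score. Everything here is a stand-in: `segment_score` mocks the promptable segmenter's quality signal, and the search is a generic greedy scheme, not the paper's SAMPOT procedure.

```python
import numpy as np

def segment_score(prompt, target=np.array([10.0, 20.0])):
    """Mock downstream score: peaks when the prompt hits a target point.
    A real system would score the segmentation a model produces."""
    return -np.sum((np.asarray(prompt, dtype=float) - target) ** 2)

def optimize_prompt(initial_prompt, steps=50, step_size=1.0):
    """Greedy local search over 2-D point-prompt locations."""
    prompt = np.asarray(initial_prompt, dtype=float)
    best = segment_score(prompt)
    for _ in range(steps):
        improved = False
        for delta in ([step_size, 0], [-step_size, 0],
                      [0, step_size], [0, -step_size]):
            cand = prompt + delta
            s = segment_score(cand)
            if s > best:                 # keep any improving move
                prompt, best, improved = cand, s, True
        if not improved:                 # local optimum reached
            break
    return prompt, best

prompt, score = optimize_prompt([0.0, 0.0])  # human-provided initial prompt
print(prompt, score)
```

The human-provided prompt serves only as the starting point; the downstream task signal decides where it ends up.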
Segmentation of Retinal Low-Cost Optical Coherence Tomography Images using Deep Learning
The treatment of age-related macular degeneration (AMD) requires continuous
eye exams using optical coherence tomography (OCT). The need for treatment is
determined by the presence or change of disease-specific OCT-based biomarkers.
Therefore, the monitoring frequency has a significant influence on the success
of AMD therapy. However, the monitoring frequency of current treatment schemes
is not individually adapted to the patient and therefore often insufficient.
While a higher monitoring frequency would have a positive effect on the success
of treatment, in practice it can only be achieved with a home monitoring
solution. One of the key requirements of a home monitoring OCT system is a
computer-aided diagnosis to automatically detect and quantify pathological
changes using specific OCT-based biomarkers. In this paper, for the first time,
retinal scans of a novel self-examination low-cost full-field OCT (SELF-OCT)
are segmented using a deep learning-based approach. A convolutional neural
network (CNN) is utilized to segment the total retina as well as pigment
epithelial detachments (PED). It is shown that the CNN-based approach can
segment the retina with high accuracy, whereas the segmentation of the PED
proves to be challenging. In addition, a convolutional denoising autoencoder
(CDAE), which has previously learned retinal shape information, refines the
CNN prediction. It is shown that the CDAE refinement can correct segmentation
errors caused by artifacts in the OCT image.
Comment: Accepted for SPIE Medical Imaging 2020: Computer-Aided Diagnosis
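The refinement step above can be illustrated with a toy stand-in: a real CDAE needs a deep learning framework and training data, but a simple neighborhood majority vote plays the same role of correcting isolated, artifact-driven errors in a binary mask. This is only an analogy for the shape-aware correction, not the paper's method.

```python
import numpy as np

def refine_mask(mask):
    """Replace each pixel by the majority label in its 3x3 neighborhood
    (a crude stand-in for learned shape-based refinement)."""
    padded = np.pad(mask, 1, mode="edge")
    votes = np.zeros_like(mask, dtype=int)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            votes += padded[1 + dy : 1 + dy + mask.shape[0],
                            1 + dx : 1 + dx + mask.shape[1]]
    return (votes >= 5).astype(int)     # majority of 9 neighbors

noisy = np.ones((6, 6), dtype=int)      # toy "retina" mask
noisy[2, 3] = 0                         # isolated hole from an artifact
refined = refine_mask(noisy)
print(refined[2, 3])  # hole is filled
```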
Learning to segment clustered amoeboid cells from brightfield microscopy via multi-task learning with adaptive weight selection
Detecting and segmenting individual cells from microscopy images is critical
to various life science applications. Traditional cell segmentation tools are
often ill-suited for applications in brightfield microscopy due to poor
contrast and intensity heterogeneity, and only a small subset can
segment cells in a cluster. In this regard, we introduce a novel supervised
technique for cell segmentation in a multi-task learning paradigm. A
multi-task loss based on region and cell boundary detection is employed to
improve the network's predictions.
The learning problem is posed in a novel min-max framework which enables
adaptive estimation of the hyper-parameters in an automatic fashion. The region
and cell boundary predictions are combined via morphological operations and
active contour model to segment individual cells.
The proposed methodology is particularly suited to segment touching cells
from brightfield microscopy images without manual interventions.
Quantitatively, we observe an overall Dice score of 0.93 on the validation set,
which is an improvement of over 15.9% on a recent unsupervised method, and
outperforms the popular supervised U-net algorithm on average.
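The adaptive weighting idea in the min-max framework above can be sketched as follows. This is a hedged toy, not the authors' exact formulation: the inner "max" step assigns larger weight to the currently worse task (here via a softmax over task losses), and an outer "min" step would then update the network against the weighted total.

```python
import numpy as np

def adaptive_weights(losses, temperature=1.0):
    """Softmax over per-task losses: worse tasks get larger weights,
    mimicking the max step of a min-max multi-task objective."""
    losses = np.asarray(losses, dtype=float)
    z = losses / temperature
    z -= z.max()                        # numerical stability
    w = np.exp(z)
    return w / w.sum()

# Hypothetical per-task losses, purely to show the weight update.
region_loss, boundary_loss = 0.2, 0.8
w = adaptive_weights([region_loss, boundary_loss])
total = w[0] * region_loss + w[1] * boundary_loss
print(w, total)
```

Because the weighted total upper-bounds the uniform average, minimizing it forces the network to keep improving the harder of the two tasks rather than coasting on the easier one.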