Illumination coding meets uncertainty learning: toward reliable AI-augmented phase imaging
We propose a physics-assisted deep learning (DL) framework for large space-bandwidth product (SBP) phase imaging. We design an asymmetric coded illumination scheme to encode high-resolution phase information across a wide field-of-view (FOV). We then develop a matching DL algorithm to provide large-SBP phase estimation. We show that this illumination coding scheme is highly scalable in achieving flexible resolution and robust to experimental variations. We demonstrate this technique on both static and dynamic biological samples, and show that it can reliably achieve a 5X resolution enhancement across 4X FOVs using only five multiplexed measurements -- more than a 10X data reduction over the state-of-the-art. Typical DL algorithms tend to provide over-confident predictions, whose errors are only discovered in hindsight. We develop an uncertainty learning framework to overcome this limitation and provide a predictive assessment of the reliability of the DL prediction. We show that the predicted uncertainty maps can be used as a surrogate for the true error. We validate the robustness of our technique by analyzing the model uncertainty. We quantify the effects of noise, model errors, incomplete training data, and "out-of-distribution" testing data by assessing the data uncertainty. We further demonstrate that the predicted credibility maps allow identifying spatially and temporally rare biological events. Our technique enables scalable AI-augmented large-SBP phase imaging with dependable predictions.
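The uncertainty-learning idea above is often realized by having the network predict a per-pixel mean and log-variance and training with a Gaussian negative log-likelihood. The following is a generic sketch of that loss, not necessarily the paper's exact formulation:

```python
import numpy as np

def heteroscedastic_nll(y_true, mu, log_var):
    """Per-pixel Gaussian negative log-likelihood used in many
    uncertainty-learning setups (a generic formulation, not the paper's
    published loss): the network predicts a mean `mu` and a log-variance
    `log_var`. High predicted variance down-weights the squared error,
    but the +0.5*log_var term penalizes claiming uncertainty everywhere."""
    return 0.5 * np.exp(-log_var) * (y_true - mu) ** 2 + 0.5 * log_var
```

Minimizing this loss drives `log_var` toward the actual squared residual, which is why the predicted uncertainty map can track the true error.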
Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology
Stain variation arises when distinct pathology laboratories stain tissue
slides, yielding similar but not identical color appearances.
Due to this color shift between laboratories, convolutional neural networks
(CNNs) trained with images from one lab often underperform on unseen images
from the other lab. Several techniques have been proposed to reduce the
generalization error, mainly grouped into two categories: stain color
augmentation and stain color normalization. The former simulates a wide variety
of realistic stain variations during training, producing stain-invariant CNNs.
The latter aims to match training and test color distributions in order to
reduce stain variation. For the first time, we compared some of these
techniques and quantified their effect on CNN classification performance using
a heterogeneous dataset of hematoxylin and eosin histopathology images from 4
organs and 9 pathology laboratories. Additionally, we propose a novel
unsupervised method to perform stain color normalization using a neural
network. Based on our experimental results, we provide practical guidelines on
how to use stain color augmentation and stain color normalization in future
computational pathology applications. Comment: Accepted in the Medical Image Analysis journal
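The stain color augmentation described above can be sketched as a simple per-channel color jitter. This is an illustrative simplification, not one of the specific methods the paper compares, and the perturbation ranges are assumptions:

```python
import numpy as np

def stain_color_augment(img, alpha=0.05, beta=0.05, rng=None):
    """Simple stain-color augmentation sketch: perturb each RGB channel
    with a random multiplicative gain and additive bias, simulating
    lab-to-lab stain variation. `img` is a float RGB image in [0, 1];
    `alpha`/`beta` are illustrative ranges, not the paper's settings."""
    rng = np.random.default_rng(rng)
    a = rng.uniform(1 - alpha, 1 + alpha, size=(1, 1, 3))  # per-channel gain
    b = rng.uniform(-beta, beta, size=(1, 1, 3))           # per-channel bias
    return np.clip(img * a + b, 0.0, 1.0)
```

Applying such a jitter on every training batch exposes the CNN to a wide range of plausible stain appearances, which is what makes the trained model stain-invariant.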
Automatic Brain Tumor Segmentation using Convolutional Neural Networks with Test-Time Augmentation
Automatic brain tumor segmentation plays an important role for diagnosis,
surgical planning and treatment assessment of brain tumors. Deep convolutional
neural networks (CNNs) have been widely used for this task. Due to the
relatively small data set for training, data augmentation at training time has
been commonly used for better performance of CNNs. Recent works also
demonstrated the usefulness of using augmentation at test time, in addition to
training time, for achieving more robust predictions. We investigate how
test-time augmentation can improve CNNs' performance for brain tumor
segmentation. We used different underpinning network structures and augmented
the image by 3D rotation, flipping, scaling and adding random noise at both
training and test time. Experiments with BraTS 2018 training and validation set
show that test-time augmentation helps to improve the brain tumor segmentation
accuracy and to obtain uncertainty estimates for the segmentation results. Comment: 12 pages, 3 figures, MICCAI BrainLes 2018
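A minimal sketch of test-time augmentation for segmentation, assuming a generic `model` callable (a hypothetical API, not the paper's code) and using only random flips and additive noise rather than the full 3D rotation/flipping/scaling set described above:

```python
import numpy as np

def tta_predict(model, volume, n_aug=8, rng=None):
    """Test-time augmentation sketch: average a model's predictions over
    randomly augmented copies of the input, and use the per-voxel
    standard deviation as an uncertainty estimate. `model` maps a volume
    to a probability map of the same shape (assumed interface)."""
    rng = np.random.default_rng(rng)
    preds = []
    for _ in range(n_aug):
        axes = tuple(a for a in range(volume.ndim) if rng.random() < 0.5)
        aug = np.flip(volume, axis=axes) if axes else volume
        aug = aug + rng.normal(0.0, 0.01, size=aug.shape)  # mild Gaussian noise
        p = model(aug)
        # Undo the flip so all predictions are aligned in the original frame.
        p = np.flip(p, axis=axes) if axes else p
        preds.append(p)
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)
```

Voxels where the augmented predictions disagree get a high standard deviation, which is how TTA yields an uncertainty map alongside the improved mean prediction.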
Automatic Emphysema Detection using Weakly Labeled HRCT Lung Images
A method is presented for automatically quantifying emphysema regions in
High-Resolution Computed Tomography (HRCT) scans of patients with chronic
obstructive pulmonary disease (COPD) that does not require manually annotated
scans for training. HRCT scans of controls and of COPD patients
with diverse disease severity are acquired at two different centers. Textural
features from co-occurrence matrices and Gaussian filter banks are used to
characterize the lung parenchyma in the scans. Two robust versions of multiple
instance learning (MIL) classifiers, miSVM and MILES, are investigated. The
classifiers are trained with the weak labels extracted from the forced
expiratory volume in one second (FEV1) and diffusing capacity of the lungs
for carbon monoxide (DLCO). At test time, the classifiers output a patient
label indicating overall COPD diagnosis and local labels indicating the
presence of emphysema. The classifier performance is compared with manual
annotations by two radiologists, a classical density-based method, and
pulmonary function tests (PFTs). The miSVM classifier performed better than
MILES on both patient and emphysema classification. The classifier has a
stronger correlation with PFT than the density-based method, the percentage of
emphysema in the intersection of annotations from both radiologists, and the
percentage of emphysema annotated by one of the radiologists. The correlation
between the classifier and the PFT is only outperformed by the second
radiologist. The method is therefore promising for facilitating assessment of
emphysema and reducing inter-observer variability. Comment: Accepted at PLoS ONE
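The patient-level and local labels described above follow the standard multiple-instance assumption. The following is a generic sketch of that inference step, not the miSVM/MILES implementation:

```python
import numpy as np

def mil_patient_label(instance_scores, threshold=0.0):
    """Standard multiple-instance inference (a generic sketch, not the
    paper's miSVM/MILES code): a patient (bag) is labeled positive if any
    lung region (instance) scores above the decision threshold; the
    per-region decisions double as local emphysema labels."""
    instance_scores = np.asarray(instance_scores)
    local_labels = instance_scores > threshold  # local emphysema presence
    patient_label = bool(local_labels.any())    # overall patient diagnosis
    return patient_label, local_labels
```

This max-style aggregation is what lets the classifier train on weak patient-level labels (derived from FEV1 and DLCO) while still producing region-level emphysema maps at test time.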