Learning under Distributed Weak Supervision
The availability of training data for supervision is a frequently encountered bottleneck in medical image analysis. While such annotations are typically produced by a clinical expert rater, the growing volume of acquired imaging data makes traditional pixel-wise segmentation increasingly infeasible. In this paper, we examine the use of a crowdsourcing platform to distribute super-pixel weak annotation tasks and collect such annotations from a crowd of non-expert raters. The crowd annotations are subsequently used to train a fully convolutional neural network for fetal brain segmentation in T2-weighted MR images. Using this approach, we report encouraging results compared to highly targeted, fully supervised methods, potentially addressing a frequent problem impeding image analysis research.
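The abstract does not detail how the non-expert annotations are fused before training; as an illustrative sketch only (the aggregation rule and function names below are assumptions, not the paper's method), majority voting over per-superpixel crowd labels could look like this:

```python
import numpy as np

def aggregate_crowd_labels(votes):
    """Majority-vote aggregation of weak superpixel annotations.

    votes: (n_raters, n_superpixels) array of binary labels
    (1 = fetal brain, 0 = background) from non-expert raters.
    Returns the per-superpixel consensus label.
    """
    # Fraction of raters marking each superpixel as foreground.
    agreement = votes.mean(axis=0)
    return (agreement >= 0.5).astype(int)

# Three hypothetical raters labelling five superpixels; rater 3 is noisy.
votes = np.array([[1, 1, 0, 0, 1],
                  [1, 1, 0, 0, 1],
                  [0, 1, 1, 0, 1]])
consensus = aggregate_crowd_labels(votes)  # -> [1, 1, 0, 0, 1]
```

The consensus labels would then serve as (noisy) training targets for the segmentation network.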
Fully Convolutional Slice-to-Volume Reconstruction for Single-Stack MRI
In magnetic resonance imaging (MRI), slice-to-volume reconstruction (SVR) refers to the computational reconstruction of an unknown 3D magnetic resonance volume from stacks of 2D slices corrupted by motion. While promising, current SVR methods require multiple slice stacks for accurate 3D reconstruction, leading to long scans and limiting their use in time-sensitive applications such as fetal fMRI. Here, we propose an SVR method that overcomes the shortcomings of previous work and produces state-of-the-art reconstructions in the presence of extreme inter-slice motion. Inspired by the recent success of single-view depth estimation methods, we formulate SVR as a single-stack motion estimation task and train a fully convolutional network to predict a motion stack for a given slice stack, producing a 3D reconstruction as a byproduct of the predicted motion. Extensive experiments on the SVR of adult and fetal brains demonstrate that our fully convolutional method is twice as accurate as previous SVR methods. Our code is available at github.com/seannz/svr.
Comment: Accepted to CVPR 202
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201
Aleatoric uncertainty estimation with test-time augmentation for medical image segmentation with convolutional neural networks
Despite their state-of-the-art performance for medical image segmentation, deep convolutional neural networks (CNNs) have rarely provided uncertainty estimations regarding their segmentation outputs, e.g., model (epistemic) and image-based (aleatoric) uncertainties. In this work, we analyze these different types of uncertainties for CNN-based 2D and 3D medical image segmentation tasks. We additionally propose a test-time augmentation-based aleatoric uncertainty to analyze the effect of different transformations of the input image on the segmentation output. Test-time augmentation has previously been used to improve segmentation accuracy, but has not been formulated in a consistent mathematical framework. Hence, we also propose a theoretical formulation of test-time augmentation, where a distribution of the prediction is estimated by Monte Carlo simulation with prior distributions of parameters in an image acquisition model that involves image transformations and noise. We compare and combine our proposed aleatoric uncertainty with model uncertainty. Experiments with segmentation of fetal brains and brain tumors from 2D and 3D Magnetic Resonance Images (MRI) showed that 1) the test-time augmentation-based aleatoric uncertainty provides a better uncertainty estimation than calculating the test-time dropout-based model uncertainty alone and helps to reduce overconfident incorrect predictions, and 2) our test-time augmentation outperforms a single-prediction baseline and dropout-based multiple predictions.
Comment: 13 pages, 8 figures, accepted by Neurocomputing
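The Monte Carlo estimate described above can be sketched as follows. This minimal version uses only random flips as the transformation family, whereas the paper's image acquisition model covers richer transformations plus noise; function and parameter names are illustrative, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def tta_uncertainty(predict, image, n_samples=20):
    """Aleatoric uncertainty via test-time augmentation (sketch).

    Applies a random transformation (here: axis flips), runs the
    segmentation model, inverts the transformation so all predictions
    are aligned, and returns the Monte Carlo mean and per-pixel
    variance over the samples.
    """
    preds = []
    for _ in range(n_samples):
        flip = rng.integers(0, 2, size=2).astype(bool)  # flip each axis?
        aug = image
        if flip[0]:
            aug = aug[::-1, :]
        if flip[1]:
            aug = aug[:, ::-1]
        p = predict(aug)
        # Invert the transformation to map the prediction back.
        if flip[1]:
            p = p[:, ::-1]
        if flip[0]:
            p = p[::-1, :]
        preds.append(p)
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.var(axis=0)
```

High per-pixel variance flags regions where the segmentation is sensitive to the input transformation, i.e., the aleatoric uncertainty the paper analyzes.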
Low Budget Active Learning via Wasserstein Distance: An Integer Programming Approach
Given restrictions on the availability of data, active learning is the process of training a model with limited labeled data by selecting a core subset of an unlabeled data pool to label. Although selecting the most useful points for training is an optimization problem, the scale of deep learning data sets forces most selection strategies to employ efficient heuristics. Instead, we propose a new integer optimization problem for selecting a core set that minimizes the discrete Wasserstein distance from the unlabeled pool. We demonstrate that this problem can be tractably solved with a Generalized Benders Decomposition algorithm. Our strategy requires high-quality latent features which we obtain by unsupervised learning on the unlabeled pool. Numerical results on several data sets show that our optimization approach is competitive with baselines and particularly outperforms them in the low budget regime where less than one percent of the data set is labeled.
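The paper solves this selection problem exactly as an integer program via Generalized Benders Decomposition; as a rough stand-in (not the authors' algorithm), a greedy heuristic for the same core-set objective over latent features might look like this:

```python
import numpy as np

def greedy_coreset(features, budget):
    """Greedy surrogate for the core-set integer program: pick points
    minimizing the mean distance from each pool point to its nearest
    selected point, which is the discrete Wasserstein objective when
    the core-set weights are chosen optimally.
    """
    n = len(features)
    # Pairwise Euclidean distances between latent features.
    dists = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
    selected = []
    nearest = np.full(n, np.inf)  # distance to nearest selected point
    for _ in range(budget):
        # Objective value if each candidate were added next.
        gains = np.minimum(nearest[None, :], dists).mean(axis=1)
        best = int(np.argmin(gains))
        selected.append(best)
        nearest = np.minimum(nearest, dists[best])
    return selected
```

Unlike the exact Benders approach, this greedy pass offers no optimality guarantee, but it illustrates the objective: the selected core set should cover the unlabeled pool in feature space.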
Constrained-CNN losses for weakly supervised segmentation
The final publication is available at Elsevier via https://doi.org/10.1016/j.media.2019.02.009. © 2019. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
Weakly-supervised learning based on, e.g., partially labelled images or image-tags, is currently attracting significant attention in CNN segmentation as it can mitigate the need for full and laborious pixel/voxel annotations. Enforcing high-order (global) inequality constraints on the network output (for instance, to constrain the size of the target region) can leverage unlabeled data, guiding the training process with domain-specific knowledge. Inequality constraints are very flexible because they do not assume exact prior knowledge. However, constrained Lagrangian dual optimization has been largely avoided in deep networks, mainly for computational tractability reasons. To the best of our knowledge, the method of Pathak et al. (2015a) is the only prior work that addresses deep CNNs with linear constraints in weakly supervised segmentation. It uses the constraints to synthesize fully-labeled training masks (proposals) from weak labels, mimicking full supervision and facilitating dual optimization.
We propose to introduce a differentiable penalty, which enforces inequality constraints directly in the loss function, avoiding expensive Lagrangian dual iterates and proposal generation. From a constrained-optimization perspective, our simple penalty-based approach is not optimal, as there is no guarantee that the constraints are satisfied. However, surprisingly, it yields substantially better results than the Lagrangian-based constrained CNNs in Pathak et al. (2015a), while reducing the computational demand for training. By annotating only a small fraction of the pixels, the proposed approach can reach a level of segmentation performance that is comparable to full supervision on three separate tasks. While our experiments focused on basic linear constraints such as the target-region size and image tags, our framework can be easily extended to other non-linear constraints, e.g., invariant shape moments (Klodt and Cremers, 2011) and other region statistics (Lim et al., 2014). Therefore, it has the potential to close the gap between weakly and fully supervised learning in semantic medical image segmentation. Our code is publicly available.
This work is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), discovery grant program, and by the ETS Research Chair on Artificial Intelligence in Medical Imaging
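A minimal sketch of such a differentiable penalty on the target-region size follows (assuming a quadratic penalty on the soft region size; names and the exact penalty form are illustrative, not the paper's released code):

```python
import numpy as np

def size_penalty(probs, a, b):
    """Differentiable penalty enforcing a <= |S| <= b on the
    predicted foreground size, added directly to the training loss.

    probs: per-pixel foreground probabilities (network softmax output)
    a, b:  prior lower/upper bounds on the target-region size, in pixels
    """
    size = probs.sum()  # soft size of the predicted region
    # Quadratic penalty: zero whenever the size constraint holds,
    # grows smoothly as the prediction violates either bound.
    return max(size - b, 0.0) ** 2 + max(a - size, 0.0) ** 2
```

Because the penalty is a smooth function of the network output, it can be minimized with standard gradient descent, which is exactly what lets the method avoid Lagrangian dual iterates and proposal generation.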