Error Corrective Boosting for Learning Fully Convolutional Networks with Limited Data
Training deep fully convolutional neural networks (F-CNNs) for semantic image
segmentation requires access to abundant labeled data. While large datasets of
unlabeled image data are available in medical applications, access to manually
labeled data is very limited. We propose to automatically create auxiliary
labels on initially unlabeled data with existing tools and to use them for
pre-training. For the subsequent fine-tuning of the network with manually
labeled data, we introduce error corrective boosting (ECB), which emphasizes
parameter updates on classes with lower accuracy. Furthermore, we introduce
SkipDeconv-Net (SD-Net), a new F-CNN architecture for brain segmentation that
combines skip connections with the unpooling strategy for upsampling. The
SD-Net addresses challenges of severe class imbalance and errors along
boundaries. With application to whole-brain MRI T1 scan segmentation, we
generate auxiliary labels on a large dataset with FreeSurfer and fine-tune on
two datasets with manual annotations. Our results show that the inclusion of
auxiliary labels and ECB yields significant improvements. SD-Net segments a 3D
scan in 7 seconds, compared with 30 hours for the closest multi-atlas
segmentation method, while reaching similar performance. It also outperforms
the latest state-of-the-art F-CNN models.
Comment: Accepted at MICCAI 201
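The ECB idea of emphasizing parameter updates on poorly segmented classes can be sketched as a class-weighted loss. The weighting below is a hypothetical illustration, not the paper's exact formula: it simply makes a class's loss weight grow as its validation accuracy falls.

```python
import numpy as np

def ecb_class_weights(per_class_accuracy, eps=1e-6):
    """Sketch of error corrective boosting (ECB) weighting: classes with
    lower validation accuracy receive larger loss weights, so fine-tuning
    emphasizes poorly segmented structures. Illustrative formula only."""
    acc = np.asarray(per_class_accuracy, dtype=float)
    # Weight grows as accuracy falls; normalized so weights average to 1.
    w = 1.0 - acc + eps
    return w * len(w) / w.sum()

# Example: background is easy, small subcortical structures are harder.
weights = ecb_class_weights([0.99, 0.80, 0.60])
```

These weights would then scale the per-class terms of the cross-entropy during fine-tuning, so gradients from the worst-performing classes dominate the update.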
Learning to segment when experts disagree
Recent years have seen an increasing use of supervised learning methods for segmentation tasks. However, the predictive performance of these algorithms depends on the quality of labels, especially in the medical imaging domain, where both the annotation cost and the inter-observer variability are high. In a typical annotation collection process, different clinical experts provide their estimates of the “true” segmentation labels under the influence of their levels of expertise and biases. Treating these noisy labels blindly as the ground truth can adversely affect the performance of supervised segmentation models. In this work, we present a neural network architecture for jointly learning, from noisy observations alone, both the reliability of individual annotators and the true segmentation label distributions. The separation of the annotators' characteristics from the true segmentation label is achieved by encouraging the estimated annotators to be maximally unreliable while achieving high fidelity with the training data. Our method can also be viewed as a translation of STAPLE, an established label aggregation framework proposed in Warfield et al. [1], to the supervised learning paradigm. We demonstrate our approach first on a generic segmentation task using MNIST data and then adapt it for lesion labelling on MRI scans of multiple sclerosis (MS) patients. Our method shows considerable improvement over the relevant baselines on both datasets in terms of segmentation accuracy and estimation of annotator reliability, particularly when only a single label is available per image. An open-source implementation of our approach can be found at https://github.com/UCLBrain/MSLS
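A minimal sketch of the joint model described above, assuming a per-pixel class distribution and one confusion matrix per annotator. Names and the trace-penalty weight are illustrative, not the authors' implementation: the observed label distribution is the confusion matrix applied to the estimated true distribution, and a trace penalty discourages the confusion matrices from collapsing to the identity.

```python
import numpy as np

def noisy_label_dist(conf_matrix, true_dist):
    """An annotator's observed label distribution is modelled as their
    confusion matrix applied to the estimated true label distribution."""
    return conf_matrix @ true_dist

def loss(conf_matrices, true_dist, observed, trace_weight=0.1):
    """Data fidelity (cross-entropy to each annotator's observed label)
    plus a trace penalty on the confusion matrices, which pushes the
    estimated annotators toward maximal unreliability and thereby
    disentangles annotator noise from the true label. Sketch only."""
    ce = 0.0
    for A, y in zip(conf_matrices, observed):
        q = noisy_label_dist(A, true_dist)
        ce -= np.log(q[y] + 1e-12)
    tr = sum(np.trace(A) for A in conf_matrices)
    return ce + trace_weight * tr
```

In the full model both the confusion matrices and the true distribution are network outputs optimized jointly over all pixels; this fragment only shows the shape of the objective.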
Robust Fusion of Probability Maps
The fusion of probability maps is required when analysing a collection of image labels or probability maps produced by several segmentation algorithms or human raters. The challenge is to properly weight the combination of maps so as to reflect the agreement among raters, the presence of outliers, and the spatial uncertainty in the consensus. In this paper, we address several shortcomings of prior work in continuous label fusion. We introduce a novel approach to jointly estimate a reliable consensus map and assess the presence of outliers and the confidence in each rater. Our probabilistic model is based on Student's t-distributions, allowing local estimates of raters' performances. The introduction of bias and spatial priors leads to proper rater bias estimates and a control over the smoothness of the consensus map. Image intensity information is incorporated by geodesic distance transform for binary masks. Finally, we propose an approach to cluster raters based on variational boosting, thus potentially producing several alternative consensus maps. Our approach was successfully tested on the MICCAI 2016 MS lesions dataset, on MR prostate delineations, and on deep learning based segmentation predictions of lung nodules from the LIDC dataset.
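A toy illustration of the robust-fusion idea, assuming probability maps flattened to vectors. This iteratively reweighted average with Student's-t weights is a stand-in for the paper's variational model, not the authors' method: heavy-tailed weights automatically down-weight raters whose maps deviate from the current consensus.

```python
import numpy as np

def robust_fuse(maps, nu=3.0, n_iter=10):
    """Toy robust consensus: Student's-t weights down-weight outlier
    raters. `maps` has shape (raters, voxels); returns a consensus map."""
    maps = np.asarray(maps, dtype=float)
    consensus = maps.mean(axis=0)
    for _ in range(n_iter):
        resid = maps - consensus
        sigma2 = max(resid.var(), 1e-8)
        # Per-rater t-weights: large residuals yield low weight.
        w = (nu + 1.0) / (nu + (resid ** 2).mean(axis=1) / sigma2)
        consensus = (w[:, None] * maps).sum(axis=0) / w.sum()
    return consensus
```

With three agreeing raters and one outlier, the consensus moves toward the majority rather than the plain mean, which is the qualitative behaviour the t-distribution model provides (the paper additionally estimates spatially varying performances, biases, and smoothness priors).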
Pulse Sequence Resilient Fast Brain Segmentation
Accurate automatic segmentation of brain anatomy from T1-weighted (T1-w)
magnetic resonance images (MRI) has been a
computationally intensive bottleneck in neuroimaging pipelines, with
state-of-the-art results obtained by unsupervised intensity modeling-based
methods and multi-atlas registration and label fusion. With the advent of
powerful supervised convolutional neural network (CNN)-based learning
algorithms, it is now possible to produce a high quality brain segmentation
within seconds. However, the very supervised nature of these methods makes it
difficult to generalize them on data different from what they have been trained
on. Modern neuroimaging studies are necessarily multi-center initiatives with a
wide variety of acquisition protocols. Despite stringent protocol harmonization
practices, it is not possible to standardize the whole gamut of MR imaging
parameters across scanners, field strengths, receive coils, etc., that affect
image contrast. In this paper we propose a CNN-based segmentation algorithm
that, in addition to being highly accurate and fast, is also resilient to
variation in the input T1-w acquisition. Our approach relies on building
approximate forward models of T1-w pulse sequences that produce a typical
test image. We use the forward models to augment the training data with test
data specific training examples. These augmented data can be used to update
and/or build a more robust segmentation model that is more attuned to the test
data imaging properties. Our method generates highly accurate, state-of-the-art
segmentation results (overall Dice overlap = 0.94) within seconds, and is
consistent across a wide range of protocols.
Comment: Accepted at MICCAI 201
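The augmentation idea above can be illustrated with a much simpler stand-in: instead of the paper's approximate MR forward models, the sketch below perturbs image contrast with a random gamma curve, so the segmentation network sees a wider range of acquisition-like intensity mappings. Function name and parameter range are hypothetical.

```python
import numpy as np

def contrast_augment(image, rng, gamma_range=(0.7, 1.4)):
    """Crude stand-in for pulse-sequence forward models: normalize the
    image to [0, 1], then apply a random gamma curve to mimic the
    contrast differences produced by different acquisitions."""
    img = np.clip(np.asarray(image, dtype=float), 0.0, None)
    img = img / (img.max() + 1e-8)  # normalize to [0, 1]
    gamma = rng.uniform(*gamma_range)
    return img ** gamma

rng = np.random.default_rng(0)
augmented = contrast_augment(np.arange(16.0).reshape(4, 4), rng)
```

The paper's forward models are physics-based and matched to a specific test acquisition; random contrast perturbation only captures the general principle of training-time contrast diversity.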
Optic disc classification by the Heidelberg Retina Tomograph and by physicians with varying experience of glaucoma
Purpose: To compare the diagnostic accuracy of the Heidelberg Retina Tomograph's (HRT) Moorfields regression analysis (MRA) and glaucoma probability score (GPS) with that of subjective grading of optic disc photographs performed by ophthalmologists with varying experience of glaucoma and by ophthalmology residents.
Methods: Digitized disc photographs and HRT images from 97 glaucoma patients with visual field defects and 138 healthy individuals were classified as either within normal limits (WNL), borderline (BL), or outside normal limits (ONL). Sensitivity and specificity were compared for MRA, GPS, and the physicians. Analyses were also made according to disc size and for advanced visual field loss.
Results: Forty-five physicians participated. When BL results were regarded as normal, sensitivity was significantly higher (P<5%) for both MRA and GPS than for the average physician: 87%, 79%, and 62%, respectively. Specificity ranged from 86% for MRA to 97% for general ophthalmologists, but the differences were not significant. In eyes with small discs, sensitivity was 75% for MRA, 60% for the average doctor, and 25% for GPS; in eyes with large discs, sensitivity was 100% for both GPS and MRA, but only 68% for physicians.
Conclusion: Our results suggest that the sensitivity of MRA is superior to that of the average physician, but not to that of glaucoma experts. MRA correctly classified all eyes with advanced glaucoma and showed the best sensitivity in eyes with small optic discs.
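The sensitivity and specificity figures above depend on how borderline (BL) gradings are counted. A minimal sketch of that computation (hypothetical helper, not the study's code):

```python
def sens_spec(grades, truth, borderline_as_normal=True):
    """Grades are 'WNL', 'BL', or 'ONL'; truth is True for glaucoma.
    With borderline counted as normal, only 'ONL' is a positive call."""
    positive = {'ONL'} if borderline_as_normal else {'ONL', 'BL'}
    tp = sum(g in positive and t for g, t in zip(grades, truth))
    fn = sum(g not in positive and t for g, t in zip(grades, truth))
    tn = sum(g not in positive and not t for g, t in zip(grades, truth))
    fp = sum(g in positive and not t for g, t in zip(grades, truth))
    return tp / (tp + fn), tn / (tn + fp)
```

Counting BL as positive instead would raise sensitivity at the cost of specificity, which is why the comparison fixes the convention before contrasting MRA, GPS, and the physicians.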
Measurement and Interpretation of Fermion-Pair Production at LEP energies above the Z Resonance
This paper presents DELPHI measurements and interpretations of
cross-sections, forward-backward asymmetries, and angular distributions, for
the e+e- -> ffbar process for centre-of-mass energies above the Z resonance,
from sqrt(s) ~ 130 - 207 GeV at the LEP collider. The measurements are
consistent with the predictions of the Standard Model and are used to study a
variety of models including the S-Matrix ansatz for e+e- -> ffbar scattering
and several models which include physics beyond the Standard Model: the
exchange of Z' bosons, contact interactions between fermions, the exchange of
gravitons in large extra dimensions and the exchange of sneutrino in R-parity
violating supersymmetry.
Comment: 79 pages, 16 figures, Accepted by Eur. Phys. J.
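The forward-backward asymmetries measured above have the standard definition: for a final-state fermion scattered at polar angle θ with respect to the incoming electron, with σ_F and σ_B the cross-sections into the forward and backward hemispheres,

```latex
A_{FB} \;=\; \frac{\sigma_F - \sigma_B}{\sigma_F + \sigma_B},
\qquad
\sigma_F = \int_{0}^{1} \frac{d\sigma}{d\cos\theta}\, d\cos\theta,
\quad
\sigma_B = \int_{-1}^{0} \frac{d\sigma}{d\cos\theta}\, d\cos\theta .
```

Deviations of A_FB from the Standard Model prediction are what constrain the Z', contact-interaction, and extra-dimension models listed in the abstract.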
A Determination of the Centre-of-Mass Energy at LEP2 using Radiative 2-fermion Events
Using e+e- -> mu+mu-(gamma) and e+e- -> qqbar(gamma) events radiative to the
Z pole, DELPHI has determined the centre-of-mass energy, sqrt{s}, using energy
and momentum constraint methods. The results are expressed as deviations from
the nominal LEP centre-of-mass energy, measured using other techniques. The
results are found to be compatible with the LEP Energy Working Group estimates
for a combination of the 1997 to 2000 data sets.
Comment: 20 pages, 6 figures, Accepted by Eur. Phys. J.
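The energy and momentum constraints exploit initial-state-radiation kinematics. Neglecting fermion masses, a single ISR photon of energy E_γ reduces the effective centre-of-mass energy of the fermion pair; this standard relation (our gloss, not the paper's notation) is

```latex
s' \;=\; s - 2\sqrt{s}\,E_{\gamma}
\quad\Longrightarrow\quad
E_{\gamma} \;=\; \frac{s - s'}{2\sqrt{s}} .
```

For events radiative to the Z pole, setting √s' ≈ m_Z ties the photon energy (inferred from the fermion angles and momenta) to the beam energy, which is how √s can be cross-checked against the nominal LEP value.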
A Measurement of the Tau Hadronic Branching Ratios
The exclusive and semi-exclusive branching ratios of the tau lepton hadronic
decay modes (h- v_t, h- pi0 v_t, h- pi0 pi0 v_t, h- \geq 2pi0 v_t, h- \geq 3pi0
v_t, 2h- h+ v_t, 2h- h+ pi0 v_t, 2h- h+ \geq 2pi0 v_t, 3h- 2h+ v_t and 3h- 2h+
\geq 1pi0 v_t) were measured with data from the DELPHI detector at LEP.
Comment: 53 pages, 18 figures, Accepted by Eur. Phys. J.