Bayesian Spatial Binary Regression for Label Fusion in Structural Neuroimaging
Many analyses of neuroimaging data involve studying one or more regions of
interest (ROIs) in a brain image. In order to do so, each ROI must first be
identified. Since every brain is unique, the location, size, and shape of each
ROI varies across subjects. Thus, each ROI in a brain image must either be
manually identified or (semi-) automatically delineated, a task referred to as
segmentation. Automatic segmentation often involves mapping a previously
manually segmented image to a new brain image and propagating the labels to
obtain an estimate of where each ROI is located in the new image. A more recent
approach to this problem is to propagate labels from multiple manually
segmented atlases and combine the results using a process known as label
fusion. To date, most label fusion algorithms either employ voting procedures
or impose prior structure and subsequently find the maximum a posteriori
estimator (i.e., the posterior mode) through optimization. We propose using a
fully Bayesian spatial regression model for label fusion that facilitates
direct incorporation of covariate information while making accessible the
entire posterior distribution. We discuss the implementation of our model via
Markov chain Monte Carlo and illustrate the procedure through both simulation
and application to segmentation of the hippocampus, an anatomical structure
known to be associated with Alzheimer's disease.
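For contrast with the fully Bayesian model proposed above, the voting procedures it is compared against can be sketched as a per-voxel majority vote over propagated atlas labels. A minimal illustration (function name and flattened 1-D layout are ours, not the paper's):

```python
from collections import Counter

def majority_vote_fusion(atlas_labels):
    """Fuse propagated labels from multiple atlases by per-voxel majority vote.

    atlas_labels: list of equal-length label lists, one per registered atlas
    (flattened to 1-D for simplicity). Ties are broken by the smallest label.
    """
    fused = []
    for votes in zip(*atlas_labels):
        counts = Counter(votes)
        best = max(counts.items(), key=lambda kv: (kv[1], -kv[0]))[0]
        fused.append(best)
    return fused

# Three atlases propose labels for four voxels (1 = hippocampus, 0 = background).
atlases = [
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 0],
]
print(majority_vote_fusion(atlases))  # [1, 1, 1, 0]
```

Unlike the Bayesian spatial regression model, this vote yields only a point estimate per voxel, with no posterior distribution or covariate information.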
Objective Evaluation of Multiple Sclerosis Lesion Segmentation using a Data Management and Processing Infrastructure
We present a study of multiple sclerosis (MS) lesion segmentation algorithms conducted at the international MICCAI 2016 challenge. The challenge was operated on a new open-science computing infrastructure, which allowed a large range of algorithms to be evaluated independently, fairly, and fully automatically. This infrastructure was used to evaluate thirteen MS lesion segmentation methods, spanning a broad range of state-of-the-art algorithms, against a high-quality database of 53 MS cases from four centers that followed a common acquisition protocol. Each case was annotated manually by an unprecedented number of seven different experts. The results of the challenge highlighted that automatic algorithms, including recent machine learning methods (random forests, deep learning, …), still trail human expertise on both detection and delineation criteria. In addition, we demonstrate that a statistically robust consensus of the algorithms performs closer to human expertise on the segmentation score, although it still trails on detection scores.
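The delineation criterion in such challenges is typically a Dice overlap between each algorithm's mask and the expert consensus. A minimal sketch of a per-case score (illustrative only; not the challenge's actual evaluation code):

```python
def dice_score(pred, truth):
    """Dice overlap between two binary masks, flattened to 1-D lists.

    Returns 2|P ∩ T| / (|P| + |T|); defined as 1.0 when both masks are empty.
    """
    assert len(pred) == len(truth)
    intersection = sum(p and t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 1.0 if size == 0 else 2.0 * intersection / size

pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 1, 1]
print(round(dice_score(pred, truth), 3))  # 2*2 / (3+3) -> 0.667
```

Detection scores, by contrast, are computed per lesion (e.g., lesion-wise true positives), which is why an algorithm can do well on one criterion while trailing on the other.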
Disentangling Human Error from the Ground Truth in Segmentation of Medical Images
Recent years have seen increasing use of supervised learning methods for
segmentation tasks. However, the predictive performance of these algorithms
depends on the quality of labels. This problem is particularly pertinent in the
medical image domain, where both the annotation cost and inter-observer
variability are high. In a typical label acquisition process, different human
experts provide their estimates of the 'true' segmentation labels under the
influence of their own biases and competence levels. Treating these noisy
labels blindly as the ground truth limits the performance that automatic
segmentation algorithms can achieve. In this work, we present a method for
jointly learning, from purely noisy observations alone, the reliability of
individual annotators and the true segmentation label distributions, using two
coupled CNNs. The separation of the two is achieved by encouraging the
estimated annotators to be maximally unreliable while achieving high fidelity
with the noisy training data. We first define a toy segmentation dataset based
on MNIST and study the properties of the proposed algorithm. We then
demonstrate the utility of the method on three public medical imaging
segmentation datasets with simulated (when necessary) and real diverse
annotations: 1) MSLSC (multiple-sclerosis lesions); 2) BraTS (brain tumours);
3) LIDC-IDRI (lung abnormalities). In all cases, our method outperforms
competing methods and relevant baselines particularly in cases where the number
of annotations is small and the amount of disagreement is large. The
experiments also show strong ability to capture the complex spatial
characteristics of annotators' mistakes.
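The generative picture behind the coupled-CNN model above is that each annotator's observed label distribution arises from the true label distribution passed through that annotator's confusion matrix. A minimal per-pixel sketch (function name and the two-class example are ours, not the paper's implementation):

```python
def apply_confusion(p_true, cm):
    """Noisy-annotator model for one pixel: observed distribution = CM^T @ p_true.

    p_true: class probabilities of the (estimated) true segmentation label.
    cm: row-stochastic confusion matrix, cm[i][j] = P(annotator says j | true class i).
    """
    n = len(p_true)
    return [sum(p_true[i] * cm[i][j] for i in range(n)) for j in range(n)]

# An over-segmenting annotator: calls background "lesion" 20% of the time.
p_true = [0.9, 0.1]          # [background, lesion]
cm = [[0.8, 0.2],            # true background -> observed label
      [0.0, 1.0]]            # true lesion     -> observed label
observed = apply_confusion(p_true, cm)
print([round(x, 2) for x in observed])  # [0.72, 0.28]
```

In the paper, one CNN estimates `p_true` per pixel and the other estimates each annotator's (spatially varying) confusion matrix; the regularizer pushing annotators toward maximal unreliability is what makes the factorization identifiable.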
Multi-Atlas Segmentation of Biomedical Images: A Survey
Multi-atlas segmentation (MAS), first introduced and popularized by the pioneering work of Rohlfing
SoftSeg: Advantages of soft versus binary training for image segmentation
Most image segmentation algorithms are trained on binary masks formulated as
a classification task per pixel. However, in applications such as medical
imaging, this "black-and-white" approach is too constraining because the
contrast between two tissues is often ill-defined, i.e., the voxels located on
objects' edges contain a mixture of tissues. Consequently, assigning a single
"hard" label can result in a detrimental approximation. Instead, a soft
prediction containing non-binary values would overcome that limitation. We
introduce SoftSeg, a deep learning training approach that takes advantage of
soft ground truth labels, and is not bound to binary predictions. SoftSeg aims
at solving a regression instead of a classification problem. This is achieved
by using (i) no binarization after preprocessing and data augmentation, (ii) a
normalized ReLU final activation layer (instead of sigmoid), and (iii) a
regression loss function (instead of the traditional Dice loss). We assess the
impact of these three features on three open-source MRI segmentation datasets
from the spinal cord gray matter, the multiple sclerosis brain lesion, and the
multimodal brain tumor segmentation challenges. Across multiple
cross-validation iterations, SoftSeg outperformed the conventional approach,
leading to an increase in Dice score of 2.0% on the gray matter dataset
(p=0.001), 3.3% for the MS lesions, and 6.5% for the brain tumors. SoftSeg
produces consistent soft predictions at tissues' interfaces and shows an
increased sensitivity for small objects. The richness of soft labels could
represent the inter-expert variability, the partial volume effect, and
complement the model uncertainty estimation. The developed training pipeline
can easily be incorporated into most of the existing deep learning
architectures. It is already implemented in the freely-available deep learning
toolbox ivadomed (https://ivadomed.org).
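The three ingredients above can be condensed into a few lines: skip binarization of the ground truth, rectify and rescale the final activation to [0, 1] instead of applying a sigmoid, and train with a regression loss. A simplified scalar sketch (not the actual ivadomed implementation):

```python
def normalized_relu(logits):
    """SoftSeg-style final activation: ReLU, then divide by the per-image
    maximum so predictions land in [0, 1] without a sigmoid's squashing."""
    rect = [max(0.0, x) for x in logits]
    peak = max(rect)
    return rect if peak == 0 else [r / peak for r in rect]

def mse_loss(pred, soft_truth):
    """Regression loss against soft (non-binarized) ground-truth labels,
    in place of the traditional Dice loss on hard masks."""
    return sum((p - t) ** 2 for p, t in zip(pred, soft_truth)) / len(pred)

logits     = [-1.0, 0.5, 2.0, 1.0]
soft_truth = [0.0, 0.2, 1.0, 0.5]   # edge voxels keep fractional (partial-volume) values
pred = normalized_relu(logits)       # [0.0, 0.25, 1.0, 0.5]
print(mse_loss(pred, soft_truth))
```

Because the target values are fractional at tissue interfaces, the regression objective rewards predictions that preserve partial-volume information rather than forcing a hard 0/1 decision per voxel.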
A Pipeline for Automated Assessment of Cell Location in 3D Mouse Brain Image Sets
Mapping the neuronal connectivity of the mouse brain has long been hampered by the laborious and time-consuming process of slicing, staining and imaging the brain tissue. Recent developments in automated 3D fluorescence microscopy, such as serial two-photon tomography (STP) and light sheet fluorescence microscopy, now allow for automated rapid 3D imaging of a complete mouse brain at cellular resolution. In combination with transsynaptic viral tracers, this paves the way for high-throughput brain mapping studies, which could greatly advance our understanding of the function of the brain. Because transsynaptic tracers label synaptically connected cells, the analysis of these whole-brain scans requires detection of fluorescently labelled cells and anatomical segmentation of the data, which are very labour- and time-intensive manual tasks and prevent high-throughput analysis. This thesis presents and validates two software tools to automate anatomical segmentation and cell detection in serial two-photon (STP) scans of the mouse brain. Automated mouse atlas propagation (aMAP) segments the scans into anatomical regions by matching a 3D reference atlas to the data using affine and free-form image registration. The fast automated cell counting tool (FACCT) then detects fluorescently labelled cells in the dataset with a novel approach of stepwise data reduction combined with object detection using artificial neural networks. The tools are optimised for large datasets and are capable of processing a 2.5TB STP scan in under two days. The performance of aMAP and FACCT is evaluated on STP scans from retrograde connectivity tracing experiments using rabies virus injections in the primary visual cortex.
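The first stage of a stepwise data-reduction detector like FACCT can be pictured as thresholding the volume and collapsing above-threshold voxels into candidate cell positions, leaving only those candidates for the neural-network stage. A 2-D toy sketch (function name, connectivity choice, and centroid output are our illustrative assumptions, not the tool's actual pipeline):

```python
from collections import deque

def detect_candidates(image, threshold):
    """Reduce a raw image to candidate cell positions: threshold, then group
    above-threshold pixels into 4-connected components and return centroids."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and not seen[y][x]:
                queue, pixels = deque([(y, x)]), []
                seen[y][x] = True
                while queue:  # breadth-first flood fill of one component
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           image[ny][nx] >= threshold and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                mean_y = sum(p[0] for p in pixels) / len(pixels)
                mean_x = sum(p[1] for p in pixels) / len(pixels)
                centroids.append((mean_y, mean_x))
    return centroids

image = [
    [0, 9, 9, 0],
    [0, 9, 0, 0],
    [0, 0, 0, 8],
]
print(detect_candidates(image, threshold=5))  # two candidate cells
```

Handing only these candidate positions (rather than every voxel of a multi-terabyte scan) to the classification network is what makes the two-day turnaround on a 2.5TB volume plausible.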