Automatic Brain Tumor Segmentation using Cascaded Anisotropic Convolutional Neural Networks
A cascade of fully convolutional neural networks is proposed to segment
multi-modal Magnetic Resonance (MR) images with brain tumors into background and
three hierarchical regions: whole tumor, tumor core and enhancing tumor core.
The cascade is designed to decompose the multi-class segmentation problem into
a sequence of three binary segmentation problems according to the subregion
hierarchy. The whole tumor is segmented in the first step and the bounding box
of the result is used for the tumor core segmentation in the second step. The
enhancing tumor core is then segmented based on the bounding box of the tumor
core segmentation result. Our networks consist of multiple layers of
anisotropic and dilated convolution filters, and they are combined with
multi-view fusion to reduce false positives. Residual connections and
multi-scale predictions are employed in these networks to boost the
segmentation performance. Experiments with the BraTS 2017 validation set show that
the proposed method achieved average Dice scores of 0.7859, 0.9050, and 0.8378 for
enhancing tumor core, whole tumor, and tumor core, respectively. The
corresponding values for the BraTS 2017 testing set were 0.7831, 0.8739, and
0.7748, respectively.
Comment: 12 pages, 5 figures. MICCAI BraTS Challenge 2017.
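To make the cascade concrete, the following Python sketch chains the three binary
segmentations with bounding-box cropping at inference time. It is a minimal
illustration, not the authors' implementation: whole_net, core_net, and
enhancing_net are hypothetical stand-ins for the trained anisotropic CNNs, and
the margin value is an assumption.

    import numpy as np

    def bounding_box(mask, margin=5):
        """Slices for the tight bounding box of a binary mask, padded by a margin."""
        coords = np.argwhere(mask)
        lo = np.maximum(coords.min(axis=0) - margin, 0)
        hi = np.minimum(coords.max(axis=0) + margin + 1, mask.shape)
        return tuple(slice(int(a), int(b)) for a, b in zip(lo, hi))

    def cascaded_segmentation(volume, whole_net, core_net, enhancing_net):
        """volume: (C, D, H, W) multi-modal MR image.
        Returns a label map: 1 = whole tumor, 2 = tumor core, 3 = enhancing core."""
        labels = np.zeros(volume.shape[1:], dtype=np.uint8)
        whole = whole_net(volume) > 0.5                 # step 1: whole tumor
        if not whole.any():
            return labels
        labels[whole] = 1
        box = bounding_box(whole)                       # step 2: tumor core inside the whole-tumor box
        core = core_net(volume[(slice(None),) + box]) > 0.5
        labels[box][core] = 2
        core_full = np.zeros_like(whole)
        core_full[box] = core
        if core_full.any():                             # step 3: enhancing core inside the core box
            box2 = bounding_box(core_full)
            enh = enhancing_net(volume[(slice(None),) + box2]) > 0.5
            labels[box2][enh] = 3
        return labels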
Generalized Wasserstein Dice Score, Distributionally Robust Deep Learning, and Ranger for brain tumor segmentation: BraTS 2020 challenge
Training a deep neural network is an optimization problem with four main
ingredients: the design of the deep neural network, the per-sample loss
function, the population loss function, and the optimizer. However, methods
developed to compete in recent BraTS challenges tend to focus only on the
design of deep neural network architectures, while paying less attention to the
three other aspects. In this paper, we experimented with adopting the opposite
approach. We stuck to a generic and state-of-the-art 3D U-Net architecture and
experimented with a non-standard per-sample loss function, the generalized
Wasserstein Dice loss, a non-standard population loss function, corresponding
to distributionally robust optimization, and a non-standard optimizer, Ranger.
Those variations were selected specifically for the problem of multi-class
brain tumor segmentation. The generalized Wasserstein Dice loss is a per-sample
loss function that allows taking advantage of the hierarchical structure of the
tumor regions labeled in BraTS. Distributionally robust optimization is a
generalization of empirical risk minimization that accounts for the presence of
underrepresented subdomains in the training dataset. Ranger is a generalization
of the widely used Adam optimizer that is more stable with small batch size and
noisy labels. We found that each of those variations of the optimization of
deep neural networks for brain tumor segmentation leads to improvements in
terms of Dice scores and Hausdorff distances. With an ensemble of three deep
neural networks trained with various optimization procedures, we achieved
promising results on the validation dataset of the BraTS 2020 challenge. Our
ensemble ranked fourth out of the 693 registered teams for the segmentation
task of the BraTS 2020 challenge.
Comment: MICCAI 2020 BrainLes Workshop. v2: Added some clarifications following reviewers' feedback (camera-ready version).
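As an illustration of the per-sample loss idea, below is a simplified,
PyTorch-style sketch of a generalized-Wasserstein-Dice-style loss. It assumes a
label distance matrix M normalized to [0, 1] with the background class at
distance 1 from every tumor class; the paper's exact loss, the distributionally
robust population loss, and the Ranger optimizer are not reproduced here.

    import torch

    def wasserstein_dice_loss(probs, target, M, background=0, eps=1e-6):
        """probs:  (N, C, ...) softmax probabilities
        target: (N, ...) integer labels
        M:      (C, C) label-to-label distance matrix with values in [0, 1]."""
        probs_flat = probs.flatten(2)                   # (N, C, V)
        target_flat = target.flatten(1)                 # (N, V)
        # Per-voxel Wasserstein distance to the crisp ground truth:
        # W_i = sum_c M[c, y_i] * p_i[c]
        M_true = M[:, target_flat]                      # (C, N, V)
        wass = (probs_flat * M_true.permute(1, 0, 2)).sum(dim=1)   # (N, V)
        foreground = (target_flat != background).float()
        tp = (foreground * (1.0 - wass)).sum(dim=1)     # generalized true-positive mass
        err = wass.sum(dim=1)                           # total Wasserstein error
        score = (2.0 * tp + eps) / (2.0 * tp + err + eps)
        return 1.0 - score.mean()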
Disease Progression Modeling and Prediction through Random Effect Gaussian Processes and Time Transformation
The development of statistical approaches for the joint modelling of the
temporal changes of imaging, biochemical, and clinical biomarkers is of
paramount importance for improving the understanding of neurodegenerative
disorders, and for providing a reference for the prediction and quantification
of the pathology in unseen individuals. Nonetheless, the use of disease
progression models for probabilistic predictions still requires investigation,
for example to account for missing observations in clinical data and to
quantify uncertainty accurately. We tackle this problem by proposing a
novel Gaussian process-based method for the joint modeling of imaging and
clinical biomarker progressions from time series of individual observations.
The model is formulated to account for individual random effects and time
reparameterization, allowing non-parametric estimates of the biomarker
evolution, as well as high flexibility in specifying correlation structures and
time transformation models. Thanks to the Bayesian formulation, the model
naturally accounts for missing data, and allows for uncertainty quantification
in the estimate of evolutions, as well as for probabilistic prediction of
disease staging in unseen patients. The experimental results show that the
proposed model provides a biologically plausible description of the evolution
of Alzheimer's pathology across the whole disease time-span as well as
remarkable predictive performance when tested on a large clinical cohort with
missing observations.
Comment: 13 pages, 2 figures.
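A toy sketch of the time-reparameterization idea is given below: a population
trajectory is modeled with a Gaussian process while a per-subject time shift (a
simple random effect) is re-estimated on a grid. The function name, kernel
choice, and grid range are assumptions; the paper's full Bayesian model is
considerably richer.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def fit_progression(ages, values, subjects, n_iter=10,
                        shift_grid=np.linspace(-10.0, 10.0, 81)):
        """Alternate between fitting a population GP and per-subject time shifts."""
        ages, values, subjects = map(np.asarray, (ages, values, subjects))
        shifts = {s: 0.0 for s in np.unique(subjects)}
        gp = GaussianProcessRegressor(kernel=RBF(5.0) + WhiteKernel(0.1),
                                      normalize_y=True)
        for _ in range(n_iter):
            # fit the population trajectory on time-shifted ages
            t = np.array([a + shifts[s] for a, s in zip(ages, subjects)])
            gp.fit(t[:, None], values)
            # re-estimate each subject's shift by grid search on the fit error
            for s in shifts:
                idx = subjects == s
                errs = [np.mean((gp.predict((ages[idx] + d)[:, None]) - values[idx]) ** 2)
                        for d in shift_grid]
                shifts[s] = float(shift_grid[int(np.argmin(errs))])
        return gp, shifts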
Part-to-whole Registration of Histology and MRI using Shape Elements
Image registration between histology and magnetic resonance imaging (MRI) is
a challenging task due to differences in structural content and contrast.
Specimens that are too thick or too wide cannot be processed all at once and
must be cut into smaller pieces. This dramatically increases the complexity of the problem,
since each piece should be individually and manually pre-aligned. To the best
of our knowledge, no automatic method can reliably locate such a piece of tissue
within its respective whole in the MRI slice, and align it without any prior
information. We propose here a novel automatic approach to the joint problem of
multimodal registration between histology and MRI, when only a fraction of
tissue is available from histology. The approach relies on the representation
of images using their level lines so as to reach contrast invariance. Shape
elements obtained via the extraction of bitangents are encoded in a
projective-invariant manner, which permits the identification of common pieces
of curves between two images. We evaluated the approach on human brain
histology and compared resulting alignments against manually annotated ground
truths. Considering the complexity of the brain folding patterns, preliminary
results are promising and suggest the use of characteristic and meaningful
shape elements for improved robustness and efficiency.
Comment: Paper accepted at ICCV Workshop (Bio-Image Computing).
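To make the contrast-invariance step concrete, the sketch below extracts level
lines (iso-intensity contours) at several grey levels with scikit-image. The
number of levels and the minimum contour length are assumptions, and the
bitangent-based, projective-invariant encoding and matching are not reproduced.

    import numpy as np
    from skimage import measure

    def extract_level_lines(image, n_levels=16, min_length=50):
        """Return iso-intensity contours of `image` at evenly spaced grey levels."""
        levels = np.linspace(image.min(), image.max(), n_levels + 2)[1:-1]
        lines = []
        for level in levels:
            for contour in measure.find_contours(image, level):
                if len(contour) >= min_length:
                    lines.append((level, contour))   # contour: (K, 2) array of (row, col)
        return lines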
Deep Homography Prediction for Endoscopic Camera Motion Imitation Learning
In this work, we investigate laparoscopic camera motion automation through
imitation learning from retrospective videos of laparoscopic interventions. A
novel method is introduced that learns to augment a surgeon's behavior in image
space through object motion invariant image registration via homographies.
Contrary to existing approaches, no geometric assumptions are made and no depth
information is necessary, enabling immediate translation to a robotic setup.
Deviating from the dominant approach in the literature, which consists of
following a surgical tool, we do not handcraft the objective, and no priors are
imposed on the surgical scene, allowing the method to discover unbiased
policies. In this new research field, significant improvements are demonstrated
over two baselines on the Cholec80 and HeiChole datasets, including a 47%
improvement over camera motion continuation. The method is further shown
to indeed predict camera motion correctly on the public motion classification
labels of the AutoLaparo dataset. All code is made accessible on GitHub.
Comment: Early accepted at MICCAI 202
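As a point of reference for the "camera motion as homography" formulation, the
sketch below estimates an inter-frame homography classically with ORB features
and RANSAC in OpenCV. The learned model in the paper predicts this
transformation directly instead, and the feature and matcher settings here are
assumptions.

    import cv2
    import numpy as np

    def estimate_homography(frame_prev, frame_next):
        """Estimate the 3x3 homography mapping frame_prev onto frame_next."""
        if frame_prev.ndim == 3:                     # ORB expects single-channel images
            frame_prev = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY)
            frame_next = cv2.cvtColor(frame_next, cv2.COLOR_BGR2GRAY)
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(frame_prev, None)
        k2, d2 = orb.detectAndCompute(frame_next, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]
        if len(matches) < 4:
            return None
        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H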
Spatial calibration of a 2D/3D ultrasound using a tracked needle
PURPOSE: Spatial calibration between a 2D/3D ultrasound and a pose tracking system requires a complex and time-consuming procedure. Simplifying this procedure without compromising the calibration accuracy is still a challenging problem.
METHOD: We propose a new calibration method for both 2D and 3D ultrasound probes that involves scanning an arbitrary region of a tracked needle in different poses. This approach is easier to perform than most alternative methods, which require a precise alignment between US scans and a calibration phantom.
RESULTS: Our calibration method provides an average accuracy of 2.49 mm for a 2D US probe with 107 mm scanning depth, and an average accuracy of 2.39 mm for a 3D US probe with 107 mm scanning depth.
CONCLUSION: Our method provides a unified calibration framework for 2D and 3D probes using the same phantom object, workflow, and algorithm. It significantly improves the accuracy of needle-based methods for 2D US probes and extends their use to 3D US probes.
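For illustration only, the sketch below shows the classical least-squares rigid
registration (Arun/Procrustes) that point-based calibrations ultimately reduce
to, under the assumption that corresponding needle points are available both in
the scaled US image frame and in the tracker frame; the method above is
designed precisely to avoid needing such explicit correspondences.

    import numpy as np

    def rigid_calibration(points_image_mm, points_tracker):
        """Return R (3x3) and t (3,) such that R @ p_image + t ~= p_tracker."""
        p = np.asarray(points_image_mm, dtype=float)
        q = np.asarray(points_tracker, dtype=float)
        p_c, q_c = p - p.mean(axis=0), q - q.mean(axis=0)
        U, _, Vt = np.linalg.svd(p_c.T @ q_c)            # cross-covariance SVD
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                               # reflection-corrected rotation
        t = q.mean(axis=0) - R @ p.mean(axis=0)
        return R, t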