Foreseeing the future of mutualistic communities beyond collapse
Modelling the Distribution of 3D Brain MRI using a 2D Slice VAE
Probabilistic modelling has been an essential tool in medical image analysis,
especially for analyzing brain Magnetic Resonance Images (MRI). Recent deep
learning techniques for estimating high-dimensional distributions, in
particular Variational Autoencoders (VAEs), opened up new avenues for
probabilistic modelling. Modelling of volumetric data has remained a challenge,
however, because constraints on available computation and training data make it
difficult to effectively leverage VAEs, which are well-developed for 2D images. We
propose a method to model the distribution of 3D MR brain volumes by combining a 2D
slice VAE with a Gaussian model that captures the relationships between slices.
We do so by estimating the sample mean and covariance in the latent space of
the 2D model over the slice direction. This combined model lets us sample new
coherent stacks of latent variables to decode into slices of a volume. We also
introduce a novel evaluation method for generated volumes that quantifies how
well their segmentations match those of true brain anatomy. We demonstrate that
our proposed model is competitive in generating high quality volumes at high
resolutions according to both traditional metrics and our proposed evaluation.
Comment: accepted for publication at MICCAI 2020. Code available at
https://github.com/voanna/slices-to-3d-brain-vae
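To make the slice-wise sampling idea concrete, here is a minimal sketch of the Gaussian-over-latents step: fit the mean and covariance of stacked per-slice latent codes, draw a coherent latent stack, and decode it slice by slice. The `vae.encode`/`vae.decode` methods and all shapes are illustrative assumptions, not the authors' API.

```python
# Minimal sketch of the slice-latent Gaussian sampling idea (not the authors' code).
# Assumes a trained 2D VAE object exposing hypothetical `encode` and `decode` methods.
import numpy as np

def fit_latent_gaussian(vae, volumes):
    """Fit mean/covariance of stacked per-slice latents across training volumes.

    volumes: array of shape (n_volumes, n_slices, H, W).
    Returns (mu, cov) over the flattened (n_slices * latent_dim) latent stack.
    """
    stacks = []
    for vol in volumes:
        z = np.stack([vae.encode(s) for s in vol])   # (n_slices, latent_dim)
        stacks.append(z.ravel())                     # flatten along slice direction
    stacks = np.stack(stacks)                        # (n_volumes, n_slices*latent_dim)
    mu = stacks.mean(axis=0)
    cov = np.cov(stacks, rowvar=False)               # captures inter-slice structure
    return mu, cov

def sample_volume(vae, mu, cov, n_slices, latent_dim, rng=np.random.default_rng()):
    """Draw one coherent latent stack and decode it into a volume of slices."""
    z = rng.multivariate_normal(mu, cov).reshape(n_slices, latent_dim)
    return np.stack([vae.decode(zi) for zi in z])    # (n_slices, H, W)
```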
How models can support ecosystem-based management of coral reefs
Despite the importance of coral reef ecosystems to the social and economic welfare of coastal communities, the condition of these marine ecosystems has generally degraded over recent decades. With increased knowledge of coral reef ecosystem processes and a rise in computing power, dynamic models have become useful tools for assessing the synergistic effects of local and global stressors on ecosystem functions. We review representative approaches for dynamically modeling coral reef ecosystems and categorize them as minimal, intermediate, and complex models. The categorization is based on the leading principle behind model development and the level of realism and process detail. This review aims to improve knowledge of current approaches in coral reef ecosystem modeling and highlights the importance of choosing an appropriate approach based on the type of question(s) to be answered. We contend that minimal and intermediate models are generally valuable tools for assessing the response of key states to main stressors and, hence, contribute to understanding ecological surprises. As has been shown in freshwater resources management, insight into these conceptual relations profoundly influences how natural resource managers perceive their systems and how they manage ecosystem recovery. We argue that adaptive resource management requires integrated thinking and decision support, which demands a diversity of modeling approaches. Integration can be achieved through complementary use of models or through integrated models that systemically combine all relevant aspects in one model. Such whole-of-system models can be useful tools for quantitatively evaluating scenarios, allowing an assessment of the interactive effects of multiple stressors on various, potentially conflicting, management objectives. All models simplify reality and, as such, have their weaknesses. While minimal models lack multidimensionality, whole-of-system models can be difficult to interpret because deciphering their numerous interactions and feedback loops requires considerable effort. Given the breadth of questions to be tackled when dealing with coral reefs, the best-practice approach uses multiple model types, benefiting from their respective strengths.
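As a concrete example of the "minimal" category, the sketch below integrates a classic three-state coral/macroalgae/turf model in the spirit of Mumby et al. (2007); the parameter values are illustrative placeholders, not calibrated to any particular reef.

```python
# Illustrative minimal coral reef model in the spirit of Mumby et al. (2007);
# parameter values are placeholders, not calibrated to any real reef.
import numpy as np
from scipy.integrate import solve_ivp

def reef_rhs(t, y, a=0.1, g=0.3, gamma=0.8, r=1.0, d=0.44):
    """Macroalgal cover M and coral cover C on a grazed reef; T = 1 - M - C is turf.

    a: macroalgae overgrowing coral, g: grazing pressure,
    gamma: macroalgae colonising turf, r: coral recruitment onto turf, d: coral mortality.
    """
    M, C = y
    T = max(1.0 - M - C, 0.0)
    dM = a * M * C - g * M / (M + T + 1e-9) + gamma * M * T
    dC = r * T * C - d * C - a * M * C
    return [dM, dC]

# Integrate from a mixed initial state and inspect the long-run cover.
sol = solve_ivp(reef_rhs, (0, 50), [0.1, 0.5])
print("final macroalgae/coral cover:", sol.y[:, -1])
```

Even this two-equation model exhibits the alternative stable states (coral- vs. macroalgae-dominated) that make minimal models useful for anticipating ecological surprises.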
A Longitudinal Method for Simultaneous Whole-Brain and Lesion Segmentation in Multiple Sclerosis
In this paper we propose a novel method for the segmentation of longitudinal
brain MRI scans of patients suffering from Multiple Sclerosis. The method
builds upon an existing cross-sectional method for simultaneous whole-brain and
lesion segmentation, introducing subject-specific latent variables to encourage
temporal consistency between longitudinal scans. The method is broadly
applicable, as it makes no prior assumptions about the scanner, the MRI
protocol, or the number and timing of longitudinal follow-up scans. Preliminary
experiments on three longitudinal datasets indicate that the proposed method
produces more reliable segmentations and detects disease effects better than
the cross-sectional method it is based upon.
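Purely to illustrate the idea of a subject-specific latent variable shared across time points, here is a heavily simplified sketch assuming a 1D Gaussian mixture intensity model; the shared offset `delta`, the EM scheme, and all names are assumptions for illustration and do not reproduce the authors' model.

```python
# Minimal sketch (not the authors' model): a 1D Gaussian mixture segmenter in which
# a subject-specific mean offset delta is shared across all time points, encouraging
# temporally consistent segmentations. All names and values are illustrative.
import numpy as np

def segment_longitudinal(scans, mu, sigma, n_iter=20):
    """scans: list of 1D intensity arrays, one per time point.
    mu, sigma: global class means/stddevs, shape (K,).
    Returns per-scan responsibilities and the shared offset delta."""
    delta = np.zeros(len(mu))                   # subject-specific latent offset
    for _ in range(n_iter):
        resps = []
        for x in scans:                         # E-step: soft class assignments per scan
            logp = -0.5 * ((x[:, None] - (mu + delta)) / sigma) ** 2 - np.log(sigma)
            p = np.exp(logp - logp.max(axis=1, keepdims=True))
            resps.append(p / p.sum(axis=1, keepdims=True))
        # M-step: update the one offset shared across *all* time points
        num = sum((r * (x[:, None] - mu)).sum(axis=0) for r, x in zip(resps, scans))
        den = sum(r.sum(axis=0) for r in resps) + 1e-9
        delta = num / den
    return resps, delta
```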
A Modality-Adaptive Method for Segmenting Brain Tumors and Organs-at-Risk in Radiation Therapy Planning
In this paper we present a method for simultaneously segmenting brain tumors
and an extensive set of organs-at-risk for radiation therapy planning of
glioblastomas. The method combines a contrast-adaptive generative model for
whole-brain segmentation with a new spatial regularization model of tumor shape
using convolutional restricted Boltzmann machines. We demonstrate
experimentally that the method is able to adapt to image acquisitions that
differ substantially from any available training data, ensuring its
applicability across treatment sites; that its tumor segmentation accuracy is
comparable to that of the current state of the art; and that it captures most
organs-at-risk sufficiently well for radiation therapy planning purposes. The
proposed method may be a valuable step towards automating the delineation of
brain tumors and organs-at-risk in glioblastoma patients undergoing radiation
therapy.
Comment: corrected one reference.
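The sketch below shows one block-Gibbs step in a convolutional restricted Boltzmann machine over a binary shape map, the kind of spatial regularization model the method builds on. The filter bank `W` and biases are assumed given; everything here is illustrative, not the paper's trained model.

```python
# Sketch of one block-Gibbs step in a convolutional RBM used as a binary shape
# prior (illustrative; the filters W and biases are assumed to be pre-trained).
import numpy as np
from scipy.signal import convolve2d

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b_h, b_v, rng=np.random.default_rng()):
    """v: binary visible map (H, W); W: filters (F, k, k); b_h: (F,); b_v: scalar."""
    # Hidden units: one feature map per filter, driven by convolving the visibles.
    h_prob = np.stack([sigmoid(convolve2d(v, w, mode='valid') + bh)
                       for w, bh in zip(W, b_h)])
    h = (rng.random(h_prob.shape) < h_prob).astype(float)
    # Visibles: top-down input is the transpose operation, a full convolution
    # of each hidden map with the flipped filter, summed over filters.
    top_down = sum(convolve2d(hm, w[::-1, ::-1], mode='full')
                   for hm, w in zip(h, W))
    v_prob = sigmoid(top_down + b_v)
    return (rng.random(v_prob.shape) < v_prob).astype(float), v_prob
```

Alternating such steps draws tumor-shaped binary masks from the prior, which is what lets the model regularize tumor shape independently of image contrast.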
Partial Volume Segmentation of Brain MRI Scans of any Resolution and Contrast
Partial voluming (PV) is arguably the last crucial unsolved problem in
Bayesian segmentation of brain MRI with probabilistic atlases. PV occurs when
voxels contain multiple tissue classes, giving rise to image intensities that
may not be representative of any one of the underlying classes. PV is
particularly problematic for segmentation when there is a large resolution gap
between the atlas and the test scan, e.g., when segmenting clinical scans with
thick slices, or when using a high-resolution atlas. In this work, we present
PV-SynthSeg, a convolutional neural network (CNN) that tackles this problem by
directly learning a mapping between (possibly multi-modal) low resolution (LR)
scans and underlying high resolution (HR) segmentations. PV-SynthSeg simulates
LR images from HR label maps with a generative model of PV, and can be trained
to segment scans of any desired target contrast and resolution, even for
previously unseen modalities where neither images nor segmentations are
available at training. PV-SynthSeg does not require any preprocessing, and runs
in seconds. We demonstrate the accuracy and flexibility of the method with
extensive experiments on three datasets and 2,680 scans. The code is available
at https://github.com/BBillot/SynthSeg.
Comment: accepted for MICCAI 2020.
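A rough sketch of the simulation idea: sample per-label Gaussian intensities from an HR label map, blur with a point-spread function reflecting the resolution gap, and downsample so boundary voxels mix tissue classes. All names and constants are illustrative assumptions, not PV-SynthSeg's actual generator.

```python
# Illustrative sketch of partial-volume simulation from an HR label map, in the
# spirit of PV-SynthSeg's generative model (parameters and constants are placeholders).
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def simulate_lr_scan(labels, means, stds, factor=(1, 1, 5), rng=np.random.default_rng()):
    """labels: int HR label map (X, Y, Z); means/stds: per-label intensity parameters.
    factor: downsampling factor per axis (e.g. thick slices along Z)."""
    hr = rng.normal(means[labels], stds[labels])            # HR synthetic intensities
    # Blur to model the LR acquisition PSF; width grows with the resolution gap
    # (the 0.42 scaling is a heuristic for this sketch, not the paper's value).
    sigma = [0.42 * (f - 1) for f in factor]
    blurred = gaussian_filter(hr, sigma)
    lr = zoom(blurred, [1.0 / f for f in factor], order=1)  # downsample -> PV voxels
    return lr
```

Training pairs such LR images with the original HR label maps, so the CNN learns to recover HR segmentations directly from PV-corrupted inputs.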
Validating module network learning algorithms using simulated data
In recent years, several authors have used probabilistic graphical models to
learn expression modules and their regulatory programs from gene expression
data. Here, we demonstrate the use of the synthetic data generator SynTReN for
the purpose of testing and comparing module network learning algorithms. We
introduce a software package for learning module networks, called LeMoNe, which
incorporates a novel strategy for learning regulatory programs. Novelties
include the use of a bottom-up Bayesian hierarchical clustering to construct
the regulatory programs, and the use of a conditional entropy measure to assign
regulators to the regulation program nodes. Using SynTReN data, we test the
performance of LeMoNe in a completely controlled situation and assess the
effect of the methodological changes we made with respect to an existing
software package, namely Genomica. Additionally, we assess the effect of
various parameters, such as the size of the data set and the amount of noise,
on the inference performance. Overall, application of Genomica and LeMoNe to
simulated data sets gave comparable results. However, LeMoNe offers some
advantages, one of them being that the learning process is considerably faster
for larger data sets. Additionally, we show that the location of the regulators
in the LeMoNe regulation programs and their conditional entropy may be used to
prioritize regulators for functional validation, and that the combination of
the bottom-up clustering strategy with the conditional entropy-based assignment
of regulators improves the handling of missing or hidden regulators.
Comment: 13 pages, 6 figures + 2 pages, 2 figures supplementary information.
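To illustrate the conditional-entropy criterion, the sketch below scores candidate regulators by the entropy of a node's condition split given the regulator's discretized expression, assigning the regulator that makes the split most predictable. The thresholding scheme and all names are assumptions for illustration.

```python
# Sketch of a conditional-entropy criterion for assigning regulators to a
# regulation program node (illustrative; thresholds and names are assumptions).
import numpy as np

def conditional_entropy(split, reg_expr, threshold):
    """H(split | regulator state): split is a 0/1 numpy vector over conditions
    (which side of the node each condition falls on); reg_expr is the
    candidate regulator's expression over the same conditions."""
    reg_state = (reg_expr > threshold).astype(int)
    H = 0.0
    for s in (0, 1):                          # regulator below/above threshold
        mask = reg_state == s
        if not mask.any():
            continue
        p_s = mask.mean()
        p1 = split[mask].mean()               # P(split = 1 | reg_state = s)
        for p in (p1, 1.0 - p1):
            if p > 0:
                H -= p_s * p * np.log2(p)
    return H

def best_regulator(split, candidates, thresholds):
    """Pick the candidate regulator minimising H(split | regulator)."""
    scores = [conditional_entropy(split, e, t) for e, t in zip(candidates, thresholds)]
    return int(np.argmin(scores))
```

A low conditional entropy means the regulator's on/off state nearly determines the split, which is also why the score can double as a prioritization signal for functional validation.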
Nonlinear Markov Random Fields Learned via Backpropagation
Although convolutional neural networks (CNNs) currently dominate competitions
on image segmentation, for neuroimaging analysis tasks, more classical
generative approaches based on mixture models are still used in practice to
parcellate brains. To bridge the gap between the two, in this paper we propose
a marriage between a probabilistic generative model, which has been shown to be
robust to variability among magnetic resonance (MR) images acquired via
different imaging protocols, and a CNN. The link is in the prior distribution
over the unknown tissue classes, which are classically modelled using a Markov
random field. In this work we model the interactions among neighbouring pixels
by a type of recurrent CNN, which can encode more complex spatial interactions.
We validate our proposed model on publicly available MR data, from different
centres, and show that it generalises across imaging protocols. This result
demonstrates a successful and principled inclusion of a CNN in a generative
model, which in turn could be adapted by any probabilistic generative approach
for image segmentation.
Comment: Accepted for the International Conference on Information Processing
in Medical Imaging (IPMI) 2019, camera-ready version.
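A minimal sketch of the general idea: a small recurrent convolution iteratively refines class probabilities from their spatial neighbourhood, echoing mean-field updates in an MRF. This is illustrative PyTorch, not the paper's architecture.

```python
# Minimal sketch of an MRF-like prior realised as a recurrent CNN: class
# probabilities are iteratively refined from their neighbourhood, echoing
# mean-field updates (illustrative, not the paper's architecture).
import torch
import torch.nn as nn

class RecurrentMRFPrior(nn.Module):
    def __init__(self, n_classes, n_steps=5):
        super().__init__()
        self.n_steps = n_steps
        # A shared 3x3 convolution plays the role of the pairwise potentials.
        self.pairwise = nn.Conv2d(n_classes, n_classes, kernel_size=3, padding=1)

    def forward(self, unary_logits):
        """unary_logits: (B, K, H, W) likelihood terms; returns refined probabilities."""
        q = torch.softmax(unary_logits, dim=1)
        for _ in range(self.n_steps):        # recurrent refinement steps
            message = self.pairwise(q)       # aggregate neighbouring beliefs
            q = torch.softmax(unary_logits + message, dim=1)
        return q
```

Because the pairwise term is a learned convolution applied recurrently, it can encode richer spatial interactions than the fixed potentials of a classical Markov random field while remaining trainable by backpropagation.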