Uncertainty in multitask learning: joint representations for probabilistic MR-only radiotherapy planning
Multi-task neural network architectures provide a mechanism to jointly
integrate information from distinct sources. This is ideal in the context of
MR-only radiotherapy planning, where a single network can jointly regress a
synthetic CT (synCT) scan and segment organs-at-risk (OAR) from MRI. We propose a probabilistic
multi-task network that estimates: 1) intrinsic uncertainty through a
heteroscedastic noise model for spatially-adaptive task loss weighting and 2)
parameter uncertainty through approximate Bayesian inference. This allows
sampling of multiple segmentations and synCTs that share their network
representation. We test our model on prostate cancer scans and show that it
produces more accurate and consistent synCTs with better estimation of the
error variance, state-of-the-art results in OAR segmentation, and a
methodology for quality assurance in radiotherapy treatment planning.
Comment: Early-accept at MICCAI 2018, 8 pages, 4 figures
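The spatially-adaptive task-loss weighting described above can be sketched as a per-voxel heteroscedastic regression loss, in which the network predicts a log-variance map alongside the synCT. This is a minimal illustration in the spirit of the abstract, not the authors' implementation; the function name and the 0.5 scaling are assumptions.

```python
import numpy as np

def heteroscedastic_regression_loss(pred, target, log_var):
    """Sketch of spatially-adaptive loss weighting: the network predicts a
    per-voxel log-variance `log_var` alongside the synCT `pred`. Noisy voxels
    get a small precision exp(-log_var) and are down-weighted, while the
    +0.5*log_var term penalises predicting unbounded noise everywhere."""
    precision = np.exp(-log_var)
    return np.mean(0.5 * precision * (pred - target) ** 2 + 0.5 * log_var)
```

With `log_var = 0` everywhere, this reduces to half the ordinary mean squared error, so the learned variance map only changes the weighting where the model reports high intrinsic uncertainty.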
Differential rates of perinatal maturation of human primary and nonprimary auditory cortex
Primary and nonprimary cerebral cortex mature along different timescales; however, the differences between the rates of maturation of primary and nonprimary cortex are unclear. Cortical maturation can be measured through changes in tissue microstructure detectable by diffusion magnetic resonance imaging (MRI). In this study, diffusion tensor imaging (DTI) was used to characterize the maturation of Heschl’s gyrus (HG), which contains both primary auditory cortex (pAC) and nonprimary auditory cortex (nAC), in 90 preterm infants between 26 and 42 weeks postmenstrual age (PMA). The preterm infants were in different acoustical environments during their hospitalization: 46 in open ward beds and 44 in single rooms. A control group consisted of 15 term-born infants. Diffusion parameters revealed that (1) changes in cortical microstructure that accompany cortical maturation had largely already occurred in pAC by 28 weeks PMA, and (2) rapid changes were taking place in nAC between 26 and 42 weeks PMA. At term equivalent PMA, diffusion parameters for auditory cortex were different between preterm infants and term control infants, reflecting either delayed maturation or injury. No effect of room type was observed. For the preterm group, disturbed maturation of nonprimary (but not primary) auditory cortex was associated with poorer language performance at age two years.
Automatic Brain Tumor Segmentation using Convolutional Neural Networks with Test-Time Augmentation
Automatic brain tumor segmentation plays an important role for diagnosis,
surgical planning and treatment assessment of brain tumors. Deep convolutional
neural networks (CNNs) have been widely used for this task. Due to the
relatively small data set for training, data augmentation at training time has
been commonly used for better performance of CNNs. Recent works also
demonstrated the usefulness of using augmentation at test time, in addition to
training time, for achieving more robust predictions. We investigate how
test-time augmentation can improve CNNs' performance for brain tumor
segmentation. We used different underpinning network structures and augmented
the image by 3D rotation, flipping, scaling and adding random noise at both
training and test time. Experiments with BraTS 2018 training and validation set
show that test-time augmentation helps to improve brain tumor segmentation
accuracy and to obtain uncertainty estimates of the segmentation results.
Comment: 12 pages, 3 figures, MICCAI BrainLes 2018
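The test-time augmentation scheme above can be sketched as follows: apply random flips and noise to the input, predict, invert the spatial transform, and aggregate. This is a minimal sketch, not the paper's code; the augmentation set (flips plus Gaussian noise), the noise scale, and the model interface are assumptions.

```python
import numpy as np

def predict_with_tta(model, volume, n_aug=8, seed=0):
    """Test-time augmentation sketch: average predictions over randomly
    flipped and noise-perturbed copies of `volume`, and use the per-voxel
    variance as an uncertainty estimate. `model` is any callable mapping
    a volume to a same-shaped prediction map."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_aug):
        flip_axes = [ax for ax in range(volume.ndim) if rng.random() < 0.5]
        aug = np.flip(volume, axis=flip_axes) if flip_axes else volume
        aug = aug + rng.normal(0.0, 0.01, size=aug.shape)  # additive noise
        pred = model(aug)
        # invert the flip so the prediction aligns with the original volume
        pred = np.flip(pred, axis=flip_axes) if flip_axes else pred
        preds.append(pred)
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.var(axis=0)
```

The key design point is that every spatial augmentation must be inverted on the prediction before aggregation; intensity perturbations such as noise need no inversion.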
NiftyNet: a deep-learning platform for medical imaging
Medical image analysis and computer-assisted intervention problems are
increasingly being addressed with deep-learning-based solutions. Established
deep-learning platforms are flexible but do not provide specific functionality
for medical image analysis and adapting them for this application requires
substantial implementation effort. Thus, there has been substantial duplication
of effort and incompatible infrastructure developed across many research
groups. This work presents the open-source NiftyNet platform for deep learning
in medical imaging. The ambition of NiftyNet is to accelerate and simplify the
development of these solutions, and to provide a common mechanism for
disseminating research outputs for the community to use, adapt and build upon.
NiftyNet provides a modular deep-learning pipeline for a range of medical
imaging applications including segmentation, regression, image generation and
representation learning applications. Components of the NiftyNet pipeline
including data loading, data augmentation, network architectures, loss
functions and evaluation metrics are tailored to, and take advantage of, the
idiosyncrasies of medical image analysis and computer-assisted intervention.
NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D
and 3D images and computational graphs by default.
We present 3 illustrative medical image analysis applications built using
NiftyNet: (1) segmentation of multiple abdominal organs from computed
tomography; (2) image regression to predict computed tomography attenuation
maps from brain magnetic resonance images; and (3) generation of simulated
ultrasound images for specified anatomical poses.
NiftyNet enables researchers to rapidly develop and distribute deep learning
solutions for segmentation, regression, image generation and representation
learning applications, or extend the platform to new applications.
Comment: Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge
Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6
figures; update includes additional applications, updated author list and
formatting for journal submission
GraphCast: Learning skillful medium-range global weather forecasting
We introduce a machine-learning (ML)-based weather simulator--called
"GraphCast"--which outperforms the most accurate deterministic operational
medium-range weather forecasting system in the world, as well as all previous
ML baselines. GraphCast is an autoregressive model, based on graph neural
networks and a novel high-resolution multi-scale mesh representation, which we
trained on historical weather data from the European Centre for Medium-Range
Weather Forecasts (ECMWF)'s ERA5 reanalysis archive. It can make 10-day
forecasts, at 6-hour time intervals, of five surface variables and six
atmospheric variables, each at 37 vertical pressure levels, on a 0.25-degree
latitude-longitude grid, which corresponds to roughly 25 x 25 kilometer
resolution at the equator. Our results show GraphCast is more accurate than
ECMWF's deterministic operational forecasting system, HRES, on 90.0% of the
2760 variable and lead time combinations we evaluated. GraphCast also
outperforms the most accurate previous ML-based weather forecasting model on
99.2% of the 252 targets it reported. GraphCast can generate a 10-day forecast
(35 gigabytes of data) in under 60 seconds on Cloud TPU v4 hardware. Unlike
traditional forecasting methods, ML-based forecasting scales well with data: by
training on bigger, higher quality, and more recent data, the skill of the
forecasts can improve. Together these results represent a key step forward in
complementing and improving weather modeling with ML, open new opportunities
for fast, accurate forecasting, and help realize the promise of ML-based
simulation in the physical sciences.
Comment: Main text: 21 pages, 8 figures, 1 table. Appendix: 15 pages, 5
figures, 2 tables
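The autoregressive forecasting described above can be sketched generically: the learned one-step simulator is applied repeatedly to its own output, so 40 six-hour steps yield the 10-day forecast mentioned in the abstract. This is an illustrative skeleton only; the real model's graph-neural-network step function, state layout, and conditioning are not reproduced here.

```python
import numpy as np

def autoregressive_rollout(step_fn, state, n_steps=40):
    """Autoregressive rollout sketch: feed the model's own prediction back
    in as the next input. `step_fn` stands in for the learned 6-hour
    simulator; 40 steps of 6 hours = a 10-day forecast."""
    trajectory = []
    for _ in range(n_steps):
        state = step_fn(state)          # one 6-hour prediction step
        trajectory.append(state)
    return np.stack(trajectory)         # (n_steps, *state.shape)
```

Because every step consumes the previous step's output, one-step errors compound over the rollout, which is why one-step accuracy alone does not determine 10-day skill.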
Beyond the resolution limit: Diffusion parameter estimation in partial volume
Diffusion MRI is a frequently used imaging modality that can infer microstructural properties of tissue, down to the scale of microns. For single-compartment models, such as the diffusion tensor (DT), the model interpretation depends on voxels having homogeneous composition. This limitation makes it difficult to measure diffusion parameters for small structures such as the fornix in the brain, because of partial volume. In this work, we use a segmentation from a structural scan to calculate the tissue composition for each diffusion voxel. We model the measured diffusion signal as a linear combination of signals from each of the tissues present in the voxel, and fit parameters on a per-region basis by optimising over all diffusion data simultaneously. We test the proposed method using diffusion data from the Human Connectome Project (HCP). We downsample the HCP data, and show that our method returns parameter estimates that are closer to the higher-resolution ground truths than those from classical methods. We show that our method allows accurate estimation of diffusion parameters for regions with partial volume. Finally, we apply the method to compare diffusion in the fornix for adults born extremely preterm and matched controls.
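The linear mixture model above can be sketched in its simplest form: each voxel's signal is a fraction-weighted sum of per-tissue signals, and fitting over all voxels at once is a least-squares problem. This is a minimal sketch under strong simplifications (a single scalar signal per tissue rather than a full tensor model per region); the function name and toy dimensions are assumptions.

```python
import numpy as np

def fit_tissue_signals(fractions, measured):
    """Partial-volume sketch: model each voxel's signal as
    S_v = sum_t f_vt * s_t, where f_vt are tissue fractions from a
    structural segmentation (n_voxels x n_tissues) and s_t are unknown
    per-tissue signals. Solving over all voxels simultaneously recovers
    s_t even though no single voxel is purely one tissue."""
    signals, *_ = np.linalg.lstsq(fractions, measured, rcond=None)
    return signals
```

The point of fitting across voxels jointly is that mixed voxels still constrain the per-tissue signals, so small, partial-volumed structures such as the fornix contribute usable information instead of being discarded.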