Diffeomorphic Demons using Normalised Mutual Information, Evaluation on Multi-Modal Brain MR Images
The demons algorithm is a fast non-parametric non-rigid registration method. In recent years great efforts have been made to improve the approach; the state-of-the-art version yields symmetric inverse-consistent large-deformation diffeomorphisms. However, only limited work has explored inter-modal similarity metrics, with no practical evaluation on multi-modality data. We present a diffeomorphic demons implementation using the analytical gradient of Normalised Mutual Information (NMI) in a conjugate-gradient optimiser. We report the first qualitative and quantitative assessment of the demons for inter-modal registration. Experiments to spatially normalise real MR images, and to recover simulated deformation fields, demonstrate (i) similar accuracy from NMI-demons and classical demons when the latter may be used, and (ii) similar accuracy for NMI-demons on T1w-T1w and T1w-T2w registration, demonstrating its potential in multi-modal scenarios.
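To make the similarity term concrete, the following minimal Python sketch computes Studholme's normalised mutual information, NMI(A, B) = (H(A) + H(B)) / H(A, B), from a joint histogram; the bin count and plug-in entropy estimator are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def nmi(fixed: np.ndarray, moving: np.ndarray, bins: int = 64) -> float:
        """Studholme's NMI: (H(A) + H(B)) / H(A, B); higher is better."""
        joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
        pxy = joint / joint.sum()                    # joint intensity probabilities
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)    # marginal probabilities
        h_joint = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
        h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
        h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
        return (h_x + h_y) / h_joint

A registration driven by this measure maximises nmi(fixed, warped_moving) over the deformation, which is what makes it applicable across modalities.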
Training recurrent neural networks robust to incomplete data: application to Alzheimer's disease progression modeling
Disease progression modeling (DPM) using longitudinal data is a challenging
machine learning task. Existing DPM algorithms neglect temporal dependencies
among measurements, make parametric assumptions about biomarker trajectories,
do not model multiple biomarkers jointly, and need an alignment of subjects'
trajectories. In this paper, recurrent neural networks (RNNs) are utilized to
address these issues. However, in many cases, longitudinal cohorts contain
incomplete data, which hinders the application of standard RNNs and requires a
pre-processing step such as imputation of the missing values. Instead, we
propose a generalized training rule for the most widely used RNN architecture,
long short-term memory (LSTM) networks, that can handle both missing predictor
and target values. The proposed LSTM algorithm is applied to model the
progression of Alzheimer's disease (AD) using six volumetric magnetic resonance
imaging (MRI) biomarkers, i.e., volumes of ventricles, hippocampus, whole
brain, fusiform, middle temporal gyrus, and entorhinal cortex, and it is
compared to standard LSTM networks with data imputation and a parametric,
regression-based DPM method. The results show that the proposed algorithm
achieves a significantly lower mean absolute error (MAE) than the alternatives
(p < 0.05, Wilcoxon signed-rank test) in predicting values of almost all of
the MRI biomarkers. Moreover, a linear discriminant analysis (LDA) classifier
applied to the predicted biomarker values produces a significantly larger AUC
of 0.90, vs. at most 0.84 (p < 0.001, McNemar's test), for clinical diagnosis
of AD. Inspection of MAE curves as a function of the amount of missing data
reveals that the proposed LSTM algorithm retains the best performance until
more than 74% of the values are missing. Finally, it is illustrated how the
method can successfully be applied to data with varying time intervals.
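The paper's generalised LSTM training rule is not reproduced here; a common stand-in for missing target values is to mask them out of the loss, as in the hedged PyTorch sketch below. The network sizes and the masked_mse helper are assumptions for illustration.

    import torch
    import torch.nn as nn

    class BiomarkerLSTM(nn.Module):
        """Toy LSTM predicting the next visit's six MRI biomarker volumes."""
        def __init__(self, n_biomarkers: int = 6, hidden: int = 32):
            super().__init__()
            self.lstm = nn.LSTM(n_biomarkers, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_biomarkers)

        def forward(self, x):                   # x: (batch, time, biomarkers)
            out, _ = self.lstm(x)
            return self.head(out)

    def masked_mse(pred, target, observed):
        """MSE over observed entries only; observed is a float 0/1 mask."""
        err = (pred - target) ** 2 * observed
        return err.sum() / observed.sum().clamp(min=1)

Missing predictor values would additionally need filling (for example, zero-filling with a companion mask channel); missing targets simply drop out of the gradient via the mask.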
Simultaneous synthesis of FLAIR and segmentation of white matter hypointensities from T1 MRIs
Segmenting vascular pathologies such as white matter lesions in brain
magnetic resonance images (MRIs) requires the acquisition of multiple
sequences, such as T1-weighted (T1-w), on which lesions appear hypointense,
and fluid attenuated inversion recovery (FLAIR), on which lesions appear
hyperintense. However, most existing retrospective datasets do not include
FLAIR sequences. Existing missing-modality imputation methods separate the
imputation process from the segmentation process. In this paper, we propose a
method that links modality imputation and segmentation using convolutional
neural networks. We show that by jointly optimizing the imputation network
and the segmentation network, the method not only produces more realistic
synthetic FLAIR images from T1-w images, but also improves the segmentation
of WMH from T1-w images only.
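As a rough illustration of the joint optimisation, the sketch below couples a placeholder synthesis CNN and segmentation CNN through one combined loss, so segmentation gradients flow back into the synthesis network. The architectures, loss terms, and weighting are assumptions, not the paper's models.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    synth = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))    # T1-w -> FLAIR
    seg = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 1, 3, padding=1))      # FLAIR -> WMH
    opt = torch.optim.Adam(list(synth.parameters()) + list(seg.parameters()))

    def joint_step(t1, flair, wmh):
        """One training step over both networks; wmh is a float {0,1} mask."""
        fake_flair = synth(t1)
        logits = seg(fake_flair)           # segment the *synthetic* FLAIR
        loss = (F.l1_loss(fake_flair, flair)
                + F.binary_cross_entropy_with_logits(logits, wmh))
        opt.zero_grad()
        loss.backward()                    # segmentation loss shapes `synth`
        opt.step()
        return loss.item()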
Robust training of recurrent neural networks to handle missing data for disease progression modeling
Disease progression modeling (DPM) using longitudinal data is a challenging
task in machine learning for healthcare that can provide clinicians with better
tools for diagnosis and monitoring of disease. Existing DPM algorithms neglect
temporal dependencies among measurements and make parametric assumptions about
biomarker trajectories. In addition, they do not model multiple biomarkers
jointly and need to align subjects' trajectories. In this paper, recurrent
neural networks (RNNs) are utilized to address these issues. However, in many
cases, longitudinal cohorts contain incomplete data, which hinders the
application of standard RNNs and requires a pre-processing step such as
imputation of the missing values. We, therefore, propose a generalized training
rule for the most widely used RNN architecture, long short-term memory (LSTM)
networks, that can handle missing values in both target and predictor
variables. This algorithm is applied for modeling the progression of
Alzheimer's disease (AD) using magnetic resonance imaging (MRI) biomarkers. The
results show that the proposed LSTM algorithm achieves a lower mean absolute
error for prediction of measurements across all considered MRI biomarkers
compared to using standard LSTM networks with data imputation or using a
regression-based DPM method. Moreover, applying linear discriminant analysis to
the biomarkers' values predicted by the proposed algorithm results in a larger
area under the receiver operating characteristic curve (AUC) for clinical
diagnosis of AD compared to the same alternatives, and the AUC is comparable to
state-of-the-art AUCs from a recent cross-sectional medical image
classification challenge. This paper shows that built-in handling of missing
values in LSTM network training paves the way for application of RNNs in
disease progression modeling.
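The downstream evaluation described above can be sketched with scikit-learn: fit an LDA classifier on the predicted biomarker values and report the AUC for diagnosis. The arrays here are simulated stand-ins for the real predictions and labels.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X_pred = rng.normal(size=(200, 6))    # stand-in predicted biomarker values
    y = rng.integers(0, 2, size=200)      # stand-in diagnosis labels

    lda = LinearDiscriminantAnalysis().fit(X_pred[:150], y[:150])
    scores = lda.predict_proba(X_pred[150:])[:, 1]
    print("AUC:", roc_auc_score(y[150:], scores))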
A Multi-Path Approach to Histology Volume Reconstruction
This paper presents a method for correcting erratic pairwise registrations when reconstructing a volume from 2D histology slices. Due to complex and unpredictable alterations of the content of histology images, a pairwise rigid registration between two adjacent slices may fail systematically, while a neighbouring registration, which potentially involves one of these two slices, will work. This grounds our approach: using correct spatial correspondences established through neighbouring registrations to compensate for direct failures. We propose to search for the best alignment of every pair of adjacent slices from a finite set of transformations that involve neighbouring slices in a transitive fashion. Using the proposed method, we obtained reconstructed volumes with increased coherence compared to the classical pairwise approach, in both synthetic and real data.
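A minimal Python sketch of the multi-path idea follows; register (a pairwise rigid registration returning a 3x3 homogeneous matrix) and similarity (an image-similarity score under a transform) are hypothetical helpers supplied by the caller.

    def best_alignment(slices, i, register, similarity, neighbourhood=2):
        """Pick the best transform mapping slice i onto slice i + 1."""
        candidates = [register(slices[i], slices[i + 1])]    # direct pairwise
        lo = max(0, i - neighbourhood)
        hi = min(len(slices), i + 2 + neighbourhood)
        for k in range(lo, hi):
            if k in (i, i + 1):
                continue
            # transitive path: slice i -> slice k, then slice k -> slice i + 1
            candidates.append(register(slices[k], slices[i + 1])
                              @ register(slices[i], slices[k]))
        return max(candidates,
                   key=lambda T: similarity(slices[i], slices[i + 1], T))

When the direct registration fails systematically, one of the transitive candidates typically scores higher and is kept instead.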
Genetic improvement of GPU software
We survey genetic improvement (GI) of general-purpose computing on graphics cards. We summarise several experiments which demonstrate four themes. Experiments with the gzip program show that genetic programming can automatically port sequential C code to parallel code. Experiments with the StereoCamera program show that GI can upgrade legacy parallel code for new hardware and software. Experiments with NiftyReg and BarraCUDA show that GI can make substantial improvements to current parallel CUDA applications. Finally, experiments with the pknotsRG program show that, with semi-automated approaches, enormous speed-ups can sometimes be had by growing and grafting new code with genetic programming in combination with human input.
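The search loop underlying GI can be sketched as follows; mutate, passes_tests, and run_time are hypothetical hooks standing in for real tooling, and the loop illustrates the general idea rather than any of the surveyed systems.

    def genetic_improvement(program, mutate, passes_tests, run_time,
                            generations=1000):
        """Hill-climbing GI: keep mutants that test clean and run faster."""
        best, best_time = program, run_time(program)
        for _ in range(generations):
            variant = mutate(best)        # e.g. delete/swap/copy a statement
            if passes_tests(variant):     # semantics preserved on the tests
                t = run_time(variant)
                if t < best_time:
                    best, best_time = variant, t
        return best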
Computational modelling of pathogenic protein spread in neurodegenerative diseases
Pathogenic protein accumulation and spread are fundamental principles of neurodegenerative diseases and ultimately account for the atrophy patterns that distinguish these diseases clinically. However, the biological mechanisms that link pathogenic proteins to specific neural network damage patterns have not been defined. We developed computational models for mechanisms of pathogenic protein accumulation, spread and toxic effects in an artificial neural network of cortical columns. By varying simulation parameters we assessed the effects of modelled mechanisms on network breakdown patterns. Our findings suggest that patterns of network breakdown and the convergence of patterns follow rules determined by particular protein parameters. These rules can account for empirical data on pathogenic protein spread in neural networks. This work provides a basis for understanding the effects of pathogenic proteins on neural circuits and predicting the progression of neurodegeneration.
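The paper's model is an artificial neural network of cortical columns; as a much simpler illustrative stand-in, protein spread is often simulated as diffusion on a connectivity graph, du/dt = -beta * L u with L the graph Laplacian. Everything in this toy sketch (graph, seed, parameters) is an arbitrary assumption.

    import numpy as np

    A = np.array([[0, 1, 1, 0],           # toy 4-region connectivity matrix
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], float)
    L = np.diag(A.sum(axis=1)) - A        # graph Laplacian
    u = np.array([1.0, 0.0, 0.0, 0.0])    # protein seeded in region 0
    beta, dt = 0.5, 0.01

    for _ in range(1000):                 # forward-Euler integration
        u = u + dt * (-beta * (L @ u))    # spread along connections
    print(u)                              # burden spreads towards equilibrium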
Acceleration of hippocampal atrophy rates in asymptomatic amyloidosis
Increased rates of brain atrophy measured from serial magnetic resonance imaging precede symptom onset in Alzheimer's disease and may be useful outcome measures for prodromal clinical trials. Appropriate trial design requires a detailed understanding of the relationships between β-amyloid load and accumulation and the rate of brain change at this stage of the disease. Fifty-two healthy individuals (72.3 ± 6.9 years) from the Australian Imaging, Biomarkers and Lifestyle Study of Aging had serial (0, 18, and 36 months) magnetic resonance imaging, (0 and 18 months) Pittsburgh compound B positron emission tomography, and clinical assessments. We calculated rates of whole-brain and hippocampal atrophy, ventricular enlargement, amyloid accumulation, and cognitive decline. Over 3 years, rates of whole-brain atrophy (p < 0.001), left and right hippocampal atrophy (p = 0.001 and p = 0.023), and ventricular expansion (p < 0.001) were associated with baseline β-amyloid load. Whole-brain atrophy rates were also independently associated with β-amyloid accumulation over the first 18 months (p = 0.003). Acceleration of the left hippocampal atrophy rate was associated with baseline β-amyloid load across the cohort (p < 0.02). We provide evidence that rates of atrophy are associated with both baseline β-amyloid load and accumulation, and that there is presymptomatic, amyloid-mediated acceleration of hippocampal atrophy. Clinical trials using rate of hippocampal atrophy as an outcome measure should not assume linear decline in the presymptomatic phase.
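The kind of association tested above can be sketched as a regression of atrophy rate on baseline amyloid load; the data below are simulated, and the study's actual models and covariates are not reproduced.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    suvr = rng.normal(1.4, 0.3, size=52)             # baseline amyloid load
    rate = 0.5 * suvr + rng.normal(0, 0.2, size=52)  # atrophy rate (%/year)

    model = sm.OLS(rate, sm.add_constant(suvr)).fit()
    print(model.pvalues[1])    # p-value for the amyloid-load coefficient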
