    Diffeomorphic Demons using Normalised Mutual Information, Evaluation on Multi-Modal Brain MR Images

    The demons algorithm is a fast non-parametric non-rigid registration method. In recent years great efforts have been made to improve the approach; the state-of-the-art version yields symmetric inverse-consistent large-deformation diffeomorphisms. However, only limited work has explored inter-modal similarity metrics, with no practical evaluation on multi-modality data. We present a diffeomorphic demons implementation using the analytical gradient of Normalised Mutual Information (NMI) in a conjugate gradient optimiser, and report the first qualitative and quantitative assessment of the demons for inter-modal registration. Experiments to spatially normalise real MR images, and to recover simulated deformation fields, show (i) similar accuracy for NMI-demons and the classical demons when the latter can be used, and (ii) similar accuracy for NMI-demons on T1w-T1w and T1w-T2w registration, demonstrating its potential in multi-modal scenarios.
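    As an illustration of the similarity measure that drives this registration (not the authors' implementation, which additionally uses the analytical NMI gradient inside a conjugate gradient optimiser), a minimal Python sketch of Normalised Mutual Information estimated from a joint intensity histogram, assuming only NumPy, could look as follows.

        import numpy as np

        def normalised_mutual_information(fixed, moving, bins=64):
            """NMI = (H(F) + H(M)) / H(F, M), estimated from a joint histogram."""
            joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
            pxy = joint / joint.sum()        # joint intensity probability
            px = pxy.sum(axis=1)             # marginal of the fixed image
            py = pxy.sum(axis=0)             # marginal of the moving image

            def entropy(p):
                p = p[p > 0]
                return -np.sum(p * np.log(p))

            return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

    With this definition NMI lies in [1, 2]: it equals 1 for independent intensity distributions and reaches 2 when the two images are identical, which is why it suits inter-modal registration, where intensities are related but not equal.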

    Simultaneous synthesis of FLAIR and segmentation of white matter hypointensities from T1 MRIs

    Segmenting vascular pathologies such as white matter lesions in brain magnetic resonance images (MRIs) requires the acquisition of multiple sequences, such as T1-weighted (T1-w), on which lesions appear hypointense, and fluid attenuated inversion recovery (FLAIR), on which lesions appear hyperintense. However, most existing retrospective datasets do not include FLAIR sequences. Existing missing-modality imputation methods treat imputation and segmentation as separate processes. In this paper, we propose a method that links modality imputation and segmentation using convolutional neural networks. We show that by jointly optimizing the imputation network and the segmentation network, the method not only produces more realistic synthetic FLAIR images from T1-w images, but also improves the segmentation of white matter hypointensities (WMH) from T1-w images only.
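    The joint-optimisation idea can be sketched as two networks coupled through a combined loss; the toy architectures, the L1 synthesis term and the weighting factor alpha below are illustrative assumptions, not the paper's actual design.

        import torch
        import torch.nn as nn

        class JointSynthSeg(nn.Module):
            """Toy coupling of a FLAIR-synthesis CNN and a WMH-segmentation CNN."""
            def __init__(self):
                super().__init__()
                # Placeholder networks: T1-w -> synthetic FLAIR, and
                # (T1-w, synthetic FLAIR) -> lesion logits.
                self.synth_net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                               nn.Conv2d(16, 1, 3, padding=1))
                self.seg_net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                                             nn.Conv2d(16, 1, 3, padding=1))

            def forward(self, t1):
                flair_hat = self.synth_net(t1)
                seg_logits = self.seg_net(torch.cat([t1, flair_hat], dim=1))
                return flair_hat, seg_logits

        def joint_loss(flair_hat, flair, seg_logits, lesion_mask, alpha=0.5):
            # Back-propagating the weighted sum optimises both networks jointly,
            # so the synthesis is encouraged to preserve lesion-relevant contrast.
            synth_term = nn.functional.l1_loss(flair_hat, flair)
            seg_term = nn.functional.binary_cross_entropy_with_logits(seg_logits, lesion_mask)
            return alpha * synth_term + (1.0 - alpha) * seg_term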

    Robust training of recurrent neural networks to handle missing data for disease progression modeling

    Disease progression modeling (DPM) using longitudinal data is a challenging task in machine learning for healthcare that can provide clinicians with better tools for diagnosis and monitoring of disease. Existing DPM algorithms neglect temporal dependencies among measurements and make parametric assumptions about biomarker trajectories. In addition, they do not model multiple biomarkers jointly and need to align subjects' trajectories. In this paper, recurrent neural networks (RNNs) are utilized to address these issues. However, in many cases, longitudinal cohorts contain incomplete data, which hinders the application of standard RNNs and requires a pre-processing step such as imputation of the missing values. We therefore propose a generalized training rule for the most widely used RNN architecture, long short-term memory (LSTM) networks, that can handle missing values in both target and predictor variables. The algorithm is applied to modeling the progression of Alzheimer's disease (AD) using magnetic resonance imaging (MRI) biomarkers. The results show that the proposed LSTM algorithm achieves a lower mean absolute error for prediction of measurements across all considered MRI biomarkers compared to using standard LSTM networks with data imputation or a regression-based DPM method. Moreover, applying linear discriminant analysis to the biomarker values predicted by the proposed algorithm results in a larger area under the receiver operating characteristic curve (AUC) for clinical diagnosis of AD compared to the same alternatives, and the AUC is comparable to state-of-the-art AUCs from a recent cross-sectional medical image classification challenge. This paper shows that built-in handling of missing values in LSTM network training paves the way for the application of RNNs in disease progression modeling.
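    A common way to make an RNN tolerant to incompleteness in the target variables is to mask missing entries out of the loss, which conveys the flavour of, but does not reproduce, the generalised training rule proposed above (the paper also handles missing predictor values inside the recurrence). The model and mask convention below are illustrative assumptions, written with PyTorch.

        import torch
        import torch.nn as nn

        class LSTMProgressionModel(nn.Module):
            """LSTM that predicts the next visit's biomarker values from past visits."""
            def __init__(self, n_biomarkers, hidden=32):
                super().__init__()
                self.lstm = nn.LSTM(n_biomarkers, hidden, batch_first=True)
                self.head = nn.Linear(hidden, n_biomarkers)

            def forward(self, x):                 # x: (batch, visits, biomarkers)
                h, _ = self.lstm(x)
                return self.head(h)

        def masked_mae_loss(pred, target, observed):
            # `observed` is a binary mask (1 = measured, 0 = missing); the mean
            # absolute error is computed over measured target entries only.
            err = (pred - target).abs() * observed
            return err.sum() / observed.sum().clamp(min=1)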

    Training recurrent neural networks robust to incomplete data: application to Alzheimer's disease progression modeling

    Disease progression modeling (DPM) using longitudinal data is a challenging machine learning task. Existing DPM algorithms neglect temporal dependencies among measurements, make parametric assumptions about biomarker trajectories, do not model multiple biomarkers jointly, and need an alignment of subjects' trajectories. In this paper, recurrent neural networks (RNNs) are utilized to address these issues. However, in many cases, longitudinal cohorts contain incomplete data, which hinders the application of standard RNNs and requires a pre-processing step such as imputation of the missing values. Instead, we propose a generalized training rule for the most widely used RNN architecture, long short-term memory (LSTM) networks, that can handle both missing predictor and target values. The proposed LSTM algorithm is applied to model the progression of Alzheimer's disease (AD) using six volumetric magnetic resonance imaging (MRI) biomarkers, i.e., volumes of the ventricles, hippocampus, whole brain, fusiform, middle temporal gyrus, and entorhinal cortex, and it is compared to standard LSTM networks with data imputation and to a parametric, regression-based DPM method. The results show that the proposed algorithm achieves a significantly lower mean absolute error (MAE) than the alternatives (p < 0.05, Wilcoxon signed-rank test) when predicting values of almost all of the MRI biomarkers. Moreover, a linear discriminant analysis (LDA) classifier applied to the predicted biomarker values produces a significantly larger AUC for clinical diagnosis of AD (0.90 vs. at most 0.84; p < 0.001, McNemar's test). Inspection of MAE curves as a function of the amount of missing data reveals that the proposed LSTM algorithm remains the most accurate method until more than 74% of the values are missing. Finally, it is illustrated how the method can successfully be applied to data with varying time intervals.
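    The paired statistical comparisons reported here (a Wilcoxon signed-rank test on per-subject errors and McNemar's test on paired diagnostic decisions) can be run with standard SciPy and statsmodels routines, for example as sketched below; the arrays are placeholders for illustration, not the study's data.

        import numpy as np
        from scipy.stats import wilcoxon
        from statsmodels.stats.contingency_tables import mcnemar

        # Hypothetical per-subject MAEs for two methods (placeholders only).
        mae_proposed = np.array([0.11, 0.09, 0.14, 0.10, 0.12])
        mae_baseline = np.array([0.15, 0.12, 0.13, 0.16, 0.14])
        _, p_wilcoxon = wilcoxon(mae_proposed, mae_baseline)
        print(f"Wilcoxon signed-rank p = {p_wilcoxon:.3f}")

        # 2x2 table of paired diagnoses: rows = proposed correct/wrong,
        # columns = baseline correct/wrong (again, made-up counts).
        table = np.array([[50, 15],
                          [5, 30]])
        print(f"McNemar p = {mcnemar(table, exact=True).pvalue:.3f}")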

    Efficient dense non-rigid registration using the free-form deformation framework

    Medical image registration consists of finding spatial correspondences between two or more images. It is a powerful tool that is commonly used in various medical image processing tasks. Even though medical image registration has been an active topic of research for the last two decades, significant challenges in the field remain to be solved. This thesis addresses some of these challenges through extensions to the Free-Form Deformation (FFD) registration framework, which is one of the most widely used and well-established non-rigid registration algorithms. Medical image registration is a computationally expensive task because of the large number of degrees of freedom of non-rigid transformations. In this work, the FFD algorithm has been refactored to enable fast processing while maintaining the accuracy of the results. In addition, parallel computing paradigms have been employed to provide near-real-time image registration capabilities. Further modifications have been performed to improve the registration's robustness to artifacts such as tissue non-uniformity. The plausibility of the generated deformation field has been improved through the use of biomechanical-model-based regularization. Additionally, diffeomorphic extensions to the algorithm have been developed. The work presented in this thesis has been extensively validated using brain magnetic resonance imaging of patients diagnosed with dementia or undergoing brain resection. It has also been applied to lung X-ray computed tomography and to small-animal imaging. Alongside this thesis, an open-source package, NiftyReg, has been developed to release the presented work to the medical imaging community.
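    As background for the FFD framework that this thesis extends, the sketch below shows schematically how a dense 2D displacement field is interpolated from a coarse grid of control-point displacements using cubic B-spline basis functions; the control grid padded beyond the image borders and the brute-force per-pixel loop are simplifying assumptions, not NiftyReg's implementation.

        import numpy as np

        def cubic_bspline_weights(u):
            """The four cubic B-spline basis values for a fractional offset u in [0, 1)."""
            return np.array([(1 - u) ** 3 / 6.0,
                             (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
                             (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
                             u ** 3 / 6.0])

        def ffd_displacement(control, spacing, shape):
            """Dense displacement field from a (ncx, ncy, 2) control-point grid."""
            field = np.zeros(shape + (2,))
            for x in range(shape[0]):
                for y in range(shape[1]):
                    i, u = divmod(x / spacing, 1.0)
                    j, v = divmod(y / spacing, 1.0)
                    wu, wv = cubic_bspline_weights(u), cubic_bspline_weights(v)
                    for l in range(4):            # 4x4 control-point support
                        for m in range(4):
                            field[x, y] += wu[l] * wv[m] * control[int(i) + l, int(j) + m]
            return field

    For a 64x64 image with a control-point spacing of 8 pixels, a 12x12x2 array of zeros yields the identity transform, and displacing a single control point produces a smooth, local deformation, which is what makes this parameterisation attractive for non-rigid registration.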

    A Multi-Path Approach to Histology Volume Reconstruction

    This paper presents a method for correcting erratic pairwise registrations when reconstructing a volume from 2D histology slices. Due to complex and unpredictable alterations of the content of histology images, a pairwise rigid registration between two adjacent slices may fail systematically, whereas a neighbouring registration, which potentially involves one of these two slices, will work. This observation grounds our approach: correct spatial correspondences established through neighbouring registrations are used to compensate for direct failures. We propose to search for the best alignment of every pair of adjacent slices within a finite set of transformations that involve neighbouring slices in a transitive fashion. Using the proposed method, we obtained reconstructed volumes with increased coherence compared to the classical pairwise approach, on both synthetic and real data.
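    A toy version of the transitive selection rule is sketched below: for each pair of adjacent slices, the candidate transformations are the direct pairwise registration and compositions of registrations that pass through a neighbouring slice, and the candidate yielding the best similarity after resampling is kept. The pairwise, warp and ncc helpers, the rigid 3x3 matrix representation and the neighbourhood radius are assumptions for illustration, not the paper's implementation.

        import numpy as np

        def ncc(a, b):
            """Normalised cross-correlation, used here as a generic alignment score."""
            a, b = a - a.mean(), b - b.mean()
            return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        def best_adjacent_transform(k, pairwise, warp, slices, radius=2):
            """Pick the transform mapping slice k+1 onto slice k from a candidate set.

            `pairwise[i][j]` is assumed to be a 3x3 homogeneous matrix registering
            slice j onto slice i, and `warp(image, T)` resamples an image with such
            a matrix; both are assumed to be provided by an external registration tool.
            """
            candidates = [pairwise[k][k + 1]]
            for n in range(max(0, k - radius), min(len(slices), k + 2 + radius)):
                if n not in (k, k + 1):
                    # Transitive path: (k+1 -> n) followed by (n -> k).
                    candidates.append(pairwise[k][n] @ pairwise[n][k + 1])
            scores = [ncc(slices[k], warp(slices[k + 1], T)) for T in candidates]
            return candidates[int(np.argmax(scores))]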

    Forward-Backward Splitting in Deformable Image Registration: A Demons Approach

    Efficient non-linear image registration implementations are key for many biomedical imaging applications. In the classical demons approach, the associated optimization problem is solved by an alternating optimization scheme consisting of a gradient descent step followed by Gaussian smoothing. Although simple and powerful, this scheme addresses a relaxed formulation whose solution is not guaranteed to minimize the original global energy. Implicitly, however, the smoothing step can be recast as the proximal map of the regularizer. This interpretation draws a parallel to the more general Forward-Backward Splitting (FBS) scheme, which consists of a forward gradient descent step and a proximal step. By shifting entirely to FBS, we can take advantage of recent advances in FBS methods and solve the original, non-relaxed deformable registration problem for any differentiable similarity measure and any convex regularization associated with a tractable proximal operator. Additionally, global convergence to a critical point is guaranteed under weak restrictions. For the first time in the context of image registration, we show that Tikhonov regularization reduces to the simple use of B-spline filtering in the proximal step. We demonstrate the versatility of FBS by encoding the spatial transformation either as a displacement field or as a free-form B-spline deformation. We use state-of-the-art FBS solvers and compare their performance against the classical demons, the recently proposed inertial demons, and the conjugate gradient optimizer. Numerical experiments performed on both synthetic and clinical data show the advantage of FBS in image registration in terms of both convergence and accuracy.
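    Schematically, one FBS iteration on a displacement field u alternates a forward (explicit) gradient step on the differentiable similarity term with the proximal map of the regularizer. The sketch below uses a sum-of-squared-differences similarity and approximates the proximal step of a quadratic regularizer by Gaussian smoothing, mirroring the demons interpretation described above; it is an illustrative instantiation with NumPy/SciPy, not the paper's B-spline-filtering proximal operator or its accelerated solvers.

        import numpy as np
        from scipy.ndimage import gaussian_filter, map_coordinates

        def ssd_gradient(fixed, moving, u):
            """Gradient of 0.5 * || moving(x + u) - fixed(x) ||^2 w.r.t. the 2D field u."""
            grid = np.indices(fixed.shape).astype(float)
            warped = map_coordinates(moving, grid + u, order=1, mode='nearest')
            diff = warped - fixed
            return diff * np.array(np.gradient(warped))   # shape (2, H, W), like u

        def fbs_register(fixed, moving, steps=200, tau=0.05, sigma=2.0):
            """Toy forward-backward splitting: explicit gradient step + smoothing as prox."""
            u = np.zeros((2,) + fixed.shape)
            for _ in range(steps):
                u = u - tau * ssd_gradient(fixed, moving, u)      # forward step
                u = gaussian_filter(u, sigma=(0, sigma, sigma))   # proximal (backward) step
            return u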

    Genetic improvement of GPU software

    We survey genetic improvement (GI) of general-purpose computing on graphics cards. We summarise several experiments that demonstrate four themes. Experiments with the gzip program show that genetic programming can automatically port sequential C code to parallel code. Experiments with the StereoCamera program show that GI can upgrade legacy parallel code for new hardware and software. Experiments with NiftyReg and BarraCUDA show that GI can make substantial improvements to current parallel CUDA applications. Finally, experiments with the pknotsRG program show that, with semi-automated approaches, enormous speed-ups can sometimes be achieved by growing and grafting new code with genetic programming in combination with human input.