Scanner Invariant Representations for Diffusion MRI Harmonization
Purpose: In the present work we describe the correction of diffusion-weighted
MRI for site and scanner biases using a novel method based on invariant
representation.
Theory and Methods: Pooled imaging data from multiple sources are subject to
variation between the sources. Correcting for these biases is increasingly
important as imaging studies grow in size and multi-site acquisitions become
more common. We propose learning an intermediate representation invariant to
site/protocol variables, a technique adapted from information theory-based
algorithmic fairness; by leveraging the data processing inequality, such a
representation can then be used to create an image reconstruction that is
uninformative of its original source, yet still faithful to underlying
structures. To implement this, we use a deep learning method based on
variational auto-encoders (VAE) to construct scanner invariant encodings of the
imaging data.
Results: To evaluate our method, we use training data from the 2018 MICCAI
Computational Diffusion MRI (CDMRI) Challenge Harmonization dataset. Our
proposed method shows improvements on independent test data relative to a
recently published baseline method on each subtask, mapping data from three
different scanning contexts to and from one separate target scanning context.
Conclusion: As imaging studies continue to grow, the use of pooled multi-site
imaging will similarly increase. Invariant representation presents a strong
candidate for the harmonization of these data.
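The notion of a representation from which the acquisition site cannot be recovered can be illustrated, far more crudely than with the authors' VAE, by per-site standardization of a scalar feature. This toy sketch (hypothetical scanner gains and offsets, not from the paper) removes first- and second-order site information:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one scalar feature per subject, acquired at two "sites" whose
# scanners apply different gains and offsets (hypothetical numbers).
true_signal = rng.normal(1.0, 0.2, size=200)
site = np.repeat([0, 1], 100)
gain = np.where(site == 0, 1.0, 1.3)
offset = np.where(site == 0, 0.0, 0.4)
observed = gain * true_signal + offset

# Per-site standardization: afterwards the site label is uninformative of the
# feature's mean and variance -- a crude, linear analogue of learning a
# scanner-invariant representation.
harmonized = observed.copy()
for s in (0, 1):
    m = site == s
    harmonized[m] = (observed[m] - observed[m].mean()) / observed[m].std()

print(harmonized[site == 0].mean(), harmonized[site == 1].mean())
```

After standardization both site groups have zero mean and unit variance, so a linear classifier can no longer separate the sites from this feature; the learned VAE encoding generalizes this idea to nonlinear, high-dimensional structure.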
Cross-scanner and cross-protocol multi-shell diffusion MRI data harmonization: algorithms and results
Cross-scanner and cross-protocol variabilities of diffusion magnetic resonance imaging (dMRI) data are known to be major obstacles in multi-site clinical studies since they limit the ability to aggregate dMRI data and derived measures. Computational algorithms that harmonize the data and minimize such variability are critical to reliably combine datasets acquired from different scanners and/or protocols, thus improving the statistical power and sensitivity of multi-site studies. Different computational approaches have been proposed to harmonize diffusion MRI data or remove scanner-specific differences. To date, these methods have mostly been developed for or evaluated on single b-value diffusion MRI data. In this work, we present the evaluation results of 19 algorithms developed to harmonize the cross-scanner and cross-protocol variability of multi-shell diffusion MRI using a benchmark database. The proposed algorithms rely on various signal representation approaches and computational tools, such as rotationally invariant spherical harmonics, deep neural networks, and hybrid biophysical and statistical approaches. The benchmark database consists of data acquired from the same subjects on two scanners with different maximum gradient strengths (80 and 300 mT/m) and with two protocols. We evaluated the performance of these algorithms for mapping multi-shell diffusion MRI data across scanners and across protocols using several state-of-the-art imaging measures. The results show that data harmonization algorithms can reduce the cross-scanner and cross-protocol variabilities to a level similar to the scan-rescan variability of a single scanner and protocol.
In particular, the LinearRISH algorithm, based on adaptive linear mapping of rotationally invariant spherical harmonic (RISH) features, yields the lowest variability for our data in predicting fractional anisotropy (FA), mean diffusivity (MD), mean kurtosis (MK), and the RISH features themselves. However, other algorithms, such as DIAMOND, SHResNet, DIQT, and CMResNet, show further improvement in harmonizing the return-to-origin probability (RTOP). The performance of the different approaches provides useful guidelines on data harmonization in future multi-site studies.
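The adaptive linear mapping behind a LinearRISH-style approach can be sketched roughly as follows: per spherical harmonic (SH) order, the RISH feature is the sum of squared coefficients, and the source scanner's coefficients are rescaled so that the mean RISH energy per order matches the reference scanner. All data here are synthetic and the function names are hypothetical, not taken from any released challenge implementation:

```python
import numpy as np

def linear_rish_map(src, ref, orders):
    """Scale each SH order of `src` so its mean RISH energy matches `ref`.

    src, ref: (n_voxels, n_coeffs) SH coefficient arrays from two scanners.
    orders:   (n_coeffs,) SH order of each coefficient column.
    """
    mapped = src.copy()
    for l in np.unique(orders):
        idx = orders == l
        # RISH energy per order: sum of squared coefficients, averaged over voxels
        e_src = np.mean(np.sum(src[:, idx] ** 2, axis=1))
        e_ref = np.mean(np.sum(ref[:, idx] ** 2, axis=1))
        mapped[:, idx] *= np.sqrt(e_ref / e_src)
    return mapped

# Hypothetical toy data: order-0 and order-2 SH coefficients (1 + 5 = 6 terms).
orders = np.array([0, 2, 2, 2, 2, 2])
rng = np.random.default_rng(1)
ref = rng.normal(0.0, 1.0, size=(500, 6))
src = 1.5 * ref + rng.normal(0.0, 0.05, size=(500, 6))  # scanner gain + noise
mapped = linear_rish_map(src, ref, orders)
```

Because RISH features are rotation invariant, rescaling whole SH orders adjusts scanner-dependent signal energy without rotating the underlying fibre orientation information.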
A deep learning–based method for improving reliability of multicenter diffusion kurtosis imaging with varied acquisition protocols
Multicenter magnetic resonance imaging is gaining popularity in large-sample projects. Since varying hardware and software across centers cause unavoidable data heterogeneity, its impact on the reliability of study outcomes has drawn much attention recently. One fundamental issue is how to derive model parameters reliably from image data of varying quality. This issue is even more challenging for advanced diffusion methods such as diffusion kurtosis imaging (DKI). Recently, deep learning–based methods have demonstrated their potential for robust and efficient computation of diffusion-derived measures. Inspired by these approaches, the current study designed a framework based on a three-dimensional hierarchical convolutional neural network to jointly reconstruct and harmonize DKI measures from multicenter acquisitions, mapping them to state-of-the-art reference hardware using data from traveling subjects. The results from the harmonized data acquired with different protocols show that: 1) the inter-scanner variation of DKI measures within white matter was reduced by 51.5% in mean kurtosis, 65.9% in axial kurtosis, 53.7% in radial kurtosis, and 61.5% in kurtosis fractional anisotropy; 2) the data reliability of each single scanner was enhanced and brought to the level of the reference scanner; and 3) the harmonization network was able to reconstruct reliable DKI values from highly variable data. Overall, the results demonstrate the feasibility of the proposed deep learning–based method for DKI harmonization and help to simplify the protocol setup procedure for multicenter scanners with different hardware and software configurations.
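Independent of any harmonization network, the kurtosis measures named above derive from the standard DKI signal model, which for a single gradient direction is a quadratic in b on the log scale and can be fitted directly; a minimal sketch with synthetic data (hypothetical tissue values, not the paper's CNN):

```python
import numpy as np

# DKI signal model along one gradient direction:
#   ln S(b) = ln S0 - b*D + (b^2 / 6) * D^2 * K
# so D and K follow from a quadratic fit of ln S in b.
def fit_dki_1d(bvals, signal):
    c2, c1, c0 = np.polyfit(bvals, np.log(signal), 2)  # highest power first
    D = -c1                  # diffusivity from the linear term
    K = 6.0 * c2 / D ** 2    # kurtosis from the quadratic term
    return D, K

# Synthetic multi-shell acquisition with hypothetical tissue-like values
# (b in ms/um^2, i.e. b=1.0 corresponds to 1000 s/mm^2).
bvals = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
S0, D_true, K_true = 1.0, 0.8, 1.2
signal = S0 * np.exp(-bvals * D_true + (bvals ** 2 / 6.0) * D_true ** 2 * K_true)

D_fit, K_fit = fit_dki_1d(bvals, signal)
print(D_fit, K_fit)  # recovers D ≈ 0.8, K ≈ 1.2
```

In practice the fit is done per direction over noisy data and the directional values are combined into mean, axial, and radial kurtosis; it is exactly this noise sensitivity that makes DKI estimates vary across scanners and motivates the harmonization network.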
Data harmonisation for information fusion in digital healthcare: A state-of-the-art systematic review, meta-analysis and future research directions.
Removing the bias and variance of multicentre data has always been a challenge in large-scale digital healthcare studies, which require the ability to integrate clinical features extracted from data acquired with different scanners and protocols to improve stability and robustness. Previous studies have described various computational approaches to fuse single-modality multicentre datasets. However, these surveys rarely focused on evaluation metrics and lacked a checklist for computational data harmonisation studies. In this systematic review, we summarise computational data harmonisation approaches for multi-modality data in the digital healthcare field, including harmonisation strategies and evaluation metrics based on different theories. In addition, a comprehensive checklist summarising common practices for data harmonisation studies is proposed to guide researchers to report their findings more effectively. Last but not least, flowcharts presenting possible ways of selecting methodologies and metrics are proposed, and the limitations of the different methods are surveyed for future research.
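One evaluation metric that such harmonisation studies commonly report is the across-centre coefficient of variation of a derived measure; a minimal sketch with hypothetical numbers:

```python
import numpy as np

def cov_percent(values):
    """Coefficient of variation (%) of a measure across centres:
    lower after harmonisation means the centres agree more closely."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# Hypothetical mean FA of the same subject measured at four centres,
# before and after harmonisation (illustrative values only).
before = [0.48, 0.52, 0.45, 0.55]
after = [0.50, 0.51, 0.49, 0.50]
print(cov_percent(before), cov_percent(after))
```

A drop in the coefficient of variation without a shift in the group mean is the usual sign that harmonisation removed centre effects without distorting the measure itself.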
Methods for Data Management in Multi-Centre MRI Studies and Applications to Traumatic Brain Injury
Neuroimaging studies are becoming increasingly large, and multi-centre collaborations to collect data under similar protocols, but at different scanning sites, are now commonplace. However, with increasing sample size, the complexity of databases and the entailed data management, as well as the computational burden, are growing. This thesis aims to highlight and address challenges faced by large multi-centre magnetic resonance imaging (MRI) studies. The methods implemented are then applied to traumatic brain injury (TBI) data. Firstly, a pre-processing pipeline for both anatomical and diffusion MRI was proposed that allows for a high throughput of MRI scans. After describing the choices of processing tools, the performance of the integrated quality assurance was assessed based on the results from a large multi-centre dataset for TBI. Secondly, the applicability of the pipelines for processing mild TBI (mTBI) data from three sites was shown in a case study. For this, volumetric and diffusion metrics in the acute phase are analysed for their prognostic potential. Furthermore, the cohort was examined for longitudinal changes. Thirdly, independent scan-rescan datasets are examined to gain a better understanding of the degree of reproducibility which can be achieved in imaging studies. This involves analysing the robustness of brain parcellations based on structural or diffusion imaging. The effect of using different MRI scanners or imaging protocols was also assessed and discussed. Fourthly, sources of diffusion MRI variability and different approaches to cope with these are reviewed. Using this foundation, state-of-the-art methods for diffusion MRI harmonisation were compared against each other using both a benchmark dataset and the mTBI cohort. Lastly, a solution to localise brain lesions was proposed. Its implications for lesion analysis are assessed in light of an application to a more severe TBI patient cohort, imaged on two different scanners.
Furthermore, a lesion matching algorithm was introduced to automatically examine lesion evolution with time post-injury. In summary, this thesis explored different options for MRI data analysis in the context of large multi-centre studies. Different approaches are studied and compared using a number of MRI datasets, including scan-rescan data across different MRI scanners and imaging protocols. The potential of the optimised solutions was illustrated through applications to TBI data.
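The scan-rescan robustness of brain parcellations examined in the thesis is commonly quantified with the Dice overlap of corresponding labels; a minimal sketch on hypothetical toy label maps:

```python
import numpy as np

def dice(labels_a, labels_b, label):
    """Dice overlap of one parcellation label between two segmentations:
    1.0 means identical regions, 0.0 means no overlap."""
    a = labels_a == label
    b = labels_b == label
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else np.nan

# Hypothetical scan and rescan label maps (0 = background, 1/2 = regions);
# real parcellations would be 3D volumes with many labels.
scan = np.array([[1, 1, 2],
                 [1, 2, 2],
                 [0, 0, 2]])
rescan = np.array([[1, 1, 2],
                   [1, 1, 2],
                   [0, 0, 2]])
print(dice(scan, rescan, 1), dice(scan, rescan, 2))
```

Averaging the per-label Dice scores over regions and subjects gives a single reproducibility figure that can be compared across scanners, protocols, and parcellation methods.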