Comparison of quality control methods for automated diffusion tensor imaging analysis pipelines
Authors
Christopher J M Scott
Dar Dowlatshahi
Joel Ramirez
Manuel Montero-Odasso
Melissa F Holmes
Miracle Ozzoude
Nuwan D Nanayakkara
ONDRI Investigators
Richard H Swartz
Robert Bartha
Sandra E Black
Sean Symons
Seyyed M H Haddad
Stephen C Strother
Stephen R Arnott
Publication date
1 January 2019
Publisher
Scholarship@Western
Abstract
© 2019 Haddad et al. The processing of brain diffusion tensor imaging (DTI) data for large cohort studies requires fully automatic pipelines to perform quality control (QC) and artifact/outlier removal procedures on the raw DTI data prior to calculation of diffusion parameters. In this study, three automatic DTI processing pipelines, each complying with the general ENIGMA framework, were designed by uniquely combining multiple image processing software tools. Different QC procedures based on the RESTORE algorithm, the DTIPrep protocol, and a combination of both methods were compared using simulated ground-truth and artifact-containing DTI datasets modeling eddy-current-induced distortions, various levels of motion artifacts, and thermal noise. Variability was also examined in 20 DTI datasets acquired in subjects with vascular cognitive impairment (VCI) from the multi-site Ontario Neurodegenerative Disease Research Initiative (ONDRI). The mean fractional anisotropy (FA), mean diffusivity (MD), axial diffusivity (AD), and radial diffusivity (RD) were calculated in global brain grey matter (GM) and white matter (WM) regions. For the simulated DTI datasets, the measure used to evaluate the performance of the pipelines was the normalized difference between the mean DTI metrics measured in GM and WM regions and the corresponding ground-truth DTI value. The performance of the proposed pipelines was very similar, particularly in FA measurements. However, the pipeline based on the RESTORE algorithm was the most accurate when analyzing the artifact-containing DTI datasets. The pipeline that combined the DTIPrep protocol and the RESTORE algorithm produced the lowest standard deviation in FA measurements in normal-appearing WM across subjects. We concluded that this pipeline was the most robust and is preferred for automated analysis of multi-site brain DTI data.
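The abstract compares pipelines via regional means of FA, MD, AD, and RD and a normalized difference from ground truth. As a minimal sketch (not the paper's code; the exact form of the paper's normalization is an assumption here), these standard scalar metrics follow directly from the three eigenvalues of the diffusion tensor:

```python
from math import sqrt

def dti_metrics(eigenvalues):
    """Standard DTI scalar metrics from the three diffusion-tensor eigenvalues."""
    l1, l2, l3 = sorted(eigenvalues, reverse=True)
    md = (l1 + l2 + l3) / 3.0          # mean diffusivity
    ad = l1                            # axial diffusivity (largest eigenvalue)
    rd = (l2 + l3) / 2.0               # radial diffusivity
    num = sqrt((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
    den = sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    fa = sqrt(1.5) * num / den if den > 0 else 0.0  # fractional anisotropy, in [0, 1]
    return fa, md, ad, rd

def normalized_difference(measured, truth):
    """One plausible form of the evaluation measure: relative deviation of a
    regional mean metric from its ground-truth value (assumed, not from the paper)."""
    return abs(measured - truth) / truth
```

An isotropic tensor (equal eigenvalues) yields FA = 0, while a fully anisotropic one (a single nonzero eigenvalue) yields FA = 1, which makes the simulated ground-truth comparison straightforward to sanity-check.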
Available Versions
Scholarship@Western: oai:ir.lib.uwo.ca:biophysicspu... (last updated 20/03/2021)
Directory of Open Access Journals: oai:doaj.org/article:0605edb3b... (last updated 24/06/2021)