
    Development of quality standards for multi-center, longitudinal magnetic resonance imaging studies in clinical neuroscience

    Magnetic resonance imaging (MRI) data are generated by a complex procedure, and many possible sources of error can degrade the signal. For example, hidden defective components of an MRI scanner, changes in the static magnetic field caused simply by a person moving in the scanner room, and changes in the measurement sequences can all reduce the signal-to-noise ratio (SNR). A comprehensive, reproducible quality assurance (QA) procedure is therefore necessary to ensure reproducible results from both the MRI equipment and its human operators. There are two ways to examine the quality of MRI data. On the one hand, water- or gel-filled objects, so-called "phantoms", are measured regularly; based on this signal, which ideally should remain stable, the general performance of the MRI scanner can be tested. On the other hand, the data of actual interest, mostly human data, are checked directly for certain signal parameters (e.g., SNR, motion parameters). This thesis consists of two parts. In the first part, a study-specific QA protocol was developed for a large multicenter MRI study, FOR2107. The aim of FOR2107 is to investigate the causes and course of affective disorders, unipolar depression and bipolar disorders, taking clinical and neurobiological effects into account. The main component of FOR2107 is the MRI measurement of more than 2,000 subjects in a longitudinal design (currently repeated measurements after 2 years, with further measurements planned after 5 years). To relate MRI data to disease history, the MRI data must provide stable results over the course of the study; ensuring this stability is the subject of this part of the work. An extensive QA framework, based on phantom measurements, human data analysis, protocol compliance testing, and related checks, was set up. In addition to developing parameters for the characterization of MRI data, the QA protocols in use were improved during the study. The differences between sites and the impact of these differences on human data analysis were analyzed. The comprehensive quality assurance for the FOR2107 study showed significant differences in the MRI signal (for both human and phantom data) between the centers. Problems that occurred could be recognized in time and corrected, and these differences must be accounted for in current and future analyses of the human data. For the second part of this thesis, a QA protocol (and the freely available associated software "LAB-QA2GO") was developed and tested; it can be used for individual studies or to monitor the quality of an MRI scanner. This routine was developed because many sites and studies perform no explicit QA, even though suitable, freely available QA software for MRI measurements exists. With LAB-QA2GO, it is possible to set up a QA protocol for an MRI scanner or a study without much effort or IT knowledge. Both parts of the thesis deal with the implementation of QA procedures. High-quality data and study results can be achieved only through the use of appropriate QA procedures, as presented in this work. QA measures should therefore be implemented at all levels of a project and embedded permanently in project and evaluation routines.
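
    The following minimal sketch (in Python) illustrates the phantom-based half of such a QA routine: estimating SNR from a signal region at the phantom centre and a background noise region. It shows the general principle only, not the FOR2107 or LAB-QA2GO implementation; the file name, ROI positions, and ROI sizes are hypothetical.

        # Minimal phantom SNR sketch; assumes a 3D phantom image in NIfTI format.
        import numpy as np
        import nibabel as nib

        img = nib.load("phantom_qa.nii.gz")  # hypothetical file name
        data = img.get_fdata()

        # Signal ROI: a small cube at the phantom centre, where signal should be uniform.
        cx, cy, cz = (s // 2 for s in data.shape[:3])
        signal_roi = data[cx-5:cx+5, cy-5:cy+5, cz-5:cz+5]

        # Noise ROI: a corner of the field of view, assumed to contain only background.
        noise_roi = data[:10, :10, :10]

        snr = signal_roi.mean() / noise_roi.std()
        print(f"Phantom SNR: {snr:.1f}")

        # In a longitudinal QA protocol, this value would be logged per session and
        # flagged when it drifts beyond a tolerance band around the running mean.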

    BrainCAT: a tool for automated and combined functional magnetic resonance imaging and diffusion tensor imaging brain connectivity analysis

    Multimodal neuroimaging studies have recently become a trend in the neuroimaging field and will certainly be a standard in the future. Brain connectivity studies combining functional activation patterns from resting-state or task-related functional magnetic resonance imaging (fMRI) with diffusion tensor imaging (DTI) tractography are growing in popularity. However, there is a scarcity of solutions for performing optimized, intuitive, and consistent multimodal fMRI/DTI studies. Here we propose a new tool, the brain connectivity analysis tool (BrainCAT), for automated and standardized multimodal analysis of combined fMRI/DTI data using freely available tools. With a friendly graphical user interface, BrainCAT aims to make data processing easier and faster, implementing a fully automated data processing pipeline and minimizing the need for user intervention, which we hope will expand the use of combined fMRI/DTI studies. Its validity was tested in an aging study of default mode network (DMN) white matter connectivity. The results evidenced the cingulum bundle as the structural connector of the precuneus/posterior cingulate cortex and the medial frontal cortex, regions of the DMN. Moreover, mean fractional anisotropy (FA) values along the cingulum extracted with BrainCAT showed a strong correlation with FA values from manual selection of the same bundle. Taken together, these results provide evidence that BrainCAT is suitable for these analyses. The authors thank the developers of all the software tools used by BrainCAT, namely MRIcron, FSL, Diffusion Toolkit, and TrackVis. This work was supported by SwitchBox-FP7-HEALTH-2010 grant 259772-2.
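
    A hedged sketch of the validation step described above: computing mean FA within an automatically extracted tract mask and correlating per-subject values against manual delineations. This is not BrainCAT code; the file names and per-subject values are placeholders.

        import numpy as np
        import nibabel as nib
        from scipy.stats import pearsonr

        def mean_fa(fa_path, mask_path):
            # Mean fractional anisotropy within a binary tract mask.
            fa = nib.load(fa_path).get_fdata()
            mask = nib.load(mask_path).get_fdata() > 0
            return fa[mask].mean()

        # Placeholder per-subject values standing in for automated vs. manual
        # delineations of the cingulum; real use would call mean_fa() per subject.
        auto_values = np.array([0.52, 0.48, 0.55, 0.50, 0.47])
        manual_values = np.array([0.51, 0.47, 0.56, 0.49, 0.46])
        r, p = pearsonr(auto_values, manual_values)
        print(f"r = {r:.2f}, p = {p:.3f}")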

    The ENIGMA Stroke Recovery Working Group: Big data neuroimaging to study brain–behavior relationships after stroke

    The goal of the Enhancing Neuroimaging Genetics through Meta‐Analysis (ENIGMA) Stroke Recovery working group is to understand brain–behavior relationships using well‐powered meta‐ and mega‐analytic approaches. ENIGMA Stroke Recovery has data from over 2,100 stroke patients collected across 39 research studies and 10 countries, comprising the largest multisite retrospective stroke data collaboration to date. This article outlines the efforts taken by the ENIGMA Stroke Recovery working group to develop neuroinformatics protocols and methods to manage multisite stroke brain magnetic resonance imaging (MRI), behavioral, and demographic data. Specifically, the processes for scalable data intake and preprocessing, multisite data harmonization, and large-scale stroke lesion analysis are described, and challenges unique to this type of big data collaboration in stroke research are discussed. Finally, future directions and limitations, as well as recommendations for improved data harmonization through prospective data collection and data management, are provided.
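
    As a rough illustration of what multisite harmonization means in practice, the sketch below applies a simple location-scale adjustment that removes each site's mean and rescales to pooled variability; ComBat-style methods used in large collaborations refine this idea with empirical Bayes. This is not the working group's actual pipeline, and the column names and values are hypothetical.

        import pandas as pd

        df = pd.DataFrame({
            "site": ["A", "A", "B", "B"],
            "lesion_volume": [12.1, 9.8, 20.4, 17.7],  # placeholder values
        })

        grand_mean = df["lesion_volume"].mean()
        pooled_std = df["lesion_volume"].std()
        # Remove each site's mean and rescale to pooled variability.
        df["harmonized"] = df.groupby("site")["lesion_volume"].transform(
            lambda x: (x - x.mean()) / x.std() * pooled_std + grand_mean
        )
        print(df)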

    Automatic Autism Spectrum Disorder Detection Using Artificial Intelligence Methods with MRI Neuroimaging: A Review

    Autism spectrum disorder (ASD) is a brain condition characterized by diverse signs and symptoms that appear in early childhood. ASD is also associated with communication deficits and repetitive behavior in affected individuals. Various ASD detection methods have been developed, including neuroimaging modalities and psychological tests. Among these methods, magnetic resonance imaging (MRI) is of paramount importance to physicians, who rely on MRI modalities to diagnose ASD accurately. MRI is a non-invasive method comprising functional (fMRI) and structural (sMRI) neuroimaging approaches. However, diagnosing ASD from fMRI and sMRI is often laborious and time-consuming for specialists; therefore, several computer-aided diagnosis systems (CADS) based on artificial intelligence (AI) have been developed to assist specialist physicians. Conventional machine learning (ML) and deep learning (DL) are the most popular AI schemes used for diagnosing ASD. This study aims to review the automated detection of ASD using AI. We review several CADS developed using ML techniques for the automated diagnosis of ASD from MRI modalities. There has been very limited work on the use of DL techniques to develop automated diagnostic models for ASD; a summary of the studies developed using DL is provided in the appendix. We then describe in detail the challenges encountered in the automated diagnosis of ASD using MRI and AI techniques, and present a graphical comparison of studies using ML and DL to diagnose ASD automatically. We conclude by suggesting future approaches to detecting ASD using AI techniques and MRI neuroimaging.
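
    For orientation, a minimal example of the conventional ML pipeline such reviews survey: a linear support vector machine over vectorized connectivity features with cross-validated accuracy. The feature matrix and labels below are synthetic placeholders, not data from any reviewed study.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 500))   # e.g., vectorized connectivity matrices
        y = rng.integers(0, 2, size=100)  # 0 = control, 1 = ASD (synthetic labels)

        clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"Accuracy: {scores.mean():.2f} ± {scores.std():.2f}")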

    MAGNIMS recommendations for harmonization of MRI data in MS multicenter studies

    Keywords: Harmonization; MRI; Multiple sclerosis
    There is an increasing need to share harmonized data from large, cooperative studies, as this is essential for developing new diagnostic and prognostic biomarkers. In the field of multiple sclerosis (MS), the issue has become of paramount importance due to the need to translate some of the most recent MRI achievements into the clinical setting. However, differences in MRI acquisition parameters, image analysis, and data storage across sites, with their potential bias, represent a substantial constraint. This review focuses on the state of the art, recent technical advances, and desirable future developments in the harmonization of acquisition, analysis, and storage of large-scale multicentre MRI data from MS cohorts. Huge efforts are currently being made to meet all the requirements needed to provide harmonized MRI datasets in the MS field, as proper management of large imaging datasets is one of our greatest opportunities and challenges in the coming years. Recommendations based on these achievements are provided here. Despite the advances that have been made, the complexity of these tasks requires further research by specialized academic centres with dedicated technical and human resources. Such collective efforts, involving different professional figures, are of crucial importance to offer MS patients personalised management while minimizing the consumption of resources.
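
    One concrete harmonization step recommended in such efforts is verifying that each site's acquisition parameters match an agreed reference protocol. The sketch below checks BIDS JSON sidecars against reference values; the sidecar fields are standard BIDS metadata, but the dataset path, file pattern, and tolerances are illustrative assumptions.

        import json
        from pathlib import Path

        # Agreed reference protocol and per-parameter tolerances (example values).
        reference = {"RepetitionTime": 2.0, "EchoTime": 0.03, "FlipAngle": 90}
        tolerance = {"RepetitionTime": 0.01, "EchoTime": 0.002, "FlipAngle": 1}

        # Walk a hypothetical multisite BIDS dataset and flag deviations.
        for sidecar in Path("multisite_dataset").rglob("*_T1w.json"):
            params = json.loads(sidecar.read_text())
            for key, ref in reference.items():
                if abs(params.get(key, float("inf")) - ref) > tolerance[key]:
                    print(f"{sidecar}: {key}={params.get(key)} deviates from {ref}")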

    The Human Connectome Project: A retrospective

    The Human Connectome Project (HCP) was launched in 2010 as an ambitious effort to accelerate advances in human neuroimaging, particularly for measures of brain connectivity; apply these advances to study a large number of healthy young adults; and freely share the data and tools with the scientific community. NIH awarded grants to two consortia; this retrospective focuses on the WU-Minn-Ox HCP consortium centered at Washington University, the University of Minnesota, and the University of Oxford. In just over six years, the WU-Minn-Ox consortium succeeded in its core objectives by: 1) improving MR scanner hardware, pulse sequence design, and image reconstruction methods; 2) acquiring and analyzing multimodal MRI and MEG data of unprecedented quality, together with behavioral measures, from more than 1,100 HCP participants; and 3) freely sharing the data (via the ConnectomeDB database) and associated analysis and visualization tools. To date, more than 27 petabytes of data have been shared, and 1,538 papers acknowledging HCP data use have been published. The HCP-style neuroimaging paradigm has emerged as a set of best-practice strategies for optimizing data acquisition and analysis. This article reviews the history of the HCP, including comments on key events and decisions associated with major project components. We discuss several scientific advances using HCP data, including improved cortical parcellations, analyses of connectivity based on functional and diffusion MRI, and analyses of brain-behavior relationships. We also touch upon our efforts to develop and share a variety of associated data processing and analysis tools, along with detailed documentation, tutorials, and an educational course to train the next generation of neuroimagers. We conclude with a look forward at opportunities and challenges facing the human neuroimaging field from the perspective of the HCP consortium.
