Combining Citizen Science and Deep Learning to Amplify Expertise in Neuroimaging
Big Data promises to advance science through data-driven discovery. However, many standard lab protocols rely on manual examination, which is not feasible for large-scale datasets. Meanwhile, automated approaches lack the accuracy of expert examination. We propose to (1) start with expertly labeled data, (2) amplify labels through web applications that engage citizen scientists, and (3) train machine learning on amplified labels, to emulate the experts. Demonstrating this, we developed a system to quality control brain magnetic resonance images. Expert-labeled data were amplified by citizen scientists through a simple web interface. A deep learning algorithm was then trained to predict data quality, based on citizen scientist labels. Deep learning performed as well as specialized algorithms for quality control (AUC = 0.99). Combining citizen science and deep learning can generalize and scale expert decision making; this is particularly important in disciplines where specialized, automated tools do not yet exist.
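To make the pipeline concrete, here is a minimal sketch of the amplify-then-train idea on synthetic data. Everything in it (rater counts, error rate, toy features, the logistic-regression stand-in for the paper's deep learning model) is assumed for illustration, not taken from the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical setup: 1000 images, each rated pass/fail by 5 citizen scientists
# who disagree with the (latent) expert label 20% of the time.
n_images, n_raters = 1000, 5
expert_label = rng.integers(0, 2, n_images)
rater_error = rng.random((n_images, n_raters)) < 0.2
ratings = np.abs(expert_label[:, None] - rater_error.astype(int))

# Amplification step: aggregate the noisy ratings into one label per image.
amplified_label = (ratings.mean(axis=1) > 0.5).astype(int)

# Training step: fit a classifier on the amplified labels. Toy features stand
# in for image data; a logistic regression stands in for the deep network.
features = rng.normal(size=(n_images, 20)) + expert_label[:, None]
X_tr, X_te, y_tr, y_te = train_test_split(features, amplified_label, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```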
Effect of relative humidity on the evaporation of a colloidal solution droplet
Background: The deposition of uniform layers of colloids on a solid surface is a major challenge for several industrial processes, such as glass surface treatment and the creation of optical filters. One strategy involves depositing the colloids behind a contact line that recedes due to hydrodynamics and evaporation (drying). The interplay between deposition, evaporation, and hydrodynamics is complex, and a better understanding is needed of the mechanisms at the contact line and the role they play in producing an organized deposit [1].
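Although this abstract stops short of the analysis, the humidity dependence is easy to picture with a textbook diffusion-limited evaporation model (Maxwell's spherical-droplet result, not the paper's sessile-drop treatment; all parameter values below are assumed):

```python
import numpy as np

# Diffusion-limited evaporation of a spherical droplet:
#   dm/dt = -4*pi*R*D*c_sat*(1 - RH)
# which integrates to the d^2-law lifetime t = rho*R0^2 / (2*D*c_sat*(1 - RH)).
D = 2.5e-5      # m^2/s, water vapor diffusivity in air (~25 C)
c_sat = 2.3e-2  # kg/m^3, saturated vapor concentration (~25 C)
rho = 1.0e3     # kg/m^3, liquid water density
R0 = 50e-6      # m, initial droplet radius (assumed)

for RH in (0.0, 0.5, 0.9, 0.99):
    lifetime = rho * R0**2 / (2 * D * c_sat * (1 - RH))
    print(f"RH = {RH:4.2f}: lifetime ~ {lifetime:7.1f} s")
```

The (1 - RH) factor is the point: as the ambient air approaches saturation, evaporation slows and the droplet lifetime diverges, which directly changes how long the receding contact line has to organize the deposit.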
From the Wet Lab to the Web Lab: A Paradigm Shift in Brain Imaging Research
Web technology has transformed our lives, and has led to a paradigm shift in the computational sciences. As the neuroimaging informatics research community amasses large datasets to answer complex neuroscience questions, we find that the web is the best medium to facilitate novel insights by way of improved collaboration and communication. Here, we review the landscape of web technologies used in neuroimaging research, and discuss future applications, areas for improvement, and the limitations of using web technology in research. Fully incorporating web technology in our research lifecycle requires not only technical skill but also a widespread culture change: a shift from the small, focused “wet lab” to a multidisciplinary and largely collaborative “web lab.”
Mindcontrol: a web application for brain segmentation quality control
Tissue classification plays a crucial role in the investigation of normal neural development, brain-behavior relationships, and the disease mechanisms of many psychiatric and neurological illnesses. Ensuring the accuracy of tissue classification is important for quality research and, in particular, for the translation of imaging biomarkers to clinical practice. Assessment with the human eye is vital for correcting the various errors inherent to all currently available segmentation algorithms. Manual quality assurance becomes methodologically difficult at large scale, a problem of increasing importance as the number of datasets rises. To make this process more efficient, we have developed Mindcontrol, an open-source web application for the collaborative quality control of neuroimaging processing outputs. The Mindcontrol platform consists of a dashboard to organize data, descriptive visualizations to explore the data, an imaging viewer, and an in-browser annotation and editing toolbox for data curation and quality control. Mindcontrol is flexible and can be configured for the outputs of any software package in any data organization structure. Example configurations for three large, open-source datasets are presented: the 1000 Functional Connectomes Project (FCP), the Consortium for Reliability and Reproducibility (CoRR), and the Autism Brain Imaging Data Exchange (ABIDE) Collection. These demo applications link descriptive quality control metrics, regional brain volumes, and thickness scalars to a 3D imaging viewer and editing module, resulting in an easy-to-implement quality control protocol that can be scaled to studies of any size and complexity.
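The "configure for any software package" claim is easiest to picture as a declarative manifest mapping QC metrics and image locations to viewer modules. The sketch below is hypothetical: Mindcontrol's real configuration schema lives in its repository and may differ in every detail.

```python
# Hypothetical Mindcontrol-style configuration (illustrative only; not the
# tool's actual schema). One entry per scan, linking metrics to the viewer.
demo_config = {
    "name": "ABIDE-demo",
    "modules": ["dashboard", "viewer", "annotate"],   # assumed module names
    "entries": [
        {
            "subject_id": "sub-0001",
            "image": "https://example.org/abide/sub-0001_T1w.nii.gz",  # assumed URL
            "metrics": {  # descriptive QC metrics surfaced on the dashboard
                "snr": 11.2,
                "left_hippocampus_volume_mm3": 4102.0,
                "mean_cortical_thickness_mm": 2.51,
            },
        },
    ],
}
```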
Predicting Treatment Response in Social Anxiety Disorder From Functional Magnetic Resonance Imaging
Context: Current behavioral measures poorly predict treatment outcome in social anxiety disorder (SAD). To our knowledge, this is the first study to examine neuroimaging-based treatment prediction in SAD.
Objective: To measure brain activation in patients with SAD as a biomarker to predict subsequent response to cognitive behavioral therapy (CBT).
Design: Functional magnetic resonance imaging (fMRI) data were collected prior to CBT intervention. Changes in clinical status were regressed on brain responses and tested for selectivity for social stimuli.
Setting: Patients were treated with protocol-based CBT at anxiety disorder programs at Boston University or Massachusetts General Hospital and underwent neuroimaging data collection at Massachusetts Institute of Technology.
Patients: Thirty-nine medication-free patients meeting DSM-IV criteria for the generalized subtype of SAD.
Interventions: Brain responses to angry vs neutral faces or emotional vs neutral scenes were examined with fMRI prior to initiation of CBT.
Main Outcome Measures: Whole-brain regression analyses with differential fMRI responses for angry vs neutral faces and changes in Liebowitz Social Anxiety Scale score as the treatment outcome measure.
Results: Pretreatment responses significantly predicted subsequent treatment outcome of patients selectively for social stimuli and particularly in regions of higher-order visual cortex. Combining the brain measures with information on clinical severity accounted for more than 40% of the variance in treatment response and substantially exceeded predictions based on clinical measures at baseline. Prediction success was unaffected by testing for potential confounding factors such as depression severity at baseline.
Conclusions: The results suggest that brain imaging can provide biomarkers that substantially improve predictions for the success of cognitive behavioral interventions and, more generally, that such biomarkers may offer evidence-based, personalized medicine approaches for optimally selecting among treatment options for a patient.
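A minimal sketch of the prediction framing used here, with simulated stand-ins for the fMRI contrast values and baseline severity (the study's whole-brain regression, LSAS scores, and patient data are not reproduced):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n = 39  # matches the study's sample size; everything else is simulated

# Stand-ins: differential fMRI response (angry vs neutral faces) in a
# higher-order visual ROI, plus a baseline clinical severity score.
fmri_response = rng.normal(size=n)
baseline_severity = rng.normal(size=n)
outcome = 0.6 * fmri_response + 0.3 * baseline_severity + rng.normal(scale=0.7, size=n)

# Regress treatment outcome on brain + clinical measures, cross-validated.
X = np.column_stack([fmri_response, baseline_severity])
pred = cross_val_predict(LinearRegression(), X, outcome, cv=5)
r2 = 1 - np.sum((outcome - pred) ** 2) / np.sum((outcome - outcome.mean()) ** 2)
print(f"cross-validated variance explained: {r2:.2f}")
```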
Mindboggling morphometry of human brains
Mindboggle (http://mindboggle.info) is an open-source brain morphometry platform that takes in preprocessed T1-weighted MRI data and outputs volume, surface, and tabular data containing label, feature, and shape information for further analysis. In this article, we document the software and demonstrate its use in studies of shape variation in healthy and diseased humans. The number of different shape measures and the size of the populations make this the largest and most detailed shape analysis of human brains ever conducted. Brain image morphometry shows great potential for providing much-needed biological markers for diagnosing, tracking, and predicting progression of mental health disorders. Very few software algorithms provide more than measures of volume and cortical thickness, while more subtle shape measures may provide more sensitive and specific biomarkers. Mindboggle computes a variety of (primarily surface-based) shapes: area, volume, thickness, curvature, depth, Laplace-Beltrami spectra, Zernike moments, etc. We evaluate Mindboggle’s algorithms using the largest set of manually labeled, publicly available brain images in the world and compare them against state-of-the-art algorithms where they exist. All data, code, and results of these evaluations are publicly available.
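As a flavor of the surface-based shape measures listed above, here is a generic triangle-mesh area computation in NumPy. It is not Mindboggle's code, just the standard cross-product formula such measures build on:

```python
import numpy as np

def triangle_mesh_area(vertices, faces):
    """Total surface area of a triangle mesh.

    vertices: (V, 3) float array of xyz coordinates
    faces:    (F, 3) int array of vertex indices per triangle
    """
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # Each triangle's area is half the norm of its edge cross product.
    cross = np.cross(v1 - v0, v2 - v0)
    return 0.5 * np.linalg.norm(cross, axis=1).sum()

# Sanity check: a unit square split into two triangles has area 1.0.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
print(triangle_mesh_area(verts, faces))  # -> 1.0
```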
Spinal Cord Atrophy Predicts Progressive Disease in Relapsing Multiple Sclerosis
Objective: A major challenge in multiple sclerosis (MS) research is understanding silent progression and progressive MS. Using a novel method to accurately capture upper cervical cord area from legacy brain MRI scans, we aimed to study the role of spinal cord and brain atrophy in silent progression and conversion to secondary progressive disease (SPMS).
Methods: From a single-center observational study, all relapsing-remitting MS (RRMS) patients (n = 360), all SPMS patients (n = 47), and 80 matched controls were evaluated. RRMS patient subsets who converted to SPMS (n = 54) or silently progressed (n = 159) during the 12-year observation period were compared to clinically matched RRMS patients who remained RRMS (n = 54) or stable (n = 147), respectively. From brain MRI, we assessed the value of brain and spinal cord measures for predicting silent progression and SPMS conversion.
Results: Patients who developed SPMS showed faster cord atrophy rates (-2.19%/yr) at least 4 years before conversion compared to their RRMS matches (-0.88%/yr, p < 0.001). Spinal cord atrophy rates decelerated after conversion (-1.63%/yr, p = 0.010) toward those of patients who were SPMS from study entry (-1.04%/yr). Each 1% faster spinal cord atrophy rate was associated with a 69% (p < 0.0001) and 53% (p < 0.0001) shorter time to silent progression and SPMS conversion, respectively.
Interpretation: Silent progression and conversion to secondary progressive disease are predominantly related to cervical cord atrophy. This atrophy is often present from the earliest disease stages and predicts the speed of both silent progression and conversion to progressive MS. The diagnosis of SPMS is thus a late recognition of this neurodegenerative process rather than a distinct disease phase.
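For readers unfamiliar with how such rates are expressed, a minimal sketch of the annualized percent-atrophy arithmetic, with assumed example values (not patient data from the study):

```python
# Annualized percent atrophy rate from two cord-area measurements
# (illustrative values only).
area_baseline_mm2 = 80.0   # upper cervical cord area at first scan
area_followup_mm2 = 76.6   # area at follow-up
years = 2.0

rate_pct_per_year = 100 * (area_followup_mm2 - area_baseline_mm2) / (
    area_baseline_mm2 * years
)
print(f"atrophy rate: {rate_pct_per_year:.2f} %/yr")  # -> -2.12 %/yr
```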
Reply to "Spinal Cord Atrophy Is a Preclinical Marker of Progressive MS"
No abstract available.
Power estimation for non-standardized multisite studies
A concern for researchers planning multisite studies is that scanner- and T1-weighted-sequence-related biases on regional volumes could overshadow true effects, especially for studies with a heterogeneous set of scanners and sequences. Current approaches attempt to harmonize data by standardizing hardware, pulse sequences, and protocols, or by calibrating across sites using phantom-based corrections to ensure the same raw image intensities. We propose to avoid harmonization and phantom-based correction entirely. We hypothesized that the bias of estimated regional volumes is scaled between sites due to contrast and gradient distortion differences between scanners and sequences. Given this assumption, we provide a new statistical framework and derive a power equation to define inclusion criteria for a set of sites based on the variability of their scaling factors. We estimated the scaling factors of 20 scanners with heterogeneous hardware and sequence parameters by scanning a single set of 12 subjects at sites across the United States and Europe. Regional volumes and their scaling factors were estimated for each site using FreeSurfer's segmentation algorithm and ordinary least squares, respectively. The scaling factors were validated by comparing the theoretical and simulated power curves, performing a leave-one-out calibration of regional volumes, and evaluating the absolute agreement of all regional volumes between sites before and after calibration. Using our derived power equation, we were able to define the conditions under which harmonization is not necessary to achieve 80% power. This approach can inform the choice of processing pipelines and outcome metrics for multisite studies based on scaling-factor variability across sites, enabling collaboration between clinical and research institutions.
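A minimal sketch of the two ingredients this abstract describes: estimating a between-site scaling factor by ordinary least squares from traveling subjects, and a power calculation. The paper derives its own power equation; the sketch below substitutes a generic normal-approximation two-sample formula, and all data are simulated:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Hypothetical: 12 traveling subjects, one regional volume measured at two sites.
true_vol = rng.normal(7500, 600, size=12)              # e.g. hippocampus, mm^3
site_a = true_vol + rng.normal(0, 50, size=12)
site_b = 1.03 * true_vol + rng.normal(0, 50, size=12)  # site B scaled by ~3%

# Scaling factor via no-intercept OLS: site_b ~ theta * site_a.
theta = site_a @ site_b / (site_a @ site_a)
print(f"estimated scaling factor: {theta:.3f}")

def power_two_sample(n_per_group, effect, sd, alpha=0.05):
    """Generic normal-approximation power for a two-sample comparison."""
    z = norm.ppf(1 - alpha / 2)
    return norm.cdf(np.sqrt(n_per_group / 2) * effect / sd - z)

# e.g. detecting a 100 mm^3 group difference with sd 250 mm^3 per group:
print(f"power at n = 50/group: {power_two_sample(50, 100, 250):.2f}")
```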
Wet Lab to Web Lab
Slides and video from a presentation at the Open Science Symposium at Carnegie Mellon University on October 18th, 201