
    Neurophysiological Vocal Source Modeling for Biomarkers of Disease

    Speech is potentially a rich source of biomarkers for detecting and monitoring neuropsychological disorders. Current biomarkers typically comprise acoustic descriptors extracted from behavioral measures of source, filter, prosodic and linguistic cues. In contrast, in this paper, we extract vocal features based on a neurocomputational model of speech production, reflecting latent or internal motor control parameters that may be more sensitive to individual variation under neuropsychological disease. These features, which are constrained by neurophysiology, may be resilient to artifacts and provide an articulatory complement to acoustic features. Our features represent a mapping from a low-dimensional acoustics-based feature space to a high-dimensional space that captures the underlying neural process, including articulatory commands and auditory and somatosensory feedback errors. In particular, we demonstrate a neurophysiological vocal source model that generates biomarkers of disease by modeling vocal source control. By using the fundamental frequency contour and a biophysical representation of the vocal source, we infer two neuromuscular time series whose coordination provides vocal features that are applied to depression and Parkinson’s disease as examples. These vocal source coordination features alone, on a single held vowel, outperform or are comparable to other feature sets and reflect a significant compression of the feature space.
    United States. Air Force (Contract No. FA8721-05-C-0002); United States. Air Force (Contract No. FA8702-15-D-0001)
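
    A minimal sketch of the kind of coordination feature such a model could yield, assuming a channel-delay correlation approach: the two inferred neuromuscular time series are embedded at several delays, and the eigenvalue spectrum of their correlation matrix serves as the feature vector. Delay values and signals below are illustrative, not the paper's parameters.

    import numpy as np

    def coordination_features(x, y, delays=range(0, 50, 5)):
        """Eigenvalue spectrum of a channel-delay correlation matrix built from
        two time series (e.g., two inferred neuromuscular signals)."""
        max_d = max(delays)
        channels = []
        for series in (x, y):
            for d in delays:
                seg = series[d:len(series) - max_d + d]        # equal-length delayed copy
                channels.append((seg - seg.mean()) / (seg.std() + 1e-12))
        corr = np.corrcoef(np.vstack(channels))                # correlations across delayed copies
        eigvals = np.linalg.eigvalsh(corr)[::-1]               # sorted, largest first
        return eigvals / eigvals.sum()                         # normalized spectrum as features

    # Two synthetic, weakly coupled signals standing in for the inferred series.
    t = np.linspace(0, 10, 2000)
    x = np.sin(2 * np.pi * 1.3 * t) + 0.1 * np.random.randn(t.size)
    y = np.sin(2 * np.pi * 1.3 * t + 0.4) + 0.1 * np.random.randn(t.size)
    features = coordination_features(x, y)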

    Learning from open source software projects to improve scientific review

    Peer-reviewed publications are the primary mechanism for sharing scientific results. The current peer-review process is, however, fraught with many problems that undermine the pace, validity, and credibility of science. We highlight five salient problems: (1) reviewers are expected to have comprehensive expertise; (2) reviewers do not have sufficient access to methods and materials to evaluate a study; (3) reviewers are neither identified nor acknowledged; (4) there is no measure of the quality of a review; and (5) reviews take a lot of time, and once submitted cannot evolve. We propose that these problems can be resolved by making the following changes to the review process. Distributing reviews to many reviewers would allow each reviewer to focus on portions of the article that reflect the reviewer's specialty or area of interest and place less of a burden on any one reviewer. Providing reviewers with materials and methods to perform a comprehensive evaluation would facilitate transparency, greater scrutiny, and replication of results. Acknowledging reviewers makes it possible to quantitatively assess reviewer contributions, which could be used to establish the impact of the reviewer in the scientific community. Quantifying review quality could help establish the importance of individual reviews and reviewers as well as the submitted article. Finally, we recommend expediting post-publication reviews and allowing the dialog to continue and flourish in a dynamic and interactive manner. We argue that these solutions can be implemented by adapting existing features from open-source software management and social networking technologies. We propose a model of an open, interactive review system that quantifies the significance of articles, the quality of reviews, and the reputation of reviewers.
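
    A minimal sketch of the quantification this model calls for, assuming a simple aggregation rule: each review accumulates community quality ratings, and a reviewer's reputation is the mean quality of their acknowledged reviews. Field names and the scoring rule are illustrative assumptions, not the authors' specification.

    from dataclasses import dataclass, field
    from statistics import mean

    @dataclass
    class Review:
        reviewer: str
        article_id: str
        quality_ratings: list = field(default_factory=list)   # community ratings of this review

        def quality(self):
            return mean(self.quality_ratings) if self.quality_ratings else 0.0

    def reviewer_reputation(reviews, reviewer):
        """Reputation as the mean quality of a reviewer's acknowledged reviews."""
        own = [r.quality() for r in reviews if r.reviewer == reviewer]
        return mean(own) if own else 0.0

    reviews = [
        Review("alice", "article-1", [4, 5, 5]),
        Review("alice", "article-2", [3, 4]),
        Review("bob", "article-1", [2, 3]),
    ]
    print(reviewer_reputation(reviews, "alice"))   # ~4.08, the mean of 4.67 and 3.5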

    Knowing what you know in brain segmentation using Bayesian deep neural networks

    In this paper, we describe a Bayesian deep neural network (DNN) for predicting FreeSurfer segmentations of structural MRI volumes, in minutes rather than hours. The network was trained and evaluated on a large dataset (n = 11,480), obtained by combining data from more than a hundred different sites, and also evaluated on another completely held-out dataset (n = 418). The network was trained using a novel spike-and-slab dropout-based variational inference approach. We show that, on these datasets, the proposed Bayesian DNN outperforms previously proposed methods, in terms of the similarity between the segmentation predictions and the FreeSurfer labels, and the usefulness of the estimated uncertainty of these predictions. In particular, we demonstrate that the prediction uncertainty of this network at each voxel is a good indicator of whether the network has made an error and that the uncertainty across the whole brain can predict the manual quality control ratings of a scan. The proposed Bayesian DNN method should be applicable to any new network architecture for addressing the segmentation problem.
    Comment: Submitted to Frontiers in Neuroinformatics.
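
    A minimal sketch of the uncertainty idea, using ordinary Monte Carlo dropout as a stand-in for the paper's spike-and-slab dropout variational inference (which is not reproduced here): keeping dropout active at inference and averaging repeated forward passes gives per-voxel predictive entropy, the quantity used to flag likely errors. The architecture and data are toy placeholders.

    import torch
    import torch.nn as nn

    class TinySegNet(nn.Module):
        """Toy 3D segmentation head; the architecture is illustrative only."""
        def __init__(self, n_classes=4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                nn.Dropout3d(p=0.2),                         # kept stochastic at test time
                nn.Conv3d(8, n_classes, 1),
            )

        def forward(self, x):
            return self.net(x)

    def mc_predict(model, volume, n_samples=20):
        model.train()                                        # keep dropout active for sampling
        with torch.no_grad():
            probs = torch.stack([model(volume).softmax(dim=1) for _ in range(n_samples)])
        mean_probs = probs.mean(dim=0)
        # Predictive entropy per voxel: high values flag voxels likely to be mislabeled.
        entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=1)
        return mean_probs.argmax(dim=1), entropy

    volume = torch.randn(1, 1, 16, 16, 16)                   # placeholder MRI patch
    labels, voxel_uncertainty = mc_predict(TinySegNet(), volume)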

    Brain Bases of Reading Fluency in Typical Reading and Impaired Fluency in Dyslexia

    Although the neural systems supporting single word reading are well studied, there are limited direct comparisons between typical and dyslexic readers of the neural correlates of reading fluency. Reading fluency deficits are a persistent behavioral marker of dyslexia into adulthood. The current study identified the neural correlates of fluent reading in typical and dyslexic adult readers, using sentences presented in a word-by-word format in which single words were presented sequentially at fixed rates. Sentences were presented at slow, medium, and fast rates, and participants were asked to decide whether each sentence did or did not make sense semantically. As presentation rates increased, participants became less accurate and slower at making judgments, with comprehension accuracy decreasing disproportionately for dyslexic readers. In-scanner performance on the sentence task correlated significantly with standardized clinical measures of both reading fluency and phonological awareness. Both typical readers and readers with dyslexia exhibited widespread, bilateral increases in activation that corresponded to increases in presentation rate. Typical readers exhibited significantly larger gains in activation as a function of faster presentation rates than readers with dyslexia in several areas, including left prefrontal and left superior temporal regions associated with semantic retrieval and semantic and phonological representations. Group differences were more extensive when behavioral differences between conditions were equated across groups. These findings suggest a brain basis for impaired reading fluency in dyslexia, specifically a failure of brain regions involved in semantic retrieval and semantic and phonological representations to become fully engaged for comprehension at rapid reading rates.
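
    A minimal sketch of the rate-dependence comparison described, under the assumption that regional activation is summarized per subject at each presentation rate: fit the slope of activation against rate for each group and compare the gains. Rates and activation values are illustrative placeholders.

    import numpy as np

    rates = np.array([1.0, 2.0, 3.0])          # slow, medium, fast presentation (assumed units)

    def mean_rate_slope(activation_per_subject):
        """Per-subject slope of activation vs. presentation rate, averaged over subjects."""
        slopes = [np.polyfit(rates, subj, 1)[0] for subj in activation_per_subject]
        return float(np.mean(slopes))

    typical = np.array([[0.2, 0.5, 0.9], [0.1, 0.4, 0.8]])   # activation at each rate
    dyslexic = np.array([[0.2, 0.3, 0.4], [0.1, 0.3, 0.3]])
    print(mean_rate_slope(typical), mean_rate_slope(dyslexic))  # larger gain expected in typical readers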

    Predicting Treatment Response in Social Anxiety Disorder From Functional Magnetic Resonance Imaging

    Context: Current behavioral measures poorly predict treatment outcome in social anxiety disorder (SAD). To our knowledge, this is the first study to examine neuroimaging-based treatment prediction in SAD. Objective: To measure brain activation in patients with SAD as a biomarker to predict subsequent response to cognitive behavioral therapy (CBT). Design: Functional magnetic resonance imaging (fMRI) data were collected prior to CBT intervention. Changes in clinical status were regressed on brain responses and tested for selectivity for social stimuli. Setting: Patients were treated with protocol-based CBT at anxiety disorder programs at Boston University or Massachusetts General Hospital and underwent neuroimaging data collection at Massachusetts Institute of Technology. Patients: Thirty-nine medication-free patients meeting DSM-IV criteria for the generalized subtype of SAD. Interventions: Brain responses to angry vs neutral faces or emotional vs neutral scenes were examined with fMRI prior to initiation of CBT. Main Outcome Measures: Whole-brain regression analyses with differential fMRI responses for angry vs neutral faces and changes in Liebowitz Social Anxiety Scale score as the treatment outcome measure. Results: Pretreatment responses significantly predicted subsequent treatment outcome of patients selectively for social stimuli and particularly in regions of higher-order visual cortex. Combining the brain measures with information on clinical severity accounted for more than 40% of the variance in treatment response and substantially exceeded predictions based on clinical measures at baseline. Prediction success was unaffected by testing for potential confounding factors such as depression severity at baseline. Conclusions: The results suggest that brain imaging can provide biomarkers that substantially improve predictions for the success of cognitive behavioral interventions and more generally suggest that such biomarkers may offer evidence-based, personalized medicine approaches for optimally selecting among treatment options for a patient.
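
    A minimal sketch of the prediction framework described, assuming simulated placeholders for the study's features: regress the change in Liebowitz Social Anxiety Scale score on pretreatment brain responses plus baseline clinical severity and cross-validate the variance explained.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_patients = 39
    brain = rng.normal(size=(n_patients, 5))       # e.g., angry-vs-neutral responses in 5 regions
    severity = rng.normal(size=(n_patients, 1))    # baseline clinical severity
    outcome = brain @ rng.normal(size=5) + 0.5 * severity[:, 0] + rng.normal(size=n_patients)

    features = np.hstack([brain, severity])
    r2_scores = cross_val_score(LinearRegression(), features, outcome, cv=5, scoring="r2")
    print(r2_scores.mean())                        # cross-validated variance explained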

    NeuroVault.org: a web-based repository for collecting and sharing unthresholded statistical maps of the human brain

    Here we present NeuroVault, a web-based repository that allows researchers to store, share, visualize, and decode statistical maps of the human brain. NeuroVault is easy to use and employs modern web technologies to provide informative visualization of data without the need to install additional software. In addition, it leverages the power of the Neurosynth database to provide cognitive decoding of deposited maps. The data are exposed through a public REST API enabling other services and tools to take advantage of it. NeuroVault is a new resource for researchers interested in conducting meta- and coactivation analyses.
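
    A minimal sketch of talking to the public REST API mentioned above, assuming the current paginated JSON layout under /api/collections/ (parameter and field names may differ across API versions).

    import requests

    BASE = "https://neurovault.org/api"

    def list_collections(limit=5):
        """Fetch a page of public collections from the NeuroVault API."""
        resp = requests.get(f"{BASE}/collections/", params={"limit": limit}, timeout=30)
        resp.raise_for_status()
        return resp.json()["results"]

    for coll in list_collections():
        print(coll["id"], coll.get("name"))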

    Situating the default-mode network along a principal gradient of macroscale cortical organization

    Understanding how the structure of cognition arises from the topographical organization of the cortex is a primary goal in neuroscience. Previous work has described local functional gradients extending from perceptual and motor regions to cortical areas representing more abstract functions, but an overarching framework for the association between structure and function is still lacking. Here, we show that the principal gradient revealed by the decomposition of connectivity data in humans and the macaque monkey is anchored by, at one end, regions serving primary sensory/motor functions and, at the other end, transmodal regions that, in humans, are known as the default-mode network (DMN). These DMN regions exhibit the greatest geodesic distance along the cortical surface from primary sensory/motor morphological landmarks, and are precisely equidistant from them. The principal gradient also provides an organizing spatial framework for multiple large-scale networks and characterizes a spectrum from unimodal to heteromodal activity in a functional meta-analysis. Together, these observations provide a characterization of the topographical organization of the cortex and indicate that the role of the DMN in cognition might arise from its position at one extreme of a hierarchy, allowing it to process transmodal information that is unrelated to immediate sensory input.
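
    A minimal sketch of extracting a principal gradient from a region-by-region connectivity matrix, assuming a simplified diffusion-map-style embedding (the paper's exact decomposition and its parameters are not reproduced): build an affinity between connectivity profiles, row-normalize it, and take the first non-trivial eigenvector.

    import numpy as np

    def principal_gradient(connectivity):
        """First non-trivial eigenvector of the row-normalized affinity of a connectivity matrix."""
        norms = np.linalg.norm(connectivity, axis=1, keepdims=True)
        affinity = (connectivity @ connectivity.T) / (norms * norms.T + 1e-12)   # cosine similarity
        affinity = np.clip(affinity, 0, None)
        P = affinity / affinity.sum(axis=1, keepdims=True)    # transition-like matrix
        eigvals, eigvecs = np.linalg.eig(P)
        order = np.argsort(-eigvals.real)
        # order[0] is the trivial constant eigenvector (eigenvalue 1); the next one is the gradient.
        return eigvecs[:, order[1]].real

    conn = np.abs(np.random.randn(100, 100))        # placeholder connectivity matrix
    gradient = principal_gradient(conn)             # one value per region, ordering the cortex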

    Data sharing in neuroimaging research

    Significant resources around the world have been invested in neuroimaging studies of brain function and disease. Easier access to this large body of work should have a profound impact on research in cognitive neuroscience and psychiatry, leading to advances in the diagnosis and treatment of psychiatric and neurological disease. A trend toward increased sharing of neuroimaging data has emerged in recent years. Nevertheless, a number of barriers continue to impede momentum. Many researchers and institutions remain uncertain about how to share data or lack the tools and expertise to participate in data sharing. The use of electronic data capture (EDC) methods for neuroimaging greatly simplifies the task of data collection and has the potential to help standardize many aspects of data sharing. We review here the motivations for sharing neuroimaging data, the current data sharing landscape, and the sociological or technical barriers that still need to be addressed. The INCF Task Force on Neuroimaging Datasharing, in conjunction with several collaborative groups around the world, has started work on several tools to ease and eventually automate the practice of data sharing. It is hoped that such tools will allow researchers to easily share raw, processed, and derived neuroimaging data, with appropriate metadata and provenance records, and will improve the reproducibility of neuroimaging studies. By providing seamless integration of data sharing and analysis tools within a commodity research environment, the Task Force seeks to identify and minimize barriers to data sharing in the field of neuroimaging.
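
    A minimal sketch of the kind of metadata and provenance record such tools could attach to shared data; the field names are illustrative assumptions, not a published INCF schema.

    shared_dataset = {
        "data": {
            "raw": "sub-01/anat/sub-01_T1w.nii.gz",
            "derived": "derivatives/freesurfer/sub-01/",
        },
        "metadata": {"modality": "T1w", "field_strength": "3T", "license": "CC0"},
        "provenance": [
            {"step": "acquisition", "site": "example-site", "date": "2013-01-15"},
            {"step": "segmentation", "tool": "FreeSurfer", "version": "5.3"},
        ],
    }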