Methods for analysis of brain connectivity: An IFCN-sponsored review
The goal of this paper is to examine existing methods to study the "Human Brain Connectome", with a specific focus on neurophysiological ones. In recent years, a new approach has been developed to evaluate the anatomical and functional organization of the human brain: the aim of this promising multimodal effort is to identify and classify neuronal networks with a number of neurobiologically meaningful and easily computable measures to create its connectome. By defining anatomical and functional connections of brain regions on the same map through an integrated approach, comprising both modern neurophysiological and neuroimaging (i.e., flow/metabolic) brain-mapping techniques, network analysis becomes a powerful tool for exploring structural-functional connectivity mechanisms and for revealing etiological relationships that link connectivity abnormalities to neuropsychiatric disorders. Following a recent IFCN-endorsed meeting, a panel of international experts was selected to produce this state-of-the-art document, which covers the available knowledge on anatomical and functional connectivity, including the most commonly used structural and functional MRI, EEG, MEG and non-invasive brain stimulation techniques, and measures of local and global brain connectivity. © 2019 Published by Elsevier B.V. on behalf of the International Federation of Clinical Neurophysiology.
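As a rough illustration of the kind of measures such a review covers, the sketch below computes one local measure (clustering coefficient) and one global measure (efficiency) on a toy connectivity matrix. The random weights, the 0.6 threshold, and the use of NetworkX are illustrative assumptions, not methods prescribed by the paper.

```python
# A minimal sketch, assuming a toy symmetric connectivity matrix;
# none of these values come from the review itself.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_regions = 20
weights = rng.random((n_regions, n_regions))
weights = (weights + weights.T) / 2          # symmetrize into a toy connectivity matrix
np.fill_diagonal(weights, 0)

adjacency = (weights > 0.6).astype(int)      # arbitrary threshold to binarize edges
graph = nx.from_numpy_array(adjacency)

local_clustering = nx.clustering(graph)          # local measure, one value per region
global_efficiency = nx.global_efficiency(graph)  # global integration measure
print(f"mean clustering: {np.mean(list(local_clustering.values())):.3f}")
print(f"global efficiency: {global_efficiency:.3f}")
```

Thresholding and edge-definition choices strongly affect such measures, which is one reason the review emphasizes neurobiologically meaningful yet easily computable metrics.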
Is the Homunculus "Aware" of Sensory Adaptation?
Neural activity and perception are both affected by sensory history. The work presented here explores the relationship between the physiological effects of adaptation and their perceptual consequences. Perception is modeled as arising from an encoder-decoder cascade, in which the encoder is defined by the probabilistic response of a population of neurons, and the decoder transforms this population activity into a perceptual estimate. Adaptation is assumed to produce changes in the encoder, and we examine the conditions under which the decoder behavior is consistent with observed perceptual effects in terms of both bias and discriminability. We show that for all decoders, discriminability is bounded from below by the inverse Fisher information. Estimation bias, on the other hand, can arise for a variety of different reasons and can range from zero to substantial. We specifically examine biases that arise when the decoder is fixed, "unaware" of the changes in the encoding population (as opposed to "aware" of the adaptation and changing accordingly). We simulate the effects of adaptation on two well-studied sensory attributes, motion direction and contrast, assuming a gain change description of encoder adaptation. Although we cannot uniquely constrain the source of decoder bias, we find for both motion and contrast that an "unaware" decoder that maximizes the likelihood of the percept given by the preadaptation encoder leads to predictions that are consistent with behavioral data. This model implies that adaptation-induced biases arise as a result of temporary suboptimality of the decoder.
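A minimal simulation of the "unaware" decoder idea is sketched below, under assumed tuning curves, gain profile, and stimulus values (none of these details come from the paper): adaptation scales encoder gains, while a maximum-likelihood decoder keeps using the pre-adaptation tuning.

```python
# A minimal sketch, not the authors' code: gain-change adaptation in the
# encoder, with an "unaware" ML decoder built on pre-adaptation tuning.
import numpy as np

prefs = np.linspace(-180, 180, 48, endpoint=False)   # preferred directions (deg)

def tuning(stim, gains):
    """Von Mises-like direction tuning curves scaled by per-neuron gains."""
    d = np.deg2rad(stim - prefs)
    return gains * 20.0 * np.exp(2.0 * (np.cos(d) - 1.0)) + 0.5

# Adaptation at 0 deg: suppress the gain of neurons tuned near the adaptor.
adapted_gains = 1.0 - 0.5 * np.exp(-(prefs / 40.0) ** 2)

rng = np.random.default_rng(1)
stim_true = 30.0                                     # test direction (deg)
spikes = rng.poisson(tuning(stim_true, adapted_gains))  # post-adaptation encoder

# "Unaware" decoder: ML estimate assuming the pre-adaptation encoder (gain 1),
# using the Poisson log-likelihood up to a stimulus-independent constant.
candidates = np.linspace(-180, 180, 721)
loglik = [np.sum(spikes * np.log(tuning(s, 1.0)) - tuning(s, 1.0))
          for s in candidates]
estimate = candidates[int(np.argmax(loglik))]
print(f"true: {stim_true:.1f} deg, unaware ML estimate: {estimate:.1f} deg")
```

With the gain dip centered on the adaptor, estimates of nearby test directions are repelled away from it, the qualitative bias pattern the abstract attributes to the unaware decoder.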
Open science, communal culture, and women's participation in the movement to improve science
Science is undergoing rapid change, with the movement to improve science focused largely on reproducibility/replicability and open science practices. This moment of change, in which science turns inward to examine its methods and practices, provides an opportunity to address its historic lack of diversity and noninclusive culture. Through network modeling and semantic analysis, we provide an initial exploration of the structure, cultural frames, and women's participation in the open science and reproducibility literatures (n = 2,926 articles and conference proceedings). Network analyses suggest that the open science and reproducibility literatures are emerging relatively independently of each other, sharing few common papers or authors. We next examine whether the literatures differentially incorporate collaborative, prosocial ideals that are known to engage members of underrepresented groups more than independent, winner-takes-all approaches. We find that open science has a more connected, collaborative structure than does reproducibility. Semantic analyses of paper abstracts reveal that these literatures have adopted different cultural frames: open science includes more explicitly communal and prosocial language than does reproducibility. Finally, consistent with literature suggesting the diversity benefits of communal and prosocial purposes, we find that women publish more frequently in high-status author positions (first or last) within open science (vs. reproducibility). This finding is further patterned by team size and time: women are more represented in larger teams within reproducibility, and women's participation is increasing in open science over time and decreasing in reproducibility. We conclude with actionable suggestions for cultivating a more prosocial and diverse culture of science.
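As a toy illustration of the two analyses described, author overlap between literatures and communal-language frequency in abstracts, the sketch below uses an invented two-paper corpus and word list standing in for the real data and dictionaries.

```python
# A minimal sketch with invented data; the real study used 2,926 documents
# and validated semantic dictionaries, not this toy word list.
open_science = [
    {"authors": {"a", "b"}, "abstract": "we share open collaborative tools"},
    {"authors": {"c"}, "abstract": "community driven open data"},
]
reproducibility = [
    {"authors": {"d"}, "abstract": "independent replication of effects"},
]

# Author overlap between the two literatures (few shared authors = independence).
authors_os = set().union(*(p["authors"] for p in open_science))
authors_rp = set().union(*(p["authors"] for p in reproducibility))
print("shared authors:", authors_os & authors_rp)

# Rate of communal/prosocial language per corpus.
communal = {"share", "collaborative", "community", "together"}

def communal_rate(papers):
    words = [w for p in papers for w in p["abstract"].split()]
    return sum(w in communal for w in words) / len(words)

print(f"open science: {communal_rate(open_science):.2f}, "
      f"reproducibility: {communal_rate(reproducibility):.2f}")
```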
Probabilistic Computation in Human Perception under Variability in Encoding Precision
A key function of the brain is to interpret noisy sensory information. To do so optimally, observers must, in many tasks, take into account knowledge of the precision with which stimuli are encoded. In an orientation change detection task, we find that encoding precision depends not only on an experimentally controlled reliability parameter (shape) but also exhibits additional variability. In spite of this variability in precision, human subjects seem to take precision into account near-optimally on a trial-to-trial and item-to-item basis. Our results offer a new conceptualization of the encoding of sensory information and highlight the brain's remarkable ability to incorporate knowledge of uncertainty during complex perceptual decision-making.
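A sketch of the precision-weighted change-detection rule implied by near-optimal use of variable precision follows; the generative assumptions (Gaussian noise, gamma-distributed precision, Gaussian change size) are illustrative, not the paper's exact model.

```python
# A minimal sketch of an observer that weights evidence by per-item precision.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n_items, p_change, change_sd = 4, 0.5, 20.0           # illustrative parameters

precision = rng.gamma(shape=3.0, scale=0.002, size=n_items)  # varies per item
sigma = 1.0 / np.sqrt(precision)                             # noise sd per item

theta1 = rng.uniform(-90, 90, n_items)                # orientations, display 1
theta2 = theta1.copy()
changed = rng.random() < p_change
if changed:
    theta2[rng.integers(n_items)] += rng.normal(0, change_sd)  # one item changes

x1 = theta1 + rng.normal(0, sigma)                    # noisy measurements
x2 = theta2 + rng.normal(0, sigma)

# Per-item likelihood ratio for "this item changed", weighting the measured
# difference by that item's precision; one item changes, uniformly at random.
d = x2 - x1
lr = norm.pdf(d, 0, np.sqrt(2 / precision + change_sd ** 2)) \
   / norm.pdf(d, 0, np.sqrt(2 / precision))
posterior_odds = (p_change / (1 - p_change)) * lr.mean()
print(f"changed: {changed}, observer reports change: {posterior_odds > 1}")
```

The key contrast with a fixed-precision observer is the `precision` array inside the likelihood ratio: a large measured difference on a low-precision item contributes far less evidence than the same difference on a high-precision item.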
Building connectomes using diffusion MRI: why, how and but
Why has diffusion MRI become a principal modality for mapping connectomes in vivo? How do different image acquisition parameters, fiber tracking algorithms and other methodological choices affect connectome estimation? What are the main factors that dictate the success and failure of connectome reconstruction? These are some of the key questions that we aim to address in this review. We provide an overview of the key methods that can be used to estimate the nodes and edges of macroscale connectomes, and we discuss open problems and inherent limitations. We argue that diffusion MRI-based connectome mapping methods are still in their infancy and caution against blind application of deep white matter tractography due to the challenges inherent to connectome reconstruction. We review a number of studies that provide evidence of useful microstructural and network properties that can be extracted in various independent and biologically relevant contexts. Finally, we highlight some of the key deficiencies of current macroscale connectome mapping methodologies and motivate future developments.
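A bare-bones sketch of the connectome-construction step the review scrutinizes: streamline endpoints are mapped to parcels, and pairwise streamline counts become edge weights. The toy streamlines and the coarse grid standing in for an atlas lookup are assumptions; real pipelines use a registered parcellation image and tools such as MRtrix or DIPY.

```python
# A minimal sketch of "endpoints -> parcels -> edge weights"; every value
# here is invented for illustration.
import numpy as np

grid, extent = 4, 160.0                     # coarse 4x4x4 stand-in "parcellation"
n_parcels = grid ** 3
connectome = np.zeros((n_parcels, n_parcels))

def parcel_of(point):
    """Stand-in for an atlas lookup: bin an xyz endpoint into the coarse grid."""
    idx = np.clip(((point + extent / 2) / extent * grid).astype(int), 0, grid - 1)
    return int(idx[0] * grid * grid + idx[1] * grid + idx[2])

rng = np.random.default_rng(3)
streamlines = [rng.normal(size=(50, 3)) * 40 for _ in range(1000)]  # toy tracts

for sl in streamlines:
    i, j = parcel_of(sl[0]), parcel_of(sl[-1])   # endpoints define the edge
    if i != j:
        connectome[i, j] += 1                    # streamline count as edge weight
        connectome[j, i] += 1

print(f"nonzero edges: {np.count_nonzero(np.triu(connectome, 1))}")
```

Even this toy version exposes the review's point: node definition (the parcellation), edge definition (counts vs. other weights), and tracking choices each change the resulting matrix.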
brainlife.io: a decentralized and open-source cloud platform to support neuroscience research
Neuroscience is advancing standardization and tool development to support rigor and transparency. Consequently, data pipeline complexity has increased, hindering FAIR (findable, accessible, interoperable and reusable) access. brainlife.io was developed to democratize neuroimaging research. The platform provides data standardization, management, visualization and processing, and automatically tracks the provenance history of thousands of data objects. Here, brainlife.io is described and evaluated for validity, reliability, reproducibility, replicability and scientific utility using four data modalities and 3,200 participants.
Developing and Using Reference Datasets to Support Reproducible, Big Data Neuroscience
"Advancements in data collection in neuroimaging have ushered in an âAge of Big Dataâ in neuroscience(Fair et al., 2021; Poldrack & Gorgolewski, 2014; Webb-Vargas et al., 2017). With the growing size of neuroimaging datasets(Alexander et al., 2017; Casey et al., 2018; Sudlow et al., 2015), and the continued persistence of the Replicability Crisis in neuroscience(Tackett et al., 2019), data quality assurance becomes a challenge requiring new approaches for quality assurance at scale.
Traditional methods for QA do not scale well. More specifically, the gold standard for QA requires a combination of visual inspection of each individual data derivative and complex reports that take expertise and time to interpret (fMRIPrep, FreeSurfer, QSIPrep, Fibr; Cieslak et al., 2021; Dale et al., 1999; Esteban et al., 2019; Jenkinson et al., 2012; Richie-Halford et al., 2022). Some attempts have been made to approach this at scale (Richie-Halford et al., 2022); however, few approaches exist to bridge the gap between community-based visual inspection and technical reports that require expertise.
To address this gap, we propose a data-driven approach that uses the natural statistics and variability of large datasets to provide reference distributions against which new values can be compared. To do this, we processed over 2,000 individual brains from three large-scale, open datasets (PING (Jernigan et al., 2016), HCP (Van Essen et al., 2012), CAMCAN (Shafto et al., 2014)) on TACC supercomputers, across multiple imaging modalities and statistical brain properties. For each brain property and dataset, distributions were computed, statistical outliers were removed, and the cleaned distributions were released via brainlife.io (Avesani et al., 2019). The goal of this work is to provide the greater community with tools to perform efficient, automated, data-driven quality assurance, ultimately increasing the value and scalability of large datasets processed on supercomputers.
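A minimal sketch of the reference-distribution QA idea described above: trim outliers from a large reference sample of a brain property, then flag new subjects against the cleaned distribution. The values, the Tukey fence, and the z-score cutoff are illustrative assumptions, not brainlife.io's released implementation.

```python
# A minimal sketch with simulated reference data; real references come from
# the processed PING/HCP/CAMCAN distributions.
import numpy as np

rng = np.random.default_rng(4)
reference = rng.normal(loc=0.45, scale=0.03, size=2000)  # e.g., mean FA values

# Trim statistical outliers with a Tukey fence to form the cleaned reference.
q1, q3 = np.percentile(reference, [25, 75])
iqr = q3 - q1
cleaned = reference[(reference >= q1 - 1.5 * iqr) &
                    (reference <= q3 + 1.5 * iqr)]

def qa_flag(value, ref, z_cut=3.0):
    """Flag a new subject's value if it is a z-score outlier vs. the reference."""
    z = (value - ref.mean()) / ref.std()
    return abs(z) > z_cut, z

flag, z = qa_flag(0.61, cleaned)
print(f"subject value 0.61: z = {z:.2f}, flagged = {flag}")
```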