Successful object encoding induces increased directed connectivity in presymptomatic early-onset Alzheimer's disease
Background: Recent studies report increases in neural activity in brain regions critical to episodic memory at preclinical stages of Alzheimer's disease (AD). Although electroencephalography (EEG) is widely used in AD studies, given its non-invasiveness and low cost, there is a need to translate findings obtained with other neuroimaging methods to EEG.
Objective: To examine how previous findings obtained with functional magnetic resonance imaging (fMRI) at the preclinical stage in presenilin-1 E280A mutation carriers could be assessed and extended using EEG and a connectivity approach.
Methods: EEG signals were acquired during resting and encoding in 30 cognitively normal young subjects from an autosomal-dominant early-onset AD kindred from Antioquia, Colombia. Brain regions previously reported as hyperactive were used for connectivity analysis.
Results: Mutation carriers exhibited increased connectivity in the analyzed regions. Among these, the right precuneus showed the largest changes in connectivity.
Conclusion: Increased connectivity in hyperactive cerebral regions is seen at the preclinical stage in individuals genetically determined to develop AD. The use of a connectivity approach with a widely available neuroimaging technique opens the possibility of extending the use of EEG for early detection of preclinical AD.
Disentangling causal webs in the brain using functional Magnetic Resonance Imaging: A review of current approaches
In the past two decades, functional Magnetic Resonance Imaging has been used
to relate neuronal network activity to cognitive processing and behaviour.
Recently this approach has been augmented by algorithms that allow us to infer
causal links between component populations of neuronal networks. Multiple
inference procedures have been proposed to approach this research question but
so far, each method has limitations when it comes to establishing whole-brain
connectivity patterns. In this work, we discuss eight ways to infer causality
in fMRI research: Bayesian Nets, Dynamical Causal Modelling, Granger Causality,
Likelihood Ratios, LiNGAM, Patel's Tau, Structural Equation Modelling, and
Transfer Entropy. We finish by formulating recommendations for future
directions in this area.
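Of the eight approaches listed, Granger causality is the easiest to illustrate compactly. The following is a minimal bivariate sketch using ordinary least squares (the function name and toy system are illustrative, not taken from any particular toolbox): y is said to Granger-cause x if adding y's past to an autoregressive model of x reduces the residual variance.

```python
import numpy as np

def granger_stat(x, y, lag=2):
    """Log-ratio of residual variances for predicting x with vs. without
    the past of y; values well above zero suggest y Granger-causes x."""
    n = len(x)
    target = x[lag:]
    own = [x[lag - k : n - k] for k in range(1, lag + 1)]    # x's own past
    other = [y[lag - k : n - k] for k in range(1, lag + 1)]  # y's past
    def rss(cols):
        X = np.column_stack(cols + [np.ones(n - lag)])
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        r = target - X @ beta
        return r @ r
    return np.log(rss(own) / rss(own + other))

# Toy check: y drives x with a one-step lag, but not vice versa.
rng = np.random.default_rng(0)
y = rng.standard_normal(3000)
x = np.zeros(3000)
for t in range(1, 3000):
    x[t] = 0.4 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.standard_normal()
```

In practice the log-ratio is turned into an F- or chi-squared test, and whole-brain application raises exactly the multivariate and conditioning issues the review discusses.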
Measuring information-transfer delays
In complex networks such as gene networks, traffic systems or brain circuits it is important to understand how long it takes for the different parts of the network to effectively influence one another. In the brain, for example, axonal delays between brain areas can amount to several tens of milliseconds, adding an intrinsic component to any timing-based processing of information. Inferring neural interaction delays is thus needed to interpret the information transfer revealed by any analysis of directed interactions across brain structures. However, a robust estimation of interaction delays from neural activity faces several challenges if modeling assumptions on interaction mechanisms are wrong or cannot be made. Here, we propose a robust estimator for neuronal interaction delays rooted in an information-theoretic framework, which allows a model-free exploration of interactions. In particular, we extend transfer entropy to account for delayed source-target interactions, while crucially retaining the conditioning on the embedded target state at the immediately previous time step. We prove that this particular extension is indeed guaranteed to identify interaction delays between two coupled systems and is the only relevant option in keeping with Wiener's principle of causality. We demonstrate the performance of our approach in detecting interaction delays on finite data by numerical simulations of stochastic and deterministic processes, as well as on local field potential recordings. We also show the ability of the extended transfer entropy to detect the presence of multiple delays, as well as feedback loops. While evaluated on neuroscience data, we expect the estimator to be useful in other fields dealing with network dynamics.
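The delay-sensitive extension described above can be sketched with a linear-Gaussian estimator; this is a simplification, since the estimator proposed in the text is model-free. The key points survive, though: the source term is taken at a candidate delay u, the target's immediately previous state stays in the conditioning set, and the interaction delay is recovered as the u that maximizes the estimate. All names and the toy AR system are illustrative assumptions.

```python
import numpy as np

def gaussian_te(x, y, u):
    """Linear-Gaussian transfer entropy from y to x at source delay u,
    i.e. I(X_t ; Y_{t-u} | X_{t-1}); the target's immediately previous
    state X_{t-1} remains in the conditioning set."""
    n = len(x)
    t = np.arange(u, n)
    tgt = x[t]
    def resid_var(cols):
        Z = np.column_stack(cols + [np.ones(len(t))])
        beta, *_ = np.linalg.lstsq(Z, tgt, rcond=None)
        return (tgt - Z @ beta).var()
    past = [x[t - 1]]
    return 0.5 * np.log(resid_var(past) / resid_var(past + [y[t - u]]))

def interaction_delay(x, y, max_u=10):
    """Scan candidate delays; the estimate peaks at the true delay."""
    return max(range(1, max_u + 1), key=lambda u: gaussian_te(x, y, u))

# Toy check: y influences x with a 3-step delay.
rng = np.random.default_rng(1)
y = rng.standard_normal(4000)
x = np.zeros(4000)
for t in range(3, 4000):
    x[t] = 0.5 * x[t - 1] + 0.7 * y[t - 3] + 0.2 * rng.standard_normal()
```

A model-free version would replace the residual-variance ratio with a nearest-neighbour or binned estimate of the same conditional mutual information.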
Information flow between resting state networks
The resting brain dynamics self-organizes into a finite number of correlated
patterns known as resting state networks (RSNs). It is well known that
techniques like independent component analysis can separate the brain activity
at rest to provide such RSNs, but the specific pattern of interaction between
RSNs is not yet fully understood. To this aim, we propose here a novel method
to compute the information flow (IF) between different RSNs from resting state
magnetic resonance imaging. After haemodynamic response function blind
deconvolution of all voxel signals, and under the hypothesis that RSNs define
regions of interest, our method first uses principal component analysis to
reduce dimensionality in each RSN and then computes the IF (estimated here in
terms of transfer entropy) between the different RSNs while systematically increasing k
(the number of principal components used in the calculation). When k = 1, this
method is equivalent to computing IF using the average of all voxel activities
in each RSN. For k greater than one our method calculates the k-multivariate IF
between the different RSNs. We find that the average IF among RSNs is
dimension-dependent, increasing from k = 1 (i.e., the average voxel activity)
up to a maximum at k = 5, finally decaying to zero for k greater than
10. This suggests that a small number of components (close to 5) is sufficient
to describe the IF pattern between RSNs. Our method - addressing differences in
IF between RSNs for any generic data - can be used for group comparison in
health or disease. To illustrate this, we have calculated the inter-RSN IF in a
dataset of Alzheimer's disease (AD), finding that the most significant
differences between AD and controls occurred for k = 2, in addition to AD
showing increased IF with respect to controls.
Comment: 47 pages, 5 figures, 4 tables, 3 supplementary figures. Accepted for publication in Brain Connectivity in its current form.
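The pipeline in this abstract (blind deconvolution aside) can be illustrated as follows, assuming a linear-Gaussian TE estimator with history length 1; function names and the synthetic "RSN" data are hypothetical, used only to show the PCA-then-multivariate-IF structure and the scan over k.

```python
import numpy as np

def pca_components(voxels, k):
    """Top-k principal component time courses of a (voxels x time) array."""
    X = voxels - voxels.mean(axis=1, keepdims=True)
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    return s[:k, None] * Vt[:k]                      # shape (k, time)

def mv_gaussian_te(src, tgt):
    """Multivariate linear-Gaussian TE from src to tgt components (k x T
    each), history length 1: half the log det-ratio of the residual
    covariances with and without the source's past."""
    Y = tgt[:, 1:].T                                 # target present
    past = tgt[:, :-1].T                             # target past
    full = np.column_stack([past, src[:, :-1].T])    # plus source past
    def resid_logdet(Z):
        Z = np.column_stack([Z, np.ones(len(Y))])
        B, *_ = np.linalg.lstsq(Z, Y, rcond=None)
        R = Y - Z @ B
        return np.linalg.slogdet(np.atleast_2d(np.cov(R.T)))[1]
    return 0.5 * (resid_logdet(past) - resid_logdet(full))

def te_vs_k(rsn_a, rsn_b, ks=(1, 2, 3, 4, 5)):
    """The abstract's k-scan: IF from RSN a to RSN b for each k."""
    return {k: mv_gaussian_te(pca_components(rsn_a, k),
                              pca_components(rsn_b, k)) for k in ks}

# Toy check: two latent sources in "RSN A" drive "RSN B" at lag 1.
rng = np.random.default_rng(2)
T = 3000
za = np.zeros((2, T))
zb = np.zeros((2, T))
for t in range(1, T):
    za[:, t] = 0.5 * za[:, t - 1] + rng.standard_normal(2)
    zb[:, t] = 0.6 * zb[:, t - 1] + 0.7 * za[:, t - 1] + 0.2 * rng.standard_normal(2)
rsn_a = rng.standard_normal((20, 2)) @ za + 0.1 * rng.standard_normal((20, T))
rsn_b = rng.standard_normal((20, 2)) @ zb + 0.1 * rng.standard_normal((20, T))
```

At k = 1 this reduces to the IF between the two mean-like signals, exactly as the abstract notes; larger k makes the measure multivariate over the leading components.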
Information decomposition of multichannel EMG to map functional interactions in the distributed motor system
The central nervous system needs to coordinate multiple muscles during postural control. Functional coordination is established through the neural circuitry that interconnects different muscles. Here we used multivariate information decomposition of multichannel EMG acquired from 14 healthy participants during postural tasks to investigate the neural interactions between muscles. A set of information measures was estimated from an instantaneous linear regression model and a time-lagged vector autoregressive (VAR) model fitted to the EMG envelopes of 36 muscles. We used network analysis to quantify the structure of functional interactions between muscles and compared these across experimental conditions. Conditional mutual information and transfer entropy revealed sparse networks dominated by local connections between muscles. We observed significant changes in muscle networks across postural tasks, localized to the muscles involved in performing those tasks. Information decomposition revealed distinct patterns in task-related changes: unimanual and bimanual pointing were associated with reduced transfer to the pectoralis major muscles but an increase in total information compared to no pointing, while postural instability resulted in increased information, information transfer, and information storage in the adductor longus muscles compared to normal stability. These findings show robust patterns of directed interactions between muscles that are task-dependent and can be assessed from surface EMG recorded during static postural tasks. We discuss directed muscle networks in terms of the neural circuitry involved in generating muscle activity and suggest that task-related effects may reflect gain modulations of spinal reflex pathways.
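A minimal sketch of the network-construction step: pairwise linear-Gaussian transfer entropy between channel envelopes, thresholded into a directed adjacency matrix. This is an illustrative stand-in, not the paper's VAR-based pipeline; the lag-1 history, the fixed threshold, and the toy three-channel data are all assumptions.

```python
import numpy as np

def te_pair(x, y):
    """Linear-Gaussian TE y -> x with lag-1 history."""
    tgt, xp, yp = x[1:], x[:-1], y[:-1]
    def rv(cols):
        Z = np.column_stack(cols + [np.ones(len(tgt))])
        b, *_ = np.linalg.lstsq(Z, tgt, rcond=None)
        return (tgt - Z @ b).var()
    return 0.5 * np.log(rv([xp]) / rv([xp, yp]))

def te_network(envelopes, thresh=0.05):
    """Directed adjacency over channels: edge j -> i where TE(j -> i)
    exceeds an (arbitrary, illustrative) threshold."""
    m = envelopes.shape[0]
    A = np.zeros((m, m))
    for i in range(m):           # target channel
        for j in range(m):       # source channel
            if i != j:
                A[j, i] = te_pair(envelopes[i], envelopes[j])
    return A > thresh

# Toy check: channel 0 drives channel 1; channel 2 is independent noise.
rng = np.random.default_rng(4)
T = 3000
env = rng.standard_normal((3, T)) * 0.1
for t in range(1, T):
    env[0, t] += 0.5 * env[0, t - 1]
    env[1, t] += 0.5 * env[1, t - 1] + 0.6 * env[0, t - 1]
```

The resulting boolean matrix can then be fed to standard graph measures (density, degree, modularity) to compare conditions, as the study does.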
Information flow through a model of the C. elegans klinotaxis circuit
Understanding how information about external stimuli is transformed into
behavior is one of the central goals of neuroscience. Here we characterize the
information flow through a complete sensorimotor circuit: from stimulus, to
sensory neurons, to interneurons, to motor neurons, to muscles, to motion.
Specifically, we apply a recently developed framework for quantifying
information flow to a previously published ensemble of models of salt
klinotaxis in the nematode worm C. elegans. The models are grounded in the
neuroanatomy and currently known neurophysiology of the worm. The unknown model
parameters were optimized to reproduce the worm's behavior. Information flow
analysis reveals several key principles underlying how the models operate: (1)
Interneuron class AIY is responsible for integrating information about positive
and negative changes in concentration, and exhibits a strong left/right
information asymmetry. (2) Gap junctions play a crucial role in the transfer of
information responsible for the information symmetry observed in interneuron
class AIZ. (3) Neck motor neuron class SMB implements an information gating
mechanism that underlies the circuit's state-dependent response. (4) The neck
carries a non-uniform distribution of information about changes in
concentration. Thus, not all directions of movement are equally informative. Each of these findings
corresponds to an experimental prediction that could be tested in the worm to
greatly refine our understanding of the neural circuit underlying klinotaxis.
Information flow analysis also allows us to explore how information flow
relates to underlying electrophysiology. Despite large variations in the neural
parameters of individual circuits, the overall information flow architecture
of the circuit is remarkably consistent across the ensemble, suggesting that
information flow analysis captures general principles of operation for the
klinotaxis circuit.
Bits from Biology for Computational Intelligence
Computational intelligence is broadly defined as biologically-inspired
computing. Usually, inspiration is drawn from neural systems. This article
shows how to analyze neural systems using information theory to obtain
constraints that help identify the algorithms run by such systems and the
information they represent. Algorithms and representations identified
information-theoretically may then guide the design of biologically inspired
computing systems (BICS). The material covered includes the necessary
introduction to information theory and the estimation of information theoretic
quantities from neural data. We then show how to analyze the information
encoded in a system about its environment, and also discuss recent
methodological developments on the question of how much information each agent
carries about the environment either uniquely, redundantly, or
synergistically together with others. Last, we introduce the framework of local
information dynamics, where information processing is decomposed into component
processes of information storage, transfer, and modification -- locally in
space and time. We close by discussing example applications of these measures
to neural data and other complex systems.
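The local information dynamics framework mentioned in closing can be illustrated for the transfer component. For binary series, a plug-in estimate assigns a local transfer entropy value to every time step; the average of the local values recovers the ordinary transfer entropy, and individual values can be negative at "misinformative" moments. The names and toy data below are illustrative assumptions, not the chapter's estimator.

```python
import numpy as np
from collections import Counter

def local_te(src, tgt):
    """Local transfer entropy for binary series with lag-1 histories:
    log2 p(x_t | x_{t-1}, y_{t-1}) - log2 p(x_t | x_{t-1}),
    using plug-in (count-based) probability estimates."""
    triples = list(zip(tgt[1:], tgt[:-1], src[:-1]))
    c_full = Counter(triples)                               # (x_t, x_past, y_past)
    c_cond = Counter((xp, yp) for _, xp, yp in triples)     # (x_past, y_past)
    c_pair = Counter((xt, xp) for xt, xp, _ in triples)     # (x_t, x_past)
    c_past = Counter(xp for _, xp, _ in triples)            # (x_past,)
    out = []
    for xt, xp, yp in triples:
        p_full = c_full[(xt, xp, yp)] / c_cond[(xp, yp)]
        p_self = c_pair[(xt, xp)] / c_past[xp]
        out.append(np.log2(p_full / p_self))
    return np.array(out)

# Toy check: tgt copies src with a one-step delay and 10% bit flips,
# so the average TE should approach 1 - H(0.1), roughly half a bit.
rng = np.random.default_rng(3)
src = rng.integers(0, 2, 10000)
noise = (rng.random(10000) < 0.1).astype(int)
tgt = np.concatenate([[0], src[:-1] ^ noise[1:]])
```

The flipped bits are exactly the moments where the local value goes negative: there the source misinforms the observer about the target's next state.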