
    Information flow between resting state networks

    Resting brain dynamics self-organize into a finite number of correlated patterns known as resting state networks (RSNs). Techniques such as independent component analysis are well known to separate brain activity at rest into such RSNs, but the specific pattern of interaction between RSNs is not yet fully understood. To this end, we propose a novel method to compute the information flow (IF) between different RSNs from resting state magnetic resonance imaging. After blind deconvolution of the haemodynamic response function from all voxel signals, and under the hypothesis that RSNs define regions of interest, our method first uses principal component analysis to reduce the dimensionality of each RSN and then computes the IF (estimated here in terms of transfer entropy) between the different RSNs while systematically increasing k, the number of principal components used in the calculation. When k = 1, this method is equivalent to computing IF using the average of all voxel activities in each RSN; for k greater than one, it calculates the k-multivariate IF between the different RSNs. We find that the average IF among RSNs is dimension-dependent: it increases from k = 1 (i.e., the average voxel activity) to a maximum at k = 5 and then decays to zero for k greater than 10. This suggests that a small number of components (close to 5) is sufficient to describe the IF pattern between RSNs. Our method, which addresses differences in IF between RSNs for generic data, can be used for group comparison in health and disease. To illustrate this, we calculated the inter-RSN IF in an Alzheimer's disease (AD) dataset and found that the most significant differences between AD patients and controls occurred for k = 2, with AD showing increased IF with respect to controls.

    Comment: 47 pages, 5 figures, 4 tables, 3 supplementary figures. Accepted for publication in Brain Connectivity in its current form.
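    The pipeline described above (per-RSN PCA followed by transfer entropy swept over k) can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a Gaussian (linear) transfer entropy estimator, equal to half the Granger causality, in place of whatever estimator the authors actually use; the function names reduce_rsn, gaussian_te, and mean_if are hypothetical, and the deconvolution step is taken as already done.

        import numpy as np
        from sklearn.decomposition import PCA

        def reduce_rsn(voxels, k):
            """voxels: (T, n_voxels) array of one RSN's deconvolved signals.
            Returns the (T, k) principal-component time series; k = 1 is close
            to using the RSN-average signal, the baseline case in the paper."""
            return PCA(n_components=k).fit_transform(voxels)

        def gaussian_te(src, dst, lag=1):
            """Transfer entropy src -> dst under a joint-Gaussian assumption:
            half the log-ratio of residual covariance determinants, with and
            without the source's past among the regressors."""
            Y = dst[lag:] - dst[lag:].mean(axis=0)
            Yp = dst[:-lag] - dst[:-lag].mean(axis=0)
            Xp = src[:-lag] - src[:-lag].mean(axis=0)

            def resid_cov(target, preds):
                beta, *_ = np.linalg.lstsq(preds, target, rcond=None)
                resid = target - preds @ beta
                return np.atleast_2d(np.cov(resid, rowvar=False))

            return 0.5 * np.log(np.linalg.det(resid_cov(Y, Yp))
                                / np.linalg.det(resid_cov(Y, np.hstack([Yp, Xp]))))

        def mean_if(rsn_voxels, k, lag=1):
            """Average IF over all ordered RSN pairs at dimensionality k."""
            comps = [reduce_rsn(v, k) for v in rsn_voxels]
            return np.mean([gaussian_te(a, b, lag)
                            for i, a in enumerate(comps)
                            for j, b in enumerate(comps) if i != j])

    Sweeping mean_if over k = 1, 2, ... on real data would produce the kind of dimension-dependence curve the abstract describes.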

    Synchronisation effects on the behavioural performance and information dynamics of a simulated minimally cognitive robotic agent

    Oscillatory activity is ubiquitous in nervous systems, and there is solid evidence that synchronisation mechanisms underpin cognitive processes. Nevertheless, its informational content and relationship with behaviour are still to be fully understood. In addition, cognitive systems cannot be properly appreciated without taking brain–body–environment interactions into account. In this paper, we develop a model based on the Kuramoto model of coupled phase oscillators to explore the role of neural synchronisation in the performance of a simulated robotic agent in two different minimally cognitive tasks. We show that there is a statistically significant difference in performance and evolvability depending on the synchronisation regime of the network. In both tasks, a combination of information-flow and dynamical analyses shows that networks with a definite, but not too strong, propensity for synchronisation are better able to reconfigure, to organise themselves functionally, and to adapt to different behavioural conditions. The results highlight the asymmetry of information flow and its behavioural correspondence. Importantly, they also show that neural synchronisation dynamics, when suitably flexible and reconfigurable, can generate minimally cognitive embodied behaviour.
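    The Kuramoto model at the heart of this work has a compact form: dtheta_i/dt = omega_i + (K/N) * sum_j A_ij * sin(theta_j - theta_i). Below is a minimal sketch of the phase update and of the order parameter r that quantifies synchrony; it is not the paper's agent controller, and all parameter values are illustrative.

        import numpy as np

        def kuramoto_step(theta, omega, K, A, dt=0.01):
            """One Euler step of the Kuramoto model:
            dtheta_i/dt = omega_i + (K/N) * sum_j A_ij * sin(theta_j - theta_i)."""
            N = len(theta)
            coupling = (K / N) * (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
            return theta + dt * (omega + coupling)

        def order_parameter(theta):
            """Degree of synchrony r in [0, 1]; r near 1 means phase locking."""
            return np.abs(np.exp(1j * theta).mean())

        rng = np.random.default_rng(0)
        N = 8
        theta = rng.uniform(0, 2 * np.pi, N)    # initial phases
        omega = rng.normal(1.0, 0.1, N)         # natural frequencies
        A = np.ones((N, N)) - np.eye(N)         # all-to-all coupling
        for _ in range(5000):
            theta = kuramoto_step(theta, omega, K=2.0, A=A)
        print(f"order parameter r = {order_parameter(theta):.3f}")

    Varying the coupling strength K moves the network between incoherent and synchronised regimes, which is the control knob a synchronisation-regime comparison of this kind relies on.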

    Information decomposition of multichannel EMG to map functional interactions in the distributed motor system

    The central nervous system needs to coordinate multiple muscles during postural control. Functional coordination is established through the neural circuitry that interconnects different muscles. Here we used multivariate information decomposition of multichannel EMG acquired from 14 healthy participants during postural tasks to investigate the neural interactions between muscles. A set of information measures was estimated from an instantaneous linear regression model and a time-lagged vector autoregressive (VAR) model fitted to the EMG envelopes of 36 muscles. We used network analysis to quantify the structure of functional interactions between muscles and compared it across experimental conditions. Conditional mutual information and transfer entropy revealed sparse networks dominated by local connections between muscles. We observed significant changes in muscle networks across postural tasks, localised to the muscles involved in performing those tasks. Information decomposition revealed distinct patterns in task-related changes: unimanual and bimanual pointing were associated with reduced transfer to the pectoralis major muscles but an increase in total information compared with no pointing, while postural instability resulted in increased information, information transfer, and information storage in the adductor longus muscles compared with normal stability. These findings show robust patterns of directed interactions between muscles that are task-dependent and can be assessed from surface EMG recorded during static postural tasks. We discuss directed muscle networks in terms of the neural circuitry involved in generating muscle activity and suggest that task-related effects may reflect gain modulations of spinal reflex pathways.
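    The instantaneous (undirected) part of such an analysis can be sketched under a Gaussian assumption, where the conditional mutual information between two muscles given all the others follows directly from the precision matrix of the envelope covariance. This is a sketch, not the paper's code; gaussian_cmi_matrix is a hypothetical name, and the directed, time-lagged measures (transfer entropy, information storage) would instead come from the fitted VAR model, along the lines of the Gaussian transfer entropy sketch given earlier in this list.

        import numpy as np

        def gaussian_cmi_matrix(env):
            """Pairwise conditional mutual information between EMG envelopes,
            conditioned on all remaining muscles, for Gaussian data:
            I(i; j | rest) = -0.5 * log(1 - rho_ij^2), where rho_ij is the
            partial correlation read off the precision (inverse covariance)
            matrix. env: (T, n_muscles) array of EMG envelopes."""
            prec = np.linalg.inv(np.cov(env, rowvar=False))
            d = np.sqrt(np.diag(prec))
            partial = -prec / np.outer(d, d)
            np.fill_diagonal(partial, 0.0)      # avoid log(0) on the diagonal
            return -0.5 * np.log(1.0 - partial ** 2)

    Thresholding this matrix yields the kind of sparse, locally dominated muscle network the abstract reports, which can then be compared across postural tasks.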

    Neuronal assembly dynamics in supervised and unsupervised learning scenarios

    The dynamic formation of groups of neurons, known as neuronal assemblies, is believed to mediate cognitive phenomena at many levels, but their detailed operation and mechanisms of interaction are still to be uncovered. One hypothesis suggests that synchronized oscillations underpin their formation and functioning, with a focus on the temporal structure of neuronal signals. In this context, we investigate neuronal assembly dynamics in two complementary scenarios: the first, a supervised spike pattern classification task, in which noisy variations of a collection of spikes have to be correctly labeled; the second, an unsupervised, minimally cognitive evolutionary robotics task, in which an evolved agent has to cope with multiple, possibly conflicting, objectives. In both cases, the more traditional dynamical analysis of the system's variables is paired with information-theoretic techniques in order to get a broader picture of the ongoing interactions with and within the network. The neural network model is inspired by the Kuramoto model of coupled phase oscillators and allows one to fine-tune the network's synchronization dynamics and assembly configuration. The experiments explore the computational power, redundancy, and generalization capability of neuronal circuits, demonstrating that performance depends nonlinearly on the number of assemblies and neurons in the network and showing that the framework can be exploited to generate minimally cognitive behaviors, with dynamic assembly formation accounting for varying degrees of stimulus modulation of the sensorimotor interactions.
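    One way to picture a tunable assembly configuration is a block-structured coupling matrix: oscillators couple strongly within an assembly and weakly across assemblies. The sketch below is an assumption about how such a configuration might be encoded, not the paper's construction; k_in and k_out are hypothetical parameters, and the resulting matrix could be fed into a Kuramoto-style phase update such as the one sketched earlier in this list.

        import numpy as np

        def assembly_coupling(n_assemblies, size, k_in=3.0, k_out=0.3):
            """Block coupling matrix: strong intra-assembly coupling (k_in),
            weak inter-assembly coupling (k_out), so each block can
            synchronize internally while staying loosely bound to the rest."""
            N = n_assemblies * size
            A = np.full((N, N), k_out)
            for a in range(n_assemblies):
                s = slice(a * size, (a + 1) * size)
                A[s, s] = k_in
            np.fill_diagonal(A, 0.0)
            return A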

    Generating functionals for autonomous latching dynamics in attractor relict networks

    Coupling local, slowly adapting variables to an attractor network makes it possible to destabilize all attractors, turning them into attractor ruins. The resulting attractor relict network may show ongoing autonomous latching dynamics. We propose to use two generating functionals for the construction of attractor relict networks: a Hopfield energy functional, which generates a neural attractor network, and a functional based on information-theoretic principles, encoding the information content of the neural firing statistics, which induces latching transitions from one transiently stable attractor ruin to the next. We investigate the influence of stress, in the form of conflicting optimization targets, on the resulting dynamics. Objective-function stress is absent when the target level for the mean neural activity is identical for the two generating functionals, and the resulting latching dynamics is then found to be regular. Objective-function stress is present when the respective target activity levels differ, inducing intermittent, bursting latching dynamics.
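    For concreteness, the Hopfield energy functional named above has, in its classic Hebbian form, the expression below; this standard form is an assumption, as the paper's exact functional may differ in how it constrains the mean activity.

        E[\mathbf{x}] = -\tfrac{1}{2} \sum_{i \neq j} w_{ij}\, x_i x_j,
        \qquad
        w_{ij} = \frac{1}{N} \sum_{\mu=1}^{p} \xi_i^{\mu}\, \xi_j^{\mu},

    where x is the vector of neural activities and the xi^mu are the p stored patterns. Descending the gradient of E drives the network into one of its attractors; the slow adaptation then destabilizes the currently occupied attractor, turning it into a ruin and triggering the next latching transition.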

    Energy landscape analysis of neuroimaging data

    Computational neuroscience models have been used to understand neural dynamics in the brain and how they may be altered when physiological or other conditions change. We review and develop a data-driven approach to neuroimaging data called energy landscape analysis. The methods are rooted in statistical physics, in particular the Ising model, also known as the (pairwise) maximum entropy model or Boltzmann machine. They have been applied to fitting electrophysiological data in neuroscience for a decade, but their use on neuroimaging data is still in its infancy. We first review the methods and discuss some algorithms and technical aspects. We then apply them to functional magnetic resonance imaging data recorded from healthy individuals to inspect the relationship between the accuracy of fitting, the size of the brain system analyzed, and the data length.

    Comment: 22 pages, 4 figures, 1 table.
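    At its core, the pairwise maximum entropy model assigns each binarized activity pattern sigma an energy E(sigma) = -sum_i h_i sigma_i - sum_{i<j} J_ij sigma_i sigma_j, and the landscape's wells are the states whose energy cannot be lowered by any single spin flip. A minimal sketch follows, assuming the parameters h and J have already been fitted to the binarized fMRI data (the fitting itself is omitted) and using brute-force enumeration that is only feasible for small systems.

        import itertools
        import numpy as np

        def energy(sigma, h, J):
            """Ising / pairwise maximum entropy energy, with J symmetric and
            zero on the diagonal: E = -h.sigma - sum_{i<j} J_ij sigma_i sigma_j."""
            return -h @ sigma - 0.5 * sigma @ J @ sigma

        def local_minima(h, J):
            """Enumerate all 2^N states (small N only) and keep those whose
            energy does not decrease under any single spin flip."""
            N = len(h)
            minima = []
            for bits in itertools.product([-1.0, 1.0], repeat=N):
                s = np.array(bits)
                e = energy(s, h, J)
                if all(energy(s * np.where(np.arange(N) == i, -1.0, 1.0), h, J) >= e
                       for i in range(N)):
                    minima.append((bits, e))
            return minima

    The number and depth of such minima are the landscape features this type of analysis examines; the abstract's concern is how reliably they can be estimated as the size of the brain system and the data length vary.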