704 research outputs found

    Extracting low-dimensional dynamics from multiple large-scale neural population recordings by learning to predict correlations

    A powerful approach for understanding neural population dynamics is to extract low-dimensional trajectories from population recordings using dimensionality reduction methods. Current approaches for dimensionality reduction on neural data are limited to single population recordings and cannot identify dynamics embedded across multiple measurements. We propose an approach for extracting low-dimensional dynamics from multiple, sequential recordings. Our algorithm scales to data comprising millions of observed dimensions, making it possible to access dynamics distributed across large populations or multiple brain areas. Building on subspace-identification approaches for dynamical systems, we perform parameter estimation by minimizing a moment-matching objective using a scalable stochastic gradient descent algorithm: the model is optimized to predict temporal covariations across neurons and across time. We show how this approach naturally handles missing data and multiple partial recordings, and can identify dynamics and predict correlations even in the presence of severe subsampling and small overlap between recordings. We demonstrate the effectiveness of the approach on both simulated data and a whole-brain larval zebrafish imaging dataset.
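
    As a rough illustration of the moment-matching idea (not the authors' implementation), the sketch below fits a latent linear dynamical system by matching model-predicted time-lagged covariances to empirical ones; the mask stands in for entries where two neurons were co-recorded, and all dimensions, lags, and parameter names are hypothetical.

        import numpy as np
        from scipy.linalg import solve_discrete_lyapunov
        from scipy.optimize import minimize

        def lagged_cov_loss(params, emp_covs, mask, n_obs, n_lat, lags):
            # Unpack latent dynamics A and readout C from the parameter vector.
            A = params[:n_lat * n_lat].reshape(n_lat, n_lat)
            C = params[n_lat * n_lat:].reshape(n_obs, n_lat)
            # Stationary latent covariance P solves A P A^T - P + I = 0.
            P = solve_discrete_lyapunov(A, np.eye(n_lat))
            loss = 0.0
            for k in lags:
                # Model-predicted lag-k covariance: Cov(y_{t+k}, y_t) = C A^k P C^T.
                pred = C @ np.linalg.matrix_power(A, k) @ P @ C.T
                # Only score entries where both neurons were actually recorded.
                loss += np.sum(mask * (pred - emp_covs[k]) ** 2)
            return loss

        # Tiny usage example on simulated data.
        rng = np.random.default_rng(0)
        n_lat, n_obs, T, lags = 2, 8, 3000, range(1, 4)
        A_true = np.array([[0.95, 0.1], [-0.1, 0.95]]) * 0.98  # stable rotation
        C_true = rng.normal(size=(n_obs, n_lat))
        x, Y = np.zeros(n_lat), np.empty((T, n_obs))
        for t in range(T):
            x = A_true @ x + rng.normal(size=n_lat)
            Y[t] = C_true @ x
        Y -= Y.mean(axis=0)
        emp_covs = {k: Y[k:].T @ Y[:T - k] / (T - k) for k in lags}
        mask = np.ones((n_obs, n_obs))  # all-ones mask: fully overlapping recordings
        x0 = 0.1 * rng.normal(size=n_lat * n_lat + n_obs * n_lat)
        fit = minimize(lagged_cov_loss, x0, args=(emp_covs, mask, n_obs, n_lat, lags),
                       method="L-BFGS-B", options={"maxiter": 200})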

    Discovering Common Change-Point Patterns in Functional Connectivity Across Subjects

    This paper studies change-points in human brain functional connectivity (FC) and seeks patterns that are common across multiple subjects under identical external stimulus. FC relates to the similarity of fMRI responses across different brain regions when the brain is simply resting or performing a task. While the dynamic nature of FC is well accepted, this paper develops a formal statistical test for finding change-points in time series associated with FC. It represents short-term connectivity by a symmetric positive-definite matrix, and uses a Riemannian metric on this space to develop a graphical method for detecting change-points in a time series of such matrices. It also provides a graphical representation of estimated FC for stationary subintervals in between the detected change-points. Furthermore, it uses a temporal alignment of the test statistic, viewed as a real-valued function over time, to remove inter-subject variability and to discover common change-point patterns across subjects. This method is illustrated using data from the Human Connectome Project (HCP) database for multiple subjects and tasks.
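
    A minimal sketch of the geometric ingredient, not the paper's full test: the affine-invariant Riemannian distance between short-term covariance matrices in adjacent windows, whose peaks over time suggest candidate change-points (window length and regularization here are arbitrary choices).

        import numpy as np
        from scipy.linalg import eigh

        def spd_dist(P, Q):
            # Affine-invariant distance d(P, Q) = ||log(P^{-1/2} Q P^{-1/2})||_F,
            # computed from the generalized eigenvalues of (Q, P).
            w = eigh(Q, P, eigvals_only=True)
            return np.sqrt(np.sum(np.log(w) ** 2))

        def changepoint_statistic(ts, win=30, eps=1e-6):
            # ts: (T, n_regions) time series. Compare covariance matrices of
            # adjacent windows; local peaks mark candidate FC change-points.
            T, n = ts.shape
            stat = np.full(T, np.nan)
            for t in range(win, T - win):
                P = np.cov(ts[t - win:t].T) + eps * np.eye(n)
                Q = np.cov(ts[t:t + win].T) + eps * np.eye(n)
                stat[t] = spd_dist(P, Q)
            return stat

        # Example: a covariance change halfway through a synthetic recording.
        rng = np.random.default_rng(0)
        ts = np.vstack([rng.normal(size=(200, 5)),
                        rng.normal(size=(200, 5)) @ np.diag([3, 1, 1, 1, 1])])
        print(np.nanargmax(changepoint_statistic(ts)))  # peaks near t = 200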

    Controllability Analysis of Functional Brain Networks

    Network control theory has recently emerged as a promising approach for understanding brain function and dynamics. By operationalizing notions of control theory for brain networks, it offers a fundamental explanation for how brain dynamics may be regulated by structural connectivity. While powerful, the approach does not currently consider other non-structural explanations of brain dynamics. Here we extend the analysis of network controllability by formalizing the evolution of neural signals as a function of effective inter-regional coupling and pairwise signal covariance. We find that functional controllability characterizes a region's impact on the capacity for the whole system to shift between states, and significantly predicts individual differences in performance on cognitively demanding tasks, including those involving working memory, language, and emotional intelligence. When comparing functional and structural controllability measurements, we observed consistent relations between average and modal controllability, supporting prior work. In the same comparison, we also observed distinct relations between controllability and synchronizability, reflecting the additional information obtained from functional signals. Our work suggests that network control theory can serve as a systematic analysis tool to understand the energetics of brain state transitions, associated cognitive processes, and subsequent behaviors.
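
    For concreteness, here is a small sketch of two standard node-level metrics from the network control literature (average and modal controllability, in the style commonly attributed to Gu et al.); the stability normalization and formulas are the widely used conventions, not necessarily this paper's exact functional-controllability pipeline.

        import numpy as np
        from scipy.linalg import eigh, solve_discrete_lyapunov

        def controllability_metrics(A):
            # A: symmetric connectivity matrix. Scale for stability using the
            # common convention A / (1 + sigma_max(A)).
            A = A / (1 + np.linalg.svd(A, compute_uv=False)[0])
            n = A.shape[0]
            lam, V = eigh(A)
            # Average controllability of node i: trace of the controllability
            # Gramian with input injected at node i alone (B = e_i).
            avg = np.empty(n)
            for i in range(n):
                B = np.zeros((n, 1)); B[i, 0] = 1.0
                avg[i] = np.trace(solve_discrete_lyapunov(A, B @ B.T))
            # Modal controllability: how strongly node i couples to the
            # fast-decaying (hard-to-reach) modes of the system.
            modal = ((1 - lam ** 2)[None, :] * V ** 2).sum(axis=1)
            return avg, modal

        # Example on a random symmetric "connectome".
        rng = np.random.default_rng(0)
        W = rng.normal(size=(20, 20)); W = (W + W.T) / 2
        avg_ctrl, modal_ctrl = controllability_metrics(W)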

    Unveiling the intrinsic dynamics of biological and artificial neural networks: from criticality to optimal representations

    Deciphering the underpinnings of the dynamical processes leading to information transmission, processing, and storing in the brain is a crucial challenge in neuroscience. An inspiring but speculative theoretical idea is that such dynamics should operate at the brink of a phase transition, i.e., at the edge between different collective phases, to entail a rich dynamical repertoire and optimize functional capabilities. In recent years, research guided by the advent of high-throughput data and new theoretical developments has contributed to a quantitative validation of this hypothesis. Here we review recent advances in this field, stressing our own contributions. In particular, we use data from thousands of individually recorded neurons in the mouse brain and tools such as phenomenological renormalization group analysis, the theory of disordered systems, and random matrix theory. These combined approaches provide novel evidence of quasi-universal scaling and near-critical behavior emerging in different brain regions. Moreover, we design artificial neural networks under the reservoir-computing paradigm and show that their internal dynamical states become near-critical when we tune the networks for optimal performance. These results not only open new perspectives for understanding the ultimate principles guiding brain function, but also pave the way toward brain-inspired, neuromorphic computation.
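
    A toy version of one tool mentioned above, the phenomenological renormalization group: repeatedly pair maximally correlated neurons and sum their activity, then track how activity variance scales with cluster size (near-critical data show nontrivial power-law scaling). This is a generic sketch on random data, not the review's analysis code.

        import numpy as np

        def coarse_grain(X):
            # X: (n_neurons, T) activity. One RG step: greedily pair each
            # neuron with its most correlated unpaired partner and sum the pair.
            n = X.shape[0] - (X.shape[0] % 2)
            X = X[:n]
            C = np.corrcoef(X)
            np.fill_diagonal(C, -np.inf)
            # Visit candidate pairs from most to least correlated.
            flat = np.argsort(C, axis=None)[::-1]
            pairs, used = [], np.zeros(n, dtype=bool)
            for idx in flat:
                i, j = np.unravel_index(idx, C.shape)
                if used[i] or used[j]:
                    continue
                used[i] = used[j] = True
                pairs.append(X[i] + X[j])
                if used.all():
                    break
            return np.asarray(pairs)

        # Track mean variance of coarse-grained variables across RG steps; the
        # growth exponent with cluster size K = 2^step is the scaling quantity
        # of interest (independent data give trivial linear growth).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(128, 5000))
        for step in range(1, 5):
            X = coarse_grain(X)
            print(f"K = {2 ** step}: mean variance = {X.var(axis=1).mean():.2f}")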

    Understanding the Role of Dynamics in Brain Networks: Methods, Theory and Application

    The brain is inherently a dynamical system whose networks interact at multiple spatial and temporal scales. Understanding the functional role of these dynamic interactions is a fundamental question in neuroscience. In this research, we approach this question through the development of new methods for characterizing brain dynamics from real data and new theories for linking dynamics to function. We perform our study at two scales: macro (at the level of brain regions) and micro (at the level of individual neurons). In the first part of this dissertation, we develop methods to identify the underlying dynamics at macro-scale that govern brain networks during states of health and disease in humans. First, we establish an optimization framework to actively probe connections in brain networks when the underlying network dynamics are changing over time. Then, we extend this framework to develop a data-driven approach for analyzing neurophysiological recordings without active stimulation, to describe the spatiotemporal structure of neural activity at different timescales. The overall goal is to detect how the dynamics of brain networks may change within and between particular cognitive states. We present the efficacy of this approach in characterizing spatiotemporal motifs of correlated neural activity during the transition from wakefulness to general anesthesia in functional magnetic resonance imaging (fMRI) data. Moreover, we demonstrate how such an approach can be utilized to construct an automatic classifier for detecting different levels of coma in electroencephalogram (EEG) data. In the second part, we study how ongoing function can constrain dynamics at micro-scale in recurrent neural networks, with particular application to sensory systems. Specifically, we develop theoretical conditions in a linear recurrent network, in the presence of both disturbance and noise, for exact and stable recovery of dynamic sparse stimuli applied to the network. We show how network dynamics can affect decoding performance in such systems. Moreover, we formulate the problem of efficient encoding of an afferent input and its history in a nonlinear recurrent network. We show that a linear neural network architecture with a thresholding activation function emerges if we assume that neurons optimize their activity based on a particular cost function. Such an architecture can enable the production of lightweight, history-sensitive encoding schemes.
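
    As a loose illustration of the sparse-recovery setting in the second part (not the dissertation's actual recovery conditions or decoder), the sketch below drives a small linear recurrent network with a sparse stimulus and recovers its support by l1-regularized regression; the network size, horizon, and LASSO penalty are arbitrary.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(1)
        n = 50
        # Stable random recurrent weights (spectral radius well below 1).
        A = rng.normal(scale=0.5 / np.sqrt(n), size=(n, n))
        u = np.zeros(n)
        u[rng.choice(n, size=3, replace=False)] = 1.0  # 3-sparse stimulus

        # Run x_{t+1} = A x_t + u for a few steps from rest.
        x, steps = np.zeros(n), 5
        for _ in range(steps):
            x = A @ x + u

        # After `steps` iterations, x = (sum_{k<steps} A^k) u; decode u from x.
        M = sum(np.linalg.matrix_power(A, k) for k in range(steps))
        dec = Lasso(alpha=1e-3, fit_intercept=False).fit(M, x)
        print("true support:     ", np.flatnonzero(u))
        print("recovered support:", np.flatnonzero(np.abs(dec.coef_) > 0.1))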

    Information-based Analysis and Control of Recurrent Linear Networks and Recurrent Networks with Sigmoidal Nonlinearities

    Linear dynamical models have served as an analytically tractable approximation for a variety of natural and engineered systems. Recently, such models have been used to describe high-level diffusive interactions in the activation of complex networks, including those in the brain. In this regard, classical tools from control theory, including controllability analysis, have been used to assay the extent to which such networks might respond to their afferent inputs. However, for natural systems such as brain networks, it is not clear whether advantageous control properties necessarily correspond to useful functionality. That is, are systems that are highly controllable (according to certain metrics) also ones that are suited to computational goals such as representing, preserving, and categorizing stimuli? This dissertation will introduce analysis methods that link the systems-theoretic properties of linear systems with informational measures that describe these functional characterizations. First, we assess sensitivity of a linear system to input orientation and novelty by deriving a measure of how networks translate input orientation differences into readable state trajectories. Next, we explore the implications of this novelty-sensitivity for endpoint-based input discrimination, wherein stimuli are decoded in terms of their induced representation in the state space. We develop a theoretical framework for the exploration of how networks utilize excess input energy to enhance orientation sensitivity (and thus discrimination ability). Next, we conduct a theoretical study to reveal how the background or default state of a network with linear dynamics allows it to best promote discrimination over a continuum of stimuli. Specifically, we derive a measure, based on the classical notion of a Fisher discriminant, quantifying the extent to which the state of a network encodes information about its afferent inputs. This measure provides an information value quantifying the knowability of an input based on its projection onto the background state. We subsequently optimize this background state, and characterize both the optimal background and the inputs giving it rise. Finally, we extend this information-based network analysis to include networks with nonlinear dynamics--specifically, ones involving sigmoidal saturating functions. We employ a quasilinear approximation technique, novel here in terms of its multidimensionality and specific application, to approximate the nonlinear dynamics by scaling a corresponding linear system and biasing by an offset term. A Fisher information-based metric is derived for the quasilinear system, with analytical and numerical results showing that Fisher information is higher for the quasilinear (hence sigmoidal) system than for an unconstrained linear system. Interestingly, this relation reverses when the noise is placed outside the sigmoid in the model, supporting conclusions extant in the literature that the relative alignment of the state and noise covariance is predictive of Fisher information. We show that there exists a clear trade-off between informational advantage, as conferred by the presence of sigmoidal nonlinearities, and speed of dynamics.
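
    A minimal numerical illustration of the Fisher-information quantity at play, with made-up numbers rather than the dissertation's derivation: for a linear-Gaussian response, F = dmu^T Sigma^{-1} dmu, and it is larger when the noise covariance avoids the signal direction, matching the alignment intuition cited above.

        import numpy as np

        def linear_fisher(dmu, Sigma):
            # Fisher information about a scalar stimulus parameter for a
            # Gaussian response with stimulus-dependent mean and fixed
            # covariance: F = dmu^T Sigma^{-1} dmu.
            return float(dmu @ np.linalg.solve(Sigma, dmu))

        dmu = np.array([1.0, 0.2])                 # signal direction
        noise_on_signal = np.diag([2.0, 0.1])      # noise aligned with signal
        noise_off_signal = np.diag([0.1, 2.0])     # noise away from signal
        print(linear_fisher(dmu, noise_on_signal))   # low information (~0.9)
        print(linear_fisher(dmu, noise_off_signal))  # high information (~10.0)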

    Network neuroscience and the connectomics revolution

    Connectomics and network neuroscience offer quantitative scientific frameworks for modeling and analyzing networks of structurally and functionally interacting neurons, neuronal populations, and macroscopic brain areas. This shift in perspective and emphasis on distributed brain function has provided fundamental insight into the role played by the brain's network architecture in cognition, disease, development, and aging. In this chapter, we review the core concepts of human connectomics at the macroscale. From the construction of networks using functional and diffusion MRI data to their subsequent analysis using methods from network neuroscience, this review highlights key findings and commonly used methodologies, and discusses several emerging frontiers in connectomics.
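
    To make the macroscale pipeline concrete, a generic sketch of building and probing a functional network from regional time series (random data here; the threshold and graph measures are illustrative, and the networkx library is assumed).

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(0)
        ts = rng.normal(size=(200, 90))        # 200 volumes x 90 brain regions
        fc = np.corrcoef(ts.T)                 # functional connectivity matrix
        np.fill_diagonal(fc, 0.0)
        adj = np.where(fc > 0.2, fc, 0.0)      # keep stronger positive edges

        G = nx.from_numpy_array(adj)
        print("density:   ", nx.density(G))
        print("clustering:", nx.average_clustering(G, weight="weight"))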

    Dynamical structure in neural population activity

    The question of how the collective activity of neural populations in the brain gives rise to complex behaviour is fundamental to neuroscience. At the core of this question lie considerations about how neural circuits can perform computations that enable sensory perception, motor control, and decision making. It is thought that such computations are implemented by the dynamical evolution of distributed activity in recurrent circuits. Thus, identifying and interpreting dynamical structure in neural population activity is a key challenge towards a better understanding of neural computation. In this thesis, I make several contributions in addressing this challenge. First, I develop two novel methods for neural data analysis. Both methods aim to extract trajectories of low-dimensional computational state variables directly from the unbinned spike times of simultaneously recorded neurons on single trials. The first method separates inter-trial variability in the low-dimensional trajectory from variability in the timing of progression along its path, and thus offers a quantification of inter-trial variability in the underlying computational process. The second method simultaneously learns a low-dimensional portrait of the underlying nonlinear dynamics of the circuit, as well as the system's fixed points and locally linearised dynamics around them. This approach facilitates extracting interpretable low-dimensional hypotheses about computation directly from data. Second, I turn to the question of how low-dimensional dynamical structure may be embedded within a high-dimensional neurobiological circuit with excitatory and inhibitory cell-types. I analyse how such circuit-level features shape population activity, with particular focus on responses to targeted optogenetic perturbations of the circuit. Third, I consider the problem of implementing multiple computations in a single dynamical system. I address this in the framework of multi-task learning in recurrently connected networks and demonstrate that a careful organisation of low-dimensional, activity-defined subspaces within the network can help to avoid interference across tasks.
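
    As a sketch of the fixed-point idea behind the second method (a generic vanilla RNN stands in for the fitted dynamics; W is random, not learned from data): minimize the state speed ||F(x) - x||^2, then linearise around the optimum.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        n = 20
        W = rng.normal(scale=1.2 / np.sqrt(n), size=(n, n))
        F = lambda x: np.tanh(W @ x)  # stand-in discrete-time dynamics

        # A fixed point x* satisfies F(x*) = x*; search by minimizing speed.
        speed = lambda x: 0.5 * np.sum((F(x) - x) ** 2)
        res = minimize(speed, rng.normal(size=n), method="L-BFGS-B")
        x_star = res.x

        # Locally linearised dynamics: Jacobian J = diag(1 - tanh(W x*)^2) W;
        # eigenvalues with |lambda| > 1 mark locally unstable directions.
        J = np.diag(1.0 - np.tanh(W @ x_star) ** 2) @ W
        print("residual speed:", res.fun)
        print("max |eigenvalue|:", np.abs(np.linalg.eigvals(J)).max())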

    EEG Spatial Decoding and Classification with Logit Shrinkage Regularized Directed Information Assessment (L-SODA)

    There is an increasing interest in studying the neural interaction mechanisms behind patterns of cognitive brain activity. This paper proposes a new approach to infer such interaction mechanisms from electroencephalographic (EEG) data using a new estimator of directed information (DI) called logit shrinkage optimized directed information assessment (L-SODA). Unlike previous directed information measures applied to neural decoding, L-SODA uses shrinkage regularization on multinomial logistic regression to deal with the high dimensionality of multi-channel EEG signals and the small sizes of many real-world datasets. It is designed to make few a priori assumptions and can handle both non-linear and non-Gaussian flows among electrodes. Our L-SODA estimator of the DI is accompanied by robust statistical confidence intervals on the true DI that make it especially suitable for hypothesis testing on information flow patterns. We evaluate our work in the context of two different problems where interaction localization is used to determine highly interactive areas of EEG signals, spatially and temporally. First, by mapping the areas that have high DI onto Brodmann areas, we find that areas with high DI are associated with motor-related functions. We demonstrate that L-SODA provides better accuracy for neural decoding of EEG signals compared to several state-of-the-art approaches on the Brain Computer Interface (BCI) EEG motor activity dataset. Second, the proposed L-SODA estimator is evaluated on the CHB-MIT Scalp EEG database. We demonstrate that, compared to state-of-the-art approaches, the proposed method provides better performance in detecting epileptic seizures.
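
    A simplified stand-in for the directed-information idea, not the authors' L-SODA estimator: plain l2-regularized logistic regression replaces the shrinkage-optimized multinomial model, in-sample likelihoods replace confidence intervals, and the gain in predictive log-likelihood of a binarized target channel when the source channel's past is added serves as a crude proxy for directed information.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def di_gain(x, y, p=5):
            # Predict whether y[t] is above its median from p past samples of
            # y alone, then from pasts of both y and x; the log-likelihood
            # gain is a rough proxy for the directed information x -> y.
            T = len(y)
            target = (y[p:] > np.median(y)).astype(int)
            past_y = np.stack([y[p - k:T - k] for k in range(1, p + 1)], axis=1)
            past_x = np.stack([x[p - k:T - k] for k in range(1, p + 1)], axis=1)

            def mean_loglik(Z):
                model = LogisticRegression(C=1.0).fit(Z, target)
                logp = model.predict_log_proba(Z)
                return logp[np.arange(len(target)), target].mean()

            return mean_loglik(np.hstack([past_y, past_x])) - mean_loglik(past_y)

        # Example: x drives y with a one-sample delay, so x -> y dominates.
        rng = np.random.default_rng(0)
        x = rng.normal(size=2000)
        y = np.roll(x, 1) + 0.5 * rng.normal(size=2000)
        print(di_gain(x, y), di_gain(y, x))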

    Parkinson’s Disease Brain States and Functional Connectivity: A Machine Learning Analysis of Neuroimaging Data

    Bachelor's degree final project (Treballs Finals de Grau d'Enginyeria InformĂ tica), Facultat de MatemĂ tiques, Universitat de Barcelona, Year: 2020, Advisor: Ignasi Cos Aguilera. The goal of this study is to identify and characterize brain states as a function of the motivation with which the task was performed (the presence of avatars and their skill at performing the task). To this end, we developed a series of machine learning algorithms capable of capturing differences between the EEG data recorded under each condition. We used metrics of local activity, such as electrode power, of similarity (correlation between electrodes), and of network functional connectivity (covariance across electrodes), and used them to cluster brain states and to identify network connectivity patterns typical of each motivated state. Studies in the field of computational neuroscience involve the analysis of brain dynamics across specific brain areas to study the mechanisms underlying brain activity. This particular study aims to discover, by computational means, how brain activity is affected by social motivation. To this end, we analyzed a dataset of electroencephalographic (EEG) data recorded previously during a reward-driven decision-making experiment performed by Parkinson's patients. The goal of the experiment was to select and perform a reaching movement from an origin cue to one of two possible wide rectangular targets. Reward was contingent upon arrival precision. Social motivation was manipulated by simulating avatar partners of varying skill with whom our participants played. Competition with the avatar was explicitly discouraged. Our results show that the presence of different avatars yielded distinct brain states, characterized by means of functional connectivity and local activity. Specifically, we observed that motivation-related states were best identified in the highest frequency band (gamma band) of the EEGs. In summary, this study has shown that brain states can be characterized by level of motivation with a high degree of accuracy, independently of the presence of medication.
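
    A condensed, hypothetical version of the feature pipeline described above (synthetic data, arbitrary epoch length, gamma taken as 30-45 Hz, and k = 3 clusters): per-epoch electrode power plus pairwise covariance, clustered into putative brain states.

        import numpy as np
        from scipy.signal import welch
        from sklearn.cluster import KMeans

        fs, n_elec = 250, 32
        rng = np.random.default_rng(0)
        epochs = rng.normal(size=(120, n_elec, 2 * fs))  # 120 two-second epochs

        features = []
        for ep in epochs:
            f, pxx = welch(ep, fs=fs, nperseg=fs)
            gamma = pxx[:, (f >= 30) & (f <= 45)].mean(axis=1)  # local activity
            cov = np.cov(ep)[np.triu_indices(n_elec, k=1)]      # connectivity
            features.append(np.concatenate([gamma, cov]))

        states = KMeans(n_clusters=3, n_init=10).fit_predict(np.asarray(features))
        print(np.bincount(states))  # epochs assigned to each putative state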
    • …