
    Dynamic Effective Connectivity of Inter-Areal Brain Circuits

    Anatomic connections between brain areas affect information flow between neuronal circuits and the synchronization of neuronal activity. However, such structural connectivity does not coincide with effective connectivity (or, more precisely, causal connectivity), related to the elusive question “Which areas cause the present activity of which others?”. Effective connectivity is directed and depends flexibly on contexts and tasks. Here we show that dynamic effective connectivity can emerge from transitions in the collective organization of coherent neural activity. Integrating simulation and semi-analytic approaches, we study mesoscale network motifs of interacting cortical areas, modeled as large random networks of spiking neurons or as simple rate units. Through a causal analysis of time series of model neural activity, we show that different dynamical states generated by the same structural connectivity motif correspond to distinct effective connectivity motifs. Such effective motifs can display a dominant directionality, due to spontaneous symmetry breaking and effective entrainment between local brain rhythms, even though all connections in the considered structural motifs are reciprocal. We then show that transitions between effective connectivity configurations (for instance, a reversal in the direction of inter-areal interactions) can be triggered reliably by brief perturbation inputs, properly timed with respect to an ongoing local oscillation, without the need for plastic synaptic changes. Finally, we analyze how the information encoded in the spiking patterns of a local neuronal population is propagated across a fixed structural connectivity motif, demonstrating that changes in the active effective connectivity regulate both the efficiency and the directionality of information transfer. Previous studies stressed the role played by coherent oscillations in establishing efficient communication between distant areas. Going beyond these early proposals, we advance here that dynamic interactions between brain rhythms also provide the basis for the self-organized control of this “communication-through-coherence”, thus making possible a fast “on-demand” reconfiguration of global information routing modalities.
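
    As a toy illustration of probing directed (effective) interactions from activity time series, the sketch below simulates two reciprocally coupled, noise-driven rate units and runs a standard Granger-causality test on the result. It is a minimal stand-in for the causal analysis described above, not the paper's actual spiking-network models; all parameter values are illustrative.

```python
# Minimal sketch: two reciprocally coupled, noise-driven rate units,
# probed with a Granger-causality test on the simulated time series.
# Illustrative stand-in only; parameters are not from the paper.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
T, dt, tau = 5000, 1e-3, 10e-3               # steps, s, s
w = 0.9                                      # symmetric structural coupling
x = np.zeros((T, 2))
for t in range(1, T):
    noise = 0.5 * rng.standard_normal(2)
    x[t, 0] = x[t-1, 0] + dt / tau * (-x[t-1, 0] + np.tanh(w * x[t-1, 1]) + noise[0])
    x[t, 1] = x[t-1, 1] + dt / tau * (-x[t-1, 1] + np.tanh(w * x[t-1, 0]) + noise[1])

# statsmodels convention: the test asks whether the SECOND column
# Granger-causes the first, so each ordering probes one direction.
res_1_to_0 = grangercausalitytests(x[:, [0, 1]], maxlag=5)
res_0_to_1 = grangercausalitytests(x[:, [1, 0]], maxlag=5)
```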

    Multi-level Architecture of Experience-based Neural Representations

    Extraction of Network Topology From Multi-Electrode Recordings: Is there a Small-World Effect?

    The simultaneous recording of the activity of many neurons poses challenges for multivariate data analysis. Here, we propose a general scheme for reconstructing the functional network from spike train recordings. Effective, causal interactions are estimated by fitting generalized linear models to the neural responses, incorporating effects of the neurons’ self-history, of input from other neurons in the recorded network, and of modulation by an external stimulus. The coupling terms arising from synaptic input can be transformed by thresholding into a binary, directed connectivity matrix. Each link between two neurons represents a causal influence from one neuron to the other, given the observation of all other neurons in the population. The resulting graph is analyzed with respect to small-world and scale-free properties using quantitative measures for directed networks. Such graph-theoretic analyses have been performed on many complex dynamic networks, including the connectivity structure between different brain areas, but only a few studies have attempted to examine the structure of cortical neural networks at the level of individual neurons. Here, using multi-electrode recordings from the visual system of the awake monkey, we find that cortical networks lack scale-free behavior but show a small yet significant small-world structure. Assuming a simple distance-dependent probabilistic wiring between neurons, we find that this connectivity structure can account for all of the networks’ observed small-world-ness. Moreover, for multi-electrode recordings the sampling of neurons is not uniform across the population. We show that the small-world-ness obtained by such localized sub-sampling overestimates the strength of the true small-world structure of the network. This bias is likely to be present in all previous experiments based on multi-electrode recordings.
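
    A minimal sketch of the GLM-based coupling estimation described above might look as follows, assuming binned spike counts and a single-bin lag structure (the actual analysis uses richer self-history, coupling, and stimulus covariates); the function name and threshold value are illustrative.

```python
# Sketch: per-neuron Poisson GLM with lagged population covariates,
# thresholded into a binary directed connectivity matrix.
# `spikes` is assumed to be a (time_bins x n_neurons) count array.
import numpy as np
import statsmodels.api as sm

def coupling_graph(spikes, lag=1, thresh=0.1):
    T, n = spikes.shape
    A = np.zeros((n, n))                         # A[i, j]: influence j -> i
    X = sm.add_constant(spikes[:-lag])           # intercept + lagged activity
    for i in range(n):
        y = spikes[lag:, i]                      # response of neuron i
        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        A[i] = fit.params[1:]                    # coupling terms (incl. self)
    np.fill_diagonal(A, 0.0)                     # self-history is not an edge
    return (np.abs(A) > thresh).astype(int)      # binary directed graph

# Demonstration on featureless surrogate data (expect a near-empty graph):
rng = np.random.default_rng(1)
adj = coupling_graph(rng.poisson(0.2, size=(5000, 8)))
```

    The resulting directed adjacency matrix is what would then enter the small-world and scale-free analyses (clustering and path-length measures for directed graphs, compared against matched random controls).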

    Discovering Causal Relations and Equations from Data

    Physics is a field of science that has traditionally used the scientific method to answer questions about why natural phenomena occur and to make testable models that explain these phenomena. Discovering equations, laws, and principles that are invariant, robust, and causal explanations of the world has been fundamental in the physical sciences throughout the centuries. Discoveries emerge from observing the world and, when possible, performing interventional studies in the system under study. With the advent of big data and the use of data-driven methods, the fields of causal and equation discovery have grown and made progress in computer science, physics, statistics, philosophy, and many applied fields. All these domains are intertwined and can be used to discover causal relations, physical laws, and equations from observational data. This paper reviews the concepts, methods, and relevant works on causal and equation discovery in the broad field of physics and outlines the most important challenges and promising future lines of research. We also provide a taxonomy for observational causal and equation discovery, point out connections, and showcase a complete set of case studies in Earth and climate sciences, fluid dynamics and mechanics, and the neurosciences. This review demonstrates that discovering fundamental laws and causal relations by observing natural phenomena is being revolutionised by the efficient exploitation of observational data, modern machine learning algorithms, and interaction with domain knowledge. Exciting times are ahead with many challenges and opportunities to improve our understanding of complex systems.
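
    One concrete flavor of observational equation discovery covered by such reviews is sparse regression over a library of candidate terms (SINDy-style sequential thresholded least squares). The sketch below, on an illustrative one-dimensional toy system, recovers dx/dt = x - x^3 from a simulated trajectory; it is a minimal demonstration, not the paper's taxonomy.

```python
# Sketch of SINDy-style equation discovery: sequential thresholded least
# squares over a library of candidate terms, applied to a toy system
# with true dynamics dx/dt = x - x**3.
import numpy as np

dt, T = 1e-3, 10000
x = np.empty(T); x[0] = 0.5
for t in range(T - 1):
    x[t+1] = x[t] + dt * (x[t] - x[t]**3)        # simulate the "unknown" system

dxdt = np.gradient(x, dt)                        # estimate derivatives from data
library = np.column_stack([np.ones_like(x), x, x**2, x**3])
names = ["1", "x", "x^2", "x^3"]

coef = np.linalg.lstsq(library, dxdt, rcond=None)[0]
for _ in range(10):                              # iterate: threshold, then refit
    small = np.abs(coef) < 0.1
    coef[small] = 0.0
    coef[~small] = np.linalg.lstsq(library[:, ~small], dxdt, rcond=None)[0]

print({n: round(c, 3) for n, c in zip(names, coef) if c != 0.0})
# expected output: approximately {'x': 1.0, 'x^3': -1.0}
```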

    Extracting Neuronal Dynamics at High Spatiotemporal Resolutions: Theory, Algorithms, and Application

    Analyses of neuronal activity have revealed that various types of neurons, both at the single-unit and population level, undergo rapid dynamic changes in their response characteristics and their connectivity patterns in order to adapt to variations in the behavioral context or stimulus condition. In addition, these dynamics often admit parsimonious representations. Despite growing advances in neural modeling and data acquisition technology, a unified signal processing framework capable of capturing the adaptivity, sparsity, and statistical characteristics of neural dynamics is lacking. The objective of this dissertation is to develop such a signal processing methodology in order to gain a deeper insight into the dynamics of neuronal ensembles underlying behavior, and consequently a better understanding of how the brain functions. The first part of this dissertation concerns the dynamics of stimulus-driven neuronal activity at the single-unit level. We develop a sparse adaptive filtering framework for the identification of neuronal response characteristics from spiking activity. We present a rigorous theoretical analysis of our proposed sparse adaptive filtering algorithms and characterize their performance guarantees. Application of our algorithms to experimental data provides new insights into the dynamics of attention-driven neuronal receptive field plasticity, with a substantial increase in temporal resolution. In the second part, we focus on the network-level properties of neuronal dynamics, with the goal of identifying the causal interactions within neuronal ensembles that underlie behavior. Building on the results of the first part, we introduce a new measure of causality, namely the Adaptive Granger Causality (AGC), which captures the sparsity and dynamics of the causal influences in a neuronal network in a statistically robust and computationally efficient fashion. We develop a precise statistical inference framework for the estimation of AGC from simultaneous recordings of the activity of neurons in an ensemble. Finally, in the third part we demonstrate the utility of our proposed methodologies through application to synthetic and real data. We first validate our theoretical results using comprehensive simulations, and assess the performance of the proposed methods in terms of estimation accuracy and tracking capability. These results confirm that our algorithms provide significant gains in comparison to existing techniques. Furthermore, we apply our methodology to various experimentally recorded data from electrophysiology and optical imaging: 1) application of our methods to simultaneous spike recordings from the ferret auditory and prefrontal cortical areas reveals the dynamics of top-down and bottom-up functional interactions underlying attentive behavior at unprecedented spatiotemporal resolutions; 2) our analyses of two-photon imaging data from the mouse auditory cortex shed light on the sparse dynamics of functional networks under both spontaneous activity and auditory tone detection tasks; and 3) application of our methods to whole-brain light-sheet imaging data from larval zebrafish reveals unique insights into the organization of functional networks involved in visuo-motor processing.
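
    The following is a generic sketch of the sparse adaptive filtering idea, an l1-regularized LMS update tracking a drifting sparse autoregressive model, offered only to convey the flavor of combining adaptivity with sparsity; it is not the dissertation's estimator or its AGC inference framework, and all parameters are illustrative.

```python
# Sketch of sparse adaptive filtering: l1-regularized LMS tracking a
# drifting sparse autoregressive coefficient vector. Generic illustration,
# not the dissertation's Adaptive Granger Causality estimator.
import numpy as np

def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

rng = np.random.default_rng(0)
T, p = 6000, 10
w_true = np.zeros(p); w_true[0] = 0.8        # sparse AR(10): only lag 1 active
y = np.zeros(T)
for t in range(p, T):
    if t == T // 2:
        w_true[:] = 0.0
        w_true[4] = 0.8                      # abrupt change: lag 5 takes over
    y[t] = w_true @ y[t-p:t][::-1] + 0.5 * rng.standard_normal()

w, mu, lam = np.zeros(p), 0.05, 1e-3
for t in range(p, T):
    xlag = y[t-p:t][::-1]                    # [y[t-1], ..., y[t-p]]
    err = y[t] - w @ xlag
    w = soft_threshold(w + mu * err * xlag, lam)   # LMS step + l1 shrinkage

print(np.round(w, 2))                        # weight mass should sit on lag 5
```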

    A Comprehensive Workflow for General-Purpose Neural Modeling with Highly Configurable Neuromorphic Hardware Systems

    In this paper we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim to establish this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware-experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: the integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; and an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations, and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter are demonstrated with a variety of experimental results.
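
    For concreteness, a minimal PyNN model description of the kind such a biology-to-hardware mapping flow consumes could look like the sketch below. It is run here against the software NEST backend; the actual hardware backend module and the benchmark models are project-specific, so none of the model details are taken from the paper.

```python
# Minimal PyNN network of the kind a model-to-hardware mapping flow
# consumes; here executed on the software NEST backend. A hardware
# backend module would be imported instead (project-specific, assumed).
import pyNN.nest as sim

sim.setup(timestep=0.1)                      # ms

exc = sim.Population(80, sim.IF_cond_exp(tau_m=15.0), label="exc")
inh = sim.Population(20, sim.IF_cond_exp(tau_m=15.0), label="inh")
noise = sim.Population(80, sim.SpikeSourcePoisson(rate=20.0))

sim.Projection(noise, exc, sim.OneToOneConnector(),
               sim.StaticSynapse(weight=0.01, delay=1.0))
sim.Projection(exc, inh, sim.FixedProbabilityConnector(0.1),
               sim.StaticSynapse(weight=0.005, delay=1.0))
sim.Projection(inh, exc, sim.FixedProbabilityConnector(0.1),
               sim.StaticSynapse(weight=0.02, delay=1.0),
               receptor_type="inhibitory")

exc.record("spikes")
sim.run(1000.0)                              # ms
spikes = exc.get_data().segments[0].spiketrains
sim.end()
```

    Because the model is expressed against the simulator-independent PyNN API, the same script can in principle be retargeted from a reference software simulation to a virtual or prototype hardware device, which is the comparison the evaluation scheme above relies on.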

    From specialists to generalists: inductive biases of deep learning for higher-level cognition

    Current neural networks achieve state-of-the-art results across a range of challenging problem domains. Given enough data and computation, they can achieve human-level results on almost any task. In this sense, we have been able to train specialists that perform a particular task very well, whether it is the game of Go, playing Atari games, Rubik's cube manipulation, captioning images, or drawing images from captions. The next challenge for AI is to devise methods to train generalists that, when exposed to multiple tasks during training, can quickly adapt to new, unknown tasks. Without any assumptions about the data-generating distribution, it may not be possible to achieve better generalization and adaptation to new (unknown) tasks. A fascinating possibility is that human and animal intelligence could be explained by a few principles rather than an encyclopedia of facts. If that were the case, we could more easily both understand our own intelligence and build intelligent machines. Just as in physics, the principles themselves would not be sufficient to predict the behavior of complex systems like brains, and substantial computation might be needed to simulate human intelligence. In addition, we know that real brains incorporate detailed task-specific a priori knowledge that could not fit in a short list of simple principles. We therefore think of that short list as explaining the ability of brains to learn and adapt efficiently to new environments, which is a great part of what we need for AI. If this simplicity-of-principles hypothesis were correct, it would suggest that studying the kind of inductive biases (another way to think about design principles and priors, in the case of learning systems) that humans and animals exploit could help both clarify these principles and provide inspiration for AI research. Deep learning already exploits several key inductive biases, and my work considers a larger list, focusing on those that mostly concern higher-level cognitive processing. My work focuses on designing such models by incorporating strong but general assumptions (inductive biases) that enable high-level reasoning about the structure of the world. This research program is both ambitious and practical, yielding concrete algorithms as well as a cohesive vision for long-term research towards generalization in a complex and changing world.

    Whole Brain Network Dynamics of Epileptic Seizures at Single Cell Resolution

    Epileptic seizures are characterised by abnormal brain dynamics at multiple scales, engaging single neurons, neuronal ensembles and coarse brain regions. Key to understanding the cause of such emergent population dynamics is capturing the collective behaviour of neuronal activity at multiple brain scales. In this thesis I make use of the larval zebrafish to capture single cell neuronal activity across the whole brain during epileptic seizures. Firstly, I use statistical physics methods to quantify the collective behaviour of single neuron dynamics during epileptic seizures. Here, I demonstrate a population mechanism through which single neuron dynamics organise into seizures: brain dynamics deviate from a phase transition. Secondly, I use single neuron network models to identify the synaptic mechanisms that cause this shift to occur. Here, I show that the density of neuronal connections in the network is key for driving generalised seizure dynamics. Interestingly, such changes also disrupt network response properties and flexible dynamics in brain networks, thus linking microscale neuronal changes with emergent brain dysfunction during seizures. Thirdly, I use non-linear causal inference methods to study the nature of the underlying neuronal interactions that enable seizures to occur. Here I show that seizures are driven by high synchrony but also by highly non-linear interactions between neurons. Interestingly, these non-linear signatures are filtered out at the macroscale, and may therefore represent a neuronal signature that could be used for microscale interventional strategies. This thesis demonstrates the utility of studying multi-scale dynamics in the larval zebrafish to link neuronal activity at the microscale with emergent properties during seizures.
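
    As a toy illustration of the connection-density effect, the sketch below measures synchrony (the Kuramoto order parameter) in random networks of phase oscillators as the wiring probability increases. This is a deliberately simplified stand-in for the thesis's biophysical single-neuron network models; all parameters are illustrative.

```python
# Toy sketch: synchrony vs. connection density for Kuramoto phase
# oscillators on a random directed graph. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
N, dt, steps, K = 100, 0.05, 2000, 4.0
omega = rng.normal(0.0, 1.0, N)                  # heterogeneous frequencies

def synchrony(p_connect):
    A = (rng.random((N, N)) < p_connect).astype(float)
    np.fill_diagonal(A, 0.0)
    theta = rng.uniform(0.0, 2.0 * np.pi, N)
    for _ in range(steps):
        diff = np.sin(theta[None, :] - theta[:, None])   # sin(theta_j - theta_i)
        theta += dt * (omega + K / N * (A * diff).sum(axis=1))
    return abs(np.exp(1j * theta).mean())        # Kuramoto order parameter

for p in (0.02, 0.1, 0.3, 0.6):
    print(p, round(synchrony(p), 2))             # synchrony rises with density
```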

    Neural network mechanisms of working memory interference

    Our ability to memorize is at the core of our cognitive abilities. How could we effectively make decisions without considering memories of previous experiences? Broadly, our memories can be divided into two categories: long-term and short-term memories. Short-term memory is also called working memory, and throughout this thesis I will use both terms interchangeably. As the names suggest, long-term memory is the memory you use when you remember concepts for a long time, such as your name or age, while short-term memory is the system you engage while choosing between different wines at the liquor store. As your attention jumps from one bottle to another, you need to hold in memory the characteristics of previous ones to pick your favourite. By the time you pick your favourite bottle, you might remember the prices or grape types of the other bottles, but you are likely to forget all of those details an hour later at home, opening the wine in front of your guests. The overall goal of this thesis is to study the neural mechanisms that underlie working memory interference, as reflected in quantitative, systematic behavioral biases. Ultimately, the goal of each chapter, even when focused exclusively on behavioral experiments, is to pin down plausible neural mechanisms that can produce specific behavioral and neurophysiological findings. To this end, we use the bump-attractor model as our working hypothesis, with which we often contrast the synaptic working memory model. The work performed during this thesis is described here in 3 main chapters, encapsulating 5 broad goals. In Chapter 4.1, we aim to test behavioral predictions of a bump-attractor network when used to store multiple items (1); moreover, we connect two such networks, aiming to model feature binding through selectivity synchronization (2). In Chapter 4.2, we aim to clarify the mechanisms of working memory interference from previous memories (3), the so-called serial biases. These biases provide an excellent opportunity to contrast activity-based and activity-silent mechanisms, because both have been proposed as their underlying cause. In Chapter 4.3, armed with the same techniques used to seek evidence for activity-silent mechanisms, we test a prediction of the bump-attractor model with short-term plasticity (4). Finally, in light of the results from aim 4 and simple computer simulations, we reinterpret previous studies claiming evidence for activity-silent mechanisms (5).
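
    A minimal rate-model sketch of the bump-attractor hypothesis used as the working model above: a ring of threshold-linear units with tuned excitation and uniform inhibition holds a stimulus-evoked activity bump after the cue is removed, storing the cued location in persistent activity. Parameters are illustrative, not those of the thesis's models.

```python
# Minimal ring ("bump") attractor: tuned excitation plus uniform
# inhibition lets a brief cue leave a persistent bump of activity that
# stores the cued location. Parameters are illustrative.
import numpy as np

N, dt, tau = 128, 0.5, 10.0                      # units, ms, ms
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
J0, J1 = -2.0, 6.0                               # uniform inhibition, tuned excitation
W = J0 + J1 * np.cos(theta[:, None] - theta[None, :])

r = np.zeros(N)
for step in range(4000):                         # 2 s of simulated time
    t = step * dt
    cue = 2.0 * np.exp(-theta**2) * (t < 250.0)  # transient cue at theta = 0
    drive = W @ r / N + 1.0 + cue                # recurrent + uniform + cue input
    r += dt / tau * (-r + np.maximum(drive, 0.0))

print(round(theta[np.argmax(r)], 2))             # bump persists near 0 after cue offset
```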