659 research outputs found

    Whole Brain Network Dynamics of Epileptic Seizures at Single Cell Resolution

    Epileptic seizures are characterised by abnormal brain dynamics at multiple scales, engaging single neurons, neuronal ensembles and coarse brain regions. Key to understanding the cause of such emergent population dynamics is capturing the collective behaviour of neuronal activity across multiple brain scales. In this thesis I use the larval zebrafish to record single-cell neuronal activity across the whole brain during epileptic seizures. First, I use statistical physics methods to quantify the collective behaviour of single-neuron dynamics during epileptic seizures. Here, I demonstrate a population mechanism through which single-neuron dynamics organise into seizures: brain dynamics deviate from a phase transition. Second, I use single-neuron network models to identify the synaptic mechanisms that cause this shift to occur. Here, I show that the density of neuronal connections in the network is key to driving generalised seizure dynamics. Interestingly, such changes also disrupt network response properties and flexible dynamics in brain networks, thus linking microscale neuronal changes with emergent brain dysfunction during seizures. Third, I use non-linear causal inference methods to study the nature of the underlying neuronal interactions that enable seizures to occur. Here I show that seizures are driven not only by high synchrony but also by highly non-linear interactions between neurons. Interestingly, these non-linear signatures are filtered out at the macroscale, and may therefore represent a neuronal signature that could be used for microscale interventional strategies. This thesis demonstrates the utility of studying multi-scale dynamics in the larval zebrafish to link neuronal activity at the microscale with emergent properties during seizures.

    Structure-function relation in a stochastic whole-brain model at criticality

    Understanding the relation between brain architecture and function is one of the central issues in contemporary neuroscience. In the last few years, important efforts have been devoted to mapping the large-scale structure of the human cortex, the so-called "connectome". An example is the neuroanatomical connectivity matrix of the entire human brain obtained through MR diffusion tractography. Recent studies proposed a stochastic model built on top of this connectivity matrix that displays a phase transition and is able to reproduce several aspects of brain functioning when tuned to its critical point. This master's thesis aims to review recent results on this subject and to gain deeper insight into the model by studying the avalanche distribution and the dynamical range, and by investigating how the use of simulated connectivity matrices affects the dynamics. Furthermore, a theoretical description of the dynamics is proposed by introducing a master equation in order to understand the nature of the phase transition and the role of stochasticity.
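A hallmark signature that such critical models are tuned to reproduce is heavy-tailed avalanche statistics. The sketch below is purely illustrative and is not the connectome-based model described above: it uses a generic branching process (Poisson offspring, an assumed choice) to show how avalanche sizes change as the branching ratio sigma approaches the critical value 1.

```python
import numpy as np

def avalanche_size(sigma, rng, max_size=100_000):
    """Total size of one avalanche in a branching process where each
    active unit spawns Poisson(sigma) descendants on average."""
    active, size = 1, 0
    while active and size < max_size:
        size += active
        active = rng.poisson(sigma, size=active).sum()  # next generation
    return size

rng = np.random.default_rng(0)
subcritical = [avalanche_size(0.5, rng) for _ in range(2000)]
critical = [avalanche_size(1.0, rng) for _ in range(2000)]

# Subcritical mean size is 1 / (1 - sigma) = 2; at sigma = 1 the size
# distribution becomes heavy-tailed and the mean grows by orders of magnitude.
print(np.mean(subcritical), np.mean(critical))
```

The same qualitative shift — a sudden appearance of very large, scale-free events at the critical point — is what the avalanche-distribution analysis in the thesis probes on the real connectivity matrix.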

    Jensen’s force and the statistical mechanics of cortical asynchronous states

    Cortical networks are shaped by the combined action of excitatory and inhibitory interactions. Among other important functions, inhibition solves the problem of the all-or-none type of response that arises in purely excitatory networks, allowing the network to operate in regimes of moderate or low activity, between quiescent and saturated regimes. Here, we elucidate a noise-induced effect that we call "Jensen's force" – stemming from the combined effect of excitation/inhibition balance and network sparsity – which is responsible for generating a phase of self-sustained low activity in excitation-inhibition networks. The uncovered phase reproduces the main empirically observed features of cortical networks in the so-called asynchronous state, characterized by low, uncorrelated and highly irregular activity. The parsimonious model analyzed here allows us to resolve a number of long-standing issues, such as proving that activity can be self-sustained even in the complete absence of external stimuli or driving. The simplicity of our approach allows for a deep understanding of asynchronous states and of the phase transitions to the other standard phases the model exhibits, opening the door to reconciling the asynchronous-state and critical-state hypotheses within a unified framework. We argue that Jensen's forces are measurable experimentally and might be relevant in contexts beyond neuroscience. The study is supported by Fondazione Cariparma, under the TeachInParma Project. MAM thanks the Spanish Ministry of Science and the Agencia Española de Investigación (AEI) for financial support under grant FIS2017-84256-P (European Regional Development Fund (ERDF)), as well as the Consejería de Conocimiento, Investigación y Universidad, Junta de Andalucía and European Regional Development Fund (ERDF), ref. SOMM17/6105/UGR. V.B. and R.B. acknowledge funding from the INFN BIOPHYS project.
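The "Jensen's force" named in this abstract descends from Jensen's inequality: for a convex transfer function, input fluctuations alone push the mean output above the output of the mean input, producing a net drive that deterministic analysis misses. A minimal numerical illustration (the rectified-quadratic transfer function and Gaussian input are assumed here for simplicity, not taken from the paper's model):

```python
import numpy as np

# Jensen's inequality: for convex f, E[f(x)] > f(E[x]) whenever x fluctuates.
f = lambda x: np.maximum(x, 0.0) ** 2   # convex transfer (hypothetical choice)

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=100_000)  # zero-mean fluctuating input

# f applied to the mean input gives ~0, but the mean of f over the
# fluctuating input is ~0.5: a purely noise-induced net output.
print(f(x.mean()), f(x).mean())
```

The gap between the two printed numbers is the "force" in question: zero mean drive at the level of averages, yet a systematic positive output once fluctuations pass through the nonlinearity.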

    AI of Brain and Cognitive Sciences: From the Perspective of First Principles

    In recent years, we have witnessed the great success of AI in various applications, including image classification, game playing, protein structure analysis, language translation, and content generation. Despite these powerful applications, there are still many tasks in our daily life that are rather simple for humans but pose great challenges to AI. These include image and language understanding, few-shot learning, abstract concepts, and low-energy-cost computing. Thus, learning from the brain is still a promising way to shed light on the development of next-generation AI. The brain is arguably the only known intelligent machine in the universe, the product of evolution for animals surviving in the natural environment. At the behavioral level, psychology and the cognitive sciences have demonstrated that human and animal brains can execute very intelligent high-level cognitive functions. At the structural level, cognitive and computational neuroscience have unveiled that the brain has extremely complicated but elegant network forms to support its functions. Over the years, researchers have gathered knowledge about the structure and functions of the brain, and this process has recently accelerated along with the initiation of giant brain projects worldwide. Here, we argue that the general principles of brain function are the most valuable things to inspire the development of AI. These general principles are the standard rules by which the brain extracts, represents, manipulates, and retrieves information, and here we call them the first principles of the brain. This paper collects six such first principles: attractor networks, criticality, random networks, sparse coding, relational memory, and perceptual learning. On each topic, we review its biological background, fundamental properties, potential applications to AI, and future development. Comment: 59 pages, 5 figures, review article
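Of the six principles listed, the attractor network is the most compact to demonstrate: a Hopfield network stores patterns as fixed points of its dynamics and retrieves them from corrupted cues. The sketch below is a generic textbook example, not code from the paper; the network size, number of patterns, and noise level are arbitrary choices.

```python
import numpy as np

def hopfield_train(patterns):
    """Hebbian outer-product learning rule with zero self-coupling."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def hopfield_recall(W, state, steps=10):
    """Synchronous sign updates for a fixed step budget."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1.0          # break ties deterministically
    return state

rng = np.random.default_rng(1)
patterns = rng.choice([-1.0, 1.0], size=(3, 64))   # three random +/-1 patterns
W = hopfield_train(patterns)

cue = patterns[0].copy()
cue[rng.choice(64, size=8, replace=False)] *= -1   # corrupt 8 of 64 bits
recovered = hopfield_recall(W, cue)

overlap = (recovered == patterns[0]).mean()        # fraction of correct bits
print(overlap)
```

Because the stored load (3 patterns in 64 units) is well below the Hopfield capacity of roughly 0.14 patterns per unit, the dynamics fall back into the stored attractor and the overlap with the original pattern is near 1.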

    27th Annual Computational Neuroscience Meeting (CNS*2018): Part One



    The Cortex and the Critical Point

    How the cerebral cortex operates near a critical phase transition point for optimum performance. Individual neurons have limited computational powers, but when they work together, it is almost like magic. Firing synchronously and then breaking off to improvise by themselves, they can be paradoxically both independent and interdependent. This happens near the critical point: when neurons are poised between a phase where activity is damped and a phase where it is amplified, information processing is optimized and complex emergent activity patterns arise. The claim that neurons in the cortex work best when they operate near the critical point is known as the criticality hypothesis. In this book John Beggs—one of the pioneers of this hypothesis—offers an introduction to the critical point and its relevance to the brain. Drawing on recent experimental evidence, Beggs first explains the main ideas underlying the criticality hypothesis and emergent phenomena. He then discusses the critical point and its two main consequences—first, scale-free properties that confer optimum information processing; and second, universality, or the idea that complex emergent phenomena, like those seen near the critical point, can be explained by relatively simple models that are applicable across species and scales. Finally, Beggs considers future directions for the field, including research on homeostatic regulation, quasicriticality, and the expansion of the cortex and intelligence. An appendix provides technical material; many chapters include exercises that use freely available code and data sets.

    26th Annual Computational Neuroscience Meeting (CNS*2017): Part 1

