166 research outputs found

    Emergent Properties of Interacting Populations of Spiking Neurons

    Dynamic neuronal networks are a key paradigm of increasing importance in brain research, concerned with the functional analysis of biological neuronal networks and, at the same time, with the synthesis of artificial brain-like systems. In this context, neuronal network models serve as mathematical tools for understanding the function of brains, but they might also develop into future tools for enhancing certain functions of our nervous system. Here, we present and discuss our recent achievements in developing multiplicative point processes into a viable mathematical framework for spiking network modeling. The perspective is that the dynamic behavior of these neuronal networks is faithfully reflected by a set of non-linear rate equations describing all interactions on the population level. These equations are similar in structure to the Lotka-Volterra equations, well known for their use in modeling predator-prey relations in population biology, though abundant applications to economic theory have also been described. We present a number of biologically relevant examples of spiking network function, which can be studied with the help of the aforementioned correspondence between spike trains and specific systems of non-linear coupled ordinary differential equations. We claim that, enabled by the use of multiplicative point processes, we can make essential contributions to a more thorough understanding of the dynamical properties of interacting neuronal populations.
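    The population-level rate equations mentioned above have the generalized Lotka-Volterra form dx_i/dt = x_i (r_i + Σ_j A_ij x_j). A minimal sketch of such a system is given below; the two-population predator-prey coefficients are illustrative textbook values, not parameters taken from this work.

```python
import numpy as np

def lotka_volterra(x0, r, A, dt=0.001, steps=20000):
    """Integrate dx_i/dt = x_i * (r_i + sum_j A_ij x_j) with forward Euler."""
    x = np.array(x0, dtype=float)
    traj = np.empty((steps, len(x)))
    for t in range(steps):
        x = x + dt * x * (r + A @ x)
        traj[t] = x
    return traj

# Classic predator-prey pair: prey grows on its own (r = +1),
# the predator decays without prey (r = -1), and the interaction
# matrix couples them with opposite signs.
r = np.array([1.0, -1.0])
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
traj = lotka_volterra([1.5, 0.5], r, A)  # oscillates around (1, 1)
```

    The multiplicative structure (each rate multiplied by its own population) is what keeps trajectories in the positive orthant, mirroring the non-negativity of population firing rates.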

    Dynamical systems applied to consciousness and brain rhythms in a neural network

    This thesis applies the great advances of modern dynamical systems theory (DST) to consciousness. Consciousness, or subjective experience, is approached here in two different ways: from the global dynamics of the human brain and from integrated information theory (IIT), one of the most prestigious current theories of consciousness. Before that, a study of a numerical simulation of a network of individual neurons justifies the use of the Lotka-Volterra model for neuron assemblies in both applications. All these proposals are developed following this scheme:
    • First, summarizing the structure, methods and goals of the thesis.
    • Second, introducing a general background in neuroscience and the global dynamics of the human brain to better understand those applications.
    • Third, conducting a study of a numerically simulated network of neurons. This network, which displays brain rhythms, can be employed, among other objectives, to justify the use of the Lotka-Volterra model for applications.
    • Fourth, summarizing concepts from mathematical DST, such as the global attractor and its informational structure, in addition to their particularization to a Lotka-Volterra system.
    • Fifth, introducing the new mathematical concepts of model transform and instantaneous parameters, which allow the application of simple mathematical models such as Lotka-Volterra to complex empirical systems such as the human brain.
    • Sixth, using the model transform, and specifically the Lotka-Volterra transform, to calculate global attractors and informational structures in the global dynamics of the human brain.
    • Seventh, presenting IIT, developed by G. Tononi and probably the most prestigious theory of consciousness.
    • Eighth, using informational structures to develop a continuous version of IIT.
    • Ninth, establishing some final conclusions and commenting on new open questions arising from this work.
    These nine points correspond to the nine chapters of this thesis.

    Exponential multistability of memristive Cohen-Grossberg neural networks with stochastic parameter perturbations

    © 2020 Elsevier Ltd. All rights reserved. This manuscript is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Licence (http://creativecommons.org/licenses/by-nc-nd/4.0/). Because instability is easily induced by parameter disturbances in network systems, this paper investigates the multistability of memristive Cohen-Grossberg neural networks (MCGNNs) under stochastic parameter perturbations. It is demonstrated that the stable equilibrium points of MCGNNs can be flexibly located in the odd-sequence or even-sequence regions. Some sufficient conditions are derived to ensure the exponential multistability of MCGNNs under parameter perturbations. It is found that there exist at least (w+2)^l (or (w+1)^l) exponentially stable equilibrium points in the odd-sequence (or even-sequence) regions. Two numerical examples are given to verify the correctness and effectiveness of the obtained results. Peer reviewed.
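    The coexistence of multiple exponentially stable equilibria established in the paper can be illustrated, in a far simpler setting, by a one-neuron rate model with a tanh nonlinearity. The model and parameters below are a generic bistability sketch, not the memristive Cohen-Grossberg system analysed in the paper.

```python
import numpy as np

def settle(x0, w=2.0, dt=0.01, steps=5000):
    """Integrate dx/dt = -x + w*tanh(x) with forward Euler.

    For w > 1 the origin is unstable and two stable equilibria
    +/- x* (with x* = w*tanh(x*)) appear: the simplest multistability.
    """
    x = float(x0)
    for _ in range(steps):
        x += dt * (-x + w * np.tanh(x))
    return x

hi = settle(0.5)   # converges to the positive equilibrium
lo = settle(-0.5)  # converges to the negative equilibrium
```

    Which equilibrium the state reaches depends only on the initial condition's basin, which is the mechanism that, in the n-dimensional MCGNN setting, yields the exponentially many coexisting stable states counted in the abstract.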

    Informational structures and informational fields as a prototype for the description of postulates of the integrated information theory

    Informational Structures (IS) and Informational Fields (IF) have been recently introduced to deal with a continuous dynamical-systems-based approach to Integrated Information Theory (IIT). IS and IF contain all the geometrical and topological constraints in the phase space. This allows one to characterize all the past and future dynamical scenarios for a system in any particular state. In this paper, we develop further steps in this direction, describing a proper continuous framework for an abstract formulation, which could serve as a prototype of the IIT postulates. Funding: National Science Center of Poland (UMO-2016/22/A/ST1/00077); Junta de Andalucía; Ministerio de Economía, Industria y Competitividad (MINECO), España.

    Time delays and stimulus-dependent pattern formation in periodic environments in isolated neurons

    The dynamical characteristics of a single isolated Hopfield-type neuron with dissipation and time-delayed self-interaction under periodic stimuli are studied. Sufficient conditions for the hetero-associative stable encoding of periodic external stimuli are obtained. Both discrete and continuously distributed delays are included.
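    A model of this kind — a dissipative neuron with delayed self-interaction driven by a periodic stimulus — is a delay differential equation, which can be integrated with a history buffer and a simple Euler scheme. The equation form and all parameter values below are illustrative assumptions, not those of the paper.

```python
import numpy as np

def delayed_neuron(a=1.0, b=0.5, tau=1.0, amp=1.0, omega=2 * np.pi,
                   dt=0.001, steps=20000):
    """Euler integration of dx/dt = -a*x(t) + b*tanh(x(t - tau)) + amp*cos(omega*t).

    The first `d` entries of the buffer hold the (zero) history for t <= 0.
    """
    d = int(tau / dt)            # delay expressed in integration steps
    x = np.zeros(steps + d)      # state buffer including the history segment
    for t in range(d, steps + d - 1):
        time = (t - d) * dt      # simulation clock starts at 0
        x[t + 1] = x[t] + dt * (-a * x[t] + b * np.tanh(x[t - d])
                                + amp * np.cos(omega * time))
    return x[d:]

traj = delayed_neuron()
```

    With b < a the delayed feedback is dominated by the dissipation, so the transient decays and the neuron entrains to a bounded periodic response at the stimulus frequency — the stable encoding of the periodic stimulus that the sufficient conditions above guarantee.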

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant Nos. NSF/EIA-0130708 and PHY-0414174; NIH Grant Nos. 1 R01 NS50945 and NS40110; MEC BFI2003-07276; and Fundación BBVA.