465 research outputs found

    Macroscopic behavior of populations of quadratic integrate-and-fire neurons subject to non-Gaussian white noise

    We study the macroscopic dynamics of populations of quadratic integrate-and-fire neurons subject to non-Gaussian noises; we argue that these noises must be alpha-stable whenever they are delta-correlated (white). For the case of additive-in-voltage noise, we derive the governing equation for the dynamics of the characteristic function of the membrane voltage distribution and construct a linear-in-noise perturbation theory. Specifically, for a recurrent network with global synaptic coupling, we theoretically calculate the observables: the population-mean membrane voltage and the firing rate. The theoretical results are underpinned by numerical simulations for homogeneous and heterogeneous populations. The possibility of generalizing the pseudocumulant approach to the case of a fractional α is examined for both irrational and fractional rational α. This examination seemingly suggests that the pseudocumulant approach, or its modifications, is employable only for the integer values α=1 (Cauchy noise) and α=2 (Gaussian noise) within the physically meaningful range (0;2]. Remarkably, the analysis for fractional α indirectly revealed that, for Gaussian noise, the minimal asymptotically rigorous model reduction must involve three pseudocumulants, and that the two-pseudocumulant model reduction is an artificial approximation. This explains the surprising gain in accuracy of three-pseudocumulant models over two-pseudocumulant ones reported in the literature. Comment: 16 pages, 4 figures
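As a rough illustration of the setup in this abstract, the sketch below simulates a population of uncoupled QIF neurons driven by Cauchy (α=1) white noise with Euler-Maruyama stepping and reports the two observables mentioned above. The finite spike/reset thresholds, the parameter values, and the function name are illustrative assumptions, not taken from the paper.

```python
import math
import random

def simulate_qif(n=200, t_end=2.0, dt=1e-3, eta=1.0, sigma=0.1,
                 v_peak=100.0, v_reset=-100.0, seed=0):
    """Euler-Maruyama simulation of n uncoupled QIF neurons, dV/dt = V^2 + eta,
    driven by additive Cauchy (alpha = 1) white noise. Returns the
    population-mean membrane voltage and the population firing rate.
    Illustrative sketch only; parameters are not from the paper."""
    rng = random.Random(seed)
    v = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    spikes = 0
    steps = int(t_end / dt)
    for _ in range(steps):
        for i in range(n):
            # Standard Cauchy deviate via inverse CDF: tan(pi*(u - 1/2)).
            # For alpha = 1 the noise increment scales linearly with dt.
            xi = math.tan(math.pi * (rng.random() - 0.5))
            v[i] += dt * (v[i] * v[i] + eta) + sigma * dt * xi
            if v[i] >= v_peak:       # spike-and-reset approximates v -> +inf
                v[i] = v_reset
                spikes += 1
    mean_v = sum(v) / n
    rate = spikes / (n * t_end)      # population-mean firing rate
    return mean_v, rate
```

The finite thresholds ±100 stand in for the ±∞ of the exact QIF phase description; heavy-tailed Cauchy jumps that overshoot the peak are simply counted as spikes.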

    Rhythmogenic and Premotor Functions of Dbx1 Interneurons in the Pre-Bötzinger Complex and Reticular Formation: Modeling and Simulation Studies

    Breathing in mammals depends on rhythms that originate from the preBötzinger complex (preBötC) of the ventral medulla and a network of brainstem and spinal premotor neurons. The rhythm-generating core of the preBötC, as well as some premotor circuits, consists of interneurons derived from Dbx1-expressing precursors, but the structure and function of these networks remain incompletely understood. We previously developed a cell-specific detection and laser ablation system to interrogate respiratory network structure and function in a slice model of breathing that retains the preBötC, premotor circuits, and the respiratory-related hypoglossal (XII) motor nucleus. In spontaneously rhythmic slices, cumulative ablation of Dbx1 preBötC neurons decreased XII motor output by half after only a few cell deletions, and then decelerated and terminated rhythmic function altogether as the tally increased. In contrast, cumulatively deleting Dbx1 premotor neurons decreased XII motor output monotonically but affected neither frequency nor rhythmic function, regardless of the ablation tally. This dissertation presents several network modeling and cellular modeling studies that further our understanding of how the respiratory rhythm is generated and transmitted to the XII motor nucleus. First, we propose that cumulative deletions of Dbx1 preBötC neurons preclude rhythm by diminishing the amount of excitatory inward current or disturbing the process of recurrent excitation, rather than by structurally breaking down the topological network. Second, we establish a feasible configuration for the neural circuits: an Erdős-Rényi preBötC network and a small-world reticular premotor network with interconnections following an anti-preferential attachment rule, which is the only configuration that produces outcomes consistent with previous experimental benchmarks.
Furthermore, since the performance of neuronal network simulations is, to some extent, affected by the nature of the cellular model, we aim to develop a more realistic cellular model based on the one adopted in our previous network studies, accounting for some recent experimental findings on rhythmogenic preBötC neurons.
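The circuit configuration named above can be sketched in a few lines: an Erdős-Rényi graph for the preBötC, a Watts-Strogatz ring for the small-world premotor population, and cross-population links drawn with weight inversely proportional to current in-degree (one plausible reading of "anti-preferential attachment"). All sizes and probabilities here are illustrative, not the fitted values from the dissertation.

```python
import random

def build_respiratory_network(n_pbc=300, n_pre=300, p_er=0.05,
                              k_sw=6, p_rewire=0.1, n_links=300, seed=1):
    """Sketch of the circuit configuration: an Erdos-Renyi preBotC graph,
    a Watts-Strogatz small-world premotor graph, and preBotC -> premotor
    links drawn anti-preferentially (low-degree targets favoured)."""
    rng = random.Random(seed)
    # Erdos-Renyi: each ordered pair connected independently with prob p_er
    pbc = {i: set() for i in range(n_pbc)}
    for i in range(n_pbc):
        for j in range(n_pbc):
            if i != j and rng.random() < p_er:
                pbc[i].add(j)
    # Watts-Strogatz ring: k_sw nearest neighbours, rewired with p_rewire
    pre = {i: set() for i in range(n_pre)}
    for i in range(n_pre):
        for d in range(1, k_sw // 2 + 1):
            j = (i + d) % n_pre
            if rng.random() < p_rewire:
                j = rng.randrange(n_pre)
            if j != i:
                pre[i].add(j)
                pre[j].add(i)
    # Anti-preferential attachment: target chosen with weight 1/in-degree,
    # so sparsely innervated premotor neurons are picked first.
    indeg = [1] * n_pre                      # +1 offset avoids zero weights
    links = []
    for _ in range(n_links):
        src = rng.randrange(n_pbc)
        weights = [1.0 / d for d in indeg]
        tgt = rng.choices(range(n_pre), weights=weights)[0]
        links.append((src, tgt))
        indeg[tgt] += 1
    return pbc, pre, links
```

The anti-preferential rule is the mirror image of Barabási-Albert preferential attachment: it flattens the in-degree distribution of the premotor layer instead of producing hubs.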

    Real-time Artificial Intelligence for Accelerator Control: A Study at the Fermilab Booster

    We describe a method for precisely regulating the gradient magnet power supply at the Fermilab Booster accelerator complex using a neural network trained via reinforcement learning. We demonstrate preliminary results by training a surrogate machine-learning model on real accelerator data to emulate the Booster environment, and using this surrogate model in turn to train the neural network for its regulation task. We additionally show how the neural networks to be deployed for control purposes may be compiled to execute on field-programmable gate arrays. This capability is important for operational stability in complicated environments such as an accelerator facility. Comment: 16 pages, 10 figures. Submitted to Physical Review Accelerators and Beams. For the associated dataset and data sheet see http://doi.org/10.5281/zenodo.408898
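The surrogate-then-agent training loop can be caricatured with a tabular Q-learning agent regulating a toy drifting signal toward a setpoint. The environment, state discretisation, and every hyperparameter below are invented for illustration; they are unrelated to the actual Booster surrogate model or the FPGA-deployable network.

```python
import random

class SurrogatePowerSupply:
    """Toy stand-in for a learned surrogate environment: a 'magnet current'
    drifts randomly and the agent nudges it toward a setpoint."""
    def __init__(self, setpoint=0.0, drift=0.3, seed=0):
        self.setpoint, self.drift = setpoint, drift
        self.rng = random.Random(seed)
        self.x = 0.0

    def reset(self):
        self.x = self.rng.uniform(-2.0, 2.0)
        return self._state()

    def _state(self):
        # Discretise the tracking error into 9 bins for tabular Q-learning.
        err = max(-2.0, min(2.0, self.x - self.setpoint))
        return int(round(err * 2)) + 4

    def step(self, action):
        # action in {0, 1, 2} maps to a correction of {-0.25, 0, +0.25}
        self.x += 0.25 * (action - 1) + self.rng.uniform(-self.drift, self.drift)
        reward = -abs(self.x - self.setpoint)   # penalise tracking error
        return self._state(), reward

def train(episodes=300, steps=50, alpha=0.2, gamma=0.95, eps=0.1, seed=1):
    """Epsilon-greedy tabular Q-learning against the surrogate."""
    env, rng = SurrogatePowerSupply(seed=seed), random.Random(seed)
    q = [[0.0] * 3 for _ in range(9)]
    for _ in range(episodes):
        s = env.reset()
        for _ in range(steps):
            if rng.random() < eps:
                a = rng.randrange(3)
            else:
                a = max(range(3), key=lambda k: q[s][k])
            s2, r = env.step(a)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

The point of the sketch is the architecture: the agent never touches the "real machine", only the surrogate, exactly as in the staged training the abstract describes.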

    Symmetry in Chaotic Systems and Circuits

    Symmetry can play an important role in the field of nonlinear systems, and especially in the design of nonlinear circuits that produce chaos. Therefore, this Special Issue, titled “Symmetry in Chaotic Systems and Circuits”, presents the latest scientific advances in nonlinear chaotic systems and circuits that introduce various kinds of symmetries. Applications of chaotic systems and circuits with symmetries, or with a deliberate lack of symmetry, are also presented. The volume contains 14 published papers from authors around the world, reflecting the high impact of this Special Issue.
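A concrete example of the kind of symmetry this Special Issue concerns: the Lorenz system is equivariant under (x, y, z) -> (-x, -y, z), so negating the initial x and y mirrors the entire orbit. A minimal sketch with explicit Euler stepping (step size and parameters are the usual textbook choices):

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit-Euler step of the Lorenz system, a classic chaotic
    flow that is equivariant under (x, y, z) -> (-x, -y, z)."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def trajectory(state, n=500, **kw):
    """Iterate the map n times and collect the orbit."""
    out = [state]
    for _ in range(n):
        state = lorenz_step(state, **kw)
        out.append(state)
    return out

# The symmetry in action: mirroring the initial condition mirrors the
# whole orbit, point by point, while z is left unchanged.
mirror = lambda s: (-s[0], -s[1], s[2])
t1 = trajectory((1.0, 1.0, 1.0))
t2 = trajectory(mirror((1.0, 1.0, 1.0)))
```

Because negation is exact in floating point and the right-hand side is odd in (x, y), the mirrored orbit matches to machine precision, not just approximately.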

    Emergent Properties of Interacting Populations of Spiking Neurons

    Dynamic neuronal networks are a key paradigm of increasing importance in brain research, concerned with the functional analysis of biological neuronal networks and, at the same time, with the synthesis of artificial brain-like systems. In this context, neuronal network models serve as mathematical tools to understand the function of brains, but they might as well develop into future tools for enhancing certain functions of our nervous system. Here, we present and discuss our recent achievements in developing multiplicative point processes into a viable mathematical framework for spiking network modeling. The perspective is that the dynamic behavior of these neuronal networks is faithfully reflected by a set of non-linear rate equations describing all interactions on the population level. These equations are similar in structure to the Lotka-Volterra equations, well known for their use in modeling predator-prey relations in population biology, though abundant applications to economic theory have also been described. We present a number of biologically relevant examples of spiking network function that can be studied with the help of the aforementioned correspondence between spike trains and specific systems of non-linear coupled ordinary differential equations. We claim that, enabled by the use of multiplicative point processes, we can make essential contributions to a more thorough understanding of the dynamical properties of interacting neuronal populations.
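The Lotka-Volterra-type rate equations mentioned above, dr_i/dt = r_i (a_i + ÎŁ_j w_ij r_j), are easy to integrate directly. The sketch below uses a classic two-population pairing, an excitatory "prey" population driving an inhibitory "predator" population, as an illustrative example; the coefficients are the textbook predator-prey values, not parameters from the paper.

```python
def lv_rates(r, a, w):
    """Right-hand side of Lotka-Volterra-type rate equations
    dr_i/dt = r_i * (a_i + sum_j w_ij * r_j), the population-level
    description referred to in the text."""
    n = len(r)
    return [r[i] * (a[i] + sum(w[i][j] * r[j] for j in range(n)))
            for i in range(n)]

def integrate(r0, a, w, dt=1e-3, steps=20000):
    """Plain explicit-Euler integration of the rate equations."""
    r = list(r0)
    traj = [list(r)]
    for _ in range(steps):
        dr = lv_rates(r, a, w)
        r = [ri + dt * di for ri, di in zip(r, dr)]
        traj.append(list(r))
    return traj

# Excitatory population r1 grows on its own (a1 > 0) and is suppressed by
# r2; inhibitory population r2 decays on its own (a2 < 0) and is driven
# by r1 -- the predator-prey motif, which yields sustained oscillations.
a = [1.0, -1.0]
w = [[0.0, -1.0],
     [1.0,  0.0]]
traj = integrate([1.5, 0.5], a, w)
```

The resulting rates oscillate around the fixed point (1, 1), the rate-equation analogue of rhythmic population activity.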

    Mathemagical Schemas for Creative Psych(a)ology


    Complexity, Emergent Systems and Complex Biological Systems: Complex Systems Theory and Biodynamics. [Edited book by I.C. Baianu, with listed contributors (2011)]

    An overview is presented of system dynamics (the study of the behaviour of complex systems), dynamical systems in mathematics, dynamic programming in computer science and control theory, complex systems biology, neurodynamics, and psychodynamics.

    Recurrent Neural Networks: Error Surface Analysis and Improved Training

    Recurrent neural networks (RNNs) have powerful computational abilities and can be used in a variety of applications; however, training these networks is still a difficult problem. One reason RNN training is difficult, especially with batch, gradient-based methods, is the existence of spurious valleys in the error surface. In this work, a mathematical framework was developed to analyze the spurious valleys that appear in most practical RNN architectures, regardless of their size. The insights gained from this analysis suggested a new procedure for improving the training process. The procedure uses a batch training method based on a modified version of the Levenberg-Marquardt algorithm. This new procedure mitigates the effects of spurious valleys in the error surface of RNNs. The results on a variety of test problems show that the new procedure is consistently better than existing training algorithms (both batch and stochastic) for training RNNs. Also, a framework for neural network controllers based on the model reference adaptive control (MRAC) architecture was developed. This architecture has been used before, but the difficulties in training RNNs have limited its use. The new training procedures have made MRAC more attractive. The updated MRAC framework is very flexible, and incorporates disturbance rejection, regulation, and tracking. The simulation and testing results on several real systems show that this type of neural control is well-suited for highly nonlinear plants.
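The core of the (unmodified) Levenberg-Marquardt algorithm on which the procedure builds is the damped Gauss-Newton update (JᔀJ + ÎŒI)Δ = Jᔀr, with the damping ÎŒ lowered after accepted steps and raised after rejected ones. The following minimal two-parameter sketch illustrates that update on a toy least-squares fit; it is a generic LM loop, not the dissertation's modified RNN training procedure.

```python
def levenberg_marquardt(f, jac, p, xs, ys, mu=1e-2, iters=50):
    """Minimal Levenberg-Marquardt loop for a 2-parameter least-squares
    fit: solve (J^T J + mu*I) delta = J^T r, accept the step only if it
    lowers the cost, and adapt the damping mu accordingly."""
    for _ in range(iters):
        r = [ys[i] - f(p, xs[i]) for i in range(len(xs))]
        J = [jac(p, x) for x in xs]          # rows: (df/dp0, df/dp1)
        # Normal equations in 2-D, solved by the closed-form 2x2 inverse.
        a = sum(j[0] * j[0] for j in J) + mu
        b = sum(j[0] * j[1] for j in J)
        d = sum(j[1] * j[1] for j in J) + mu
        g0 = sum(J[i][0] * r[i] for i in range(len(xs)))
        g1 = sum(J[i][1] * r[i] for i in range(len(xs)))
        det = a * d - b * b
        delta = ((d * g0 - b * g1) / det, (a * g1 - b * g0) / det)
        new_p = (p[0] + delta[0], p[1] + delta[1])
        old_cost = sum(ri * ri for ri in r)
        new_cost = sum((ys[i] - f(new_p, xs[i])) ** 2 for i in range(len(xs)))
        if new_cost < old_cost:
            p, mu = new_p, mu * 0.5          # accept step, relax damping
        else:
            mu *= 2.0                        # reject step, increase damping
    return p
```

Large ÎŒ turns the step into cautious gradient descent (useful inside a spurious valley); small ÎŒ recovers the fast Gauss-Newton step near a good minimum.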