
    Existence and Stability of Standing Pulses in Neural Networks: II Stability

    We analyze the stability of standing pulse solutions of a neural network integro-differential equation. The network consists of a coarse-grained layer of neurons synaptically connected by lateral inhibition with a non-saturating nonlinear gain function. When two standing single-pulse solutions coexist, the small pulse is unstable and the large pulse is stable. The large single pulse is bistable with the "all-off" state. This bistable localized activity may have strong implications for the mechanism underlying working memory. We show that dimple pulses have stability properties similar to those of large pulses, but double pulses are unstable.
    Comment: 31 pages, 16 figures, submitted to SIAM Journal on Applied Dynamical Systems
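
    For orientation, the class of equations studied in this pair of papers is the single-layer neural field model. A representative form, with notation assumed here rather than quoted from the paper (u the synaptic activity, w the lateral-inhibition kernel, f the non-saturating gain), is

        \partial_t u(x,t) = -u(x,t) + \int_{-\infty}^{\infty} w(x - y)\, f\big(u(y,t)\big)\, dy .

    Standing pulses are stationary, spatially localized solutions of this equation; their stability is assessed by linearizing about such a solution.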

    Existence and Stability of Standing Pulses in Neural Networks: I Existence

    We consider the existence of standing pulse solutions of a neural network integro-differential equation. These pulses are bistable with the zero state and may be an analogue of short-term memory in the brain. The network consists of a single layer of neurons synaptically connected by lateral inhibition. Our work extends the classic Amari result by considering a non-saturating gain function. We consider a specific connectivity function for which the existence conditions for single pulses can be reduced to the solution of an algebraic system. In addition to the two localized pulse solutions found by Amari, we find that three or more pulses can coexist. We also show the existence of nonconvex "dimpled" pulses and double pulses. We map out the pulse shapes and maximum firing rates for different connection weights and gain functions.
    Comment: 31 pages, 29 figures, submitted to SIAM Journal on Applied Dynamical Systems
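
    The classic Amari result that this paper extends concerns the same equation with a Heaviside gain f(u) = H(u - \theta). In that setting (with the resting input absorbed into the threshold) a stationary pulse of width a exists exactly when the integrated kernel meets the threshold,

        W(a) := \int_0^a w(x)\, dx = \theta ,

    which for a Mexican-hat kernel yields at most the two pulse widths mentioned in the abstract. With a non-saturating gain the analogous conditions become the algebraic system referred to above, and three or more pulses can coexist.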

    A machine learning framework for data driven acceleration of computations of differential equations

    We propose a machine learning framework to accelerate numerical computations of time-dependent ODEs and PDEs. Our method is based on recasting (generalizations of) existing numerical methods as artificial neural networks, with a set of trainable parameters. These parameters are determined in an offline training process by (approximately) minimizing suitable (possibly non-convex) loss functions by (stochastic) gradient descent methods. The proposed algorithm is designed to be always consistent with the underlying differential equation. Numerical experiments involving both linear and non-linear ODE and PDE model problems demonstrate a significant gain in computational efficiency over standard numerical methods.
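
    As an illustration of the general idea (not the paper's actual architecture), the sketch below treats the weight theta of a two-stage explicit scheme as the trainable parameter and fits it offline against a finer reference solution of a hypothetical model problem u' = -2u. The scheme is consistent with the ODE for every theta, mirroring the consistency-by-construction property described in the abstract.

        # Hypothetical model problem (an assumption for illustration): u' = f(u) = -2u.
        def f(u):
            return -2.0 * u

        def step(u, dt, theta):
            # One coarse step of a two-stage explicit scheme.  Any theta in [0, 1] is
            # consistent with u' = f(u): theta = 0 is forward Euler, theta = 0.5 is Heun's method.
            k1 = f(u)
            k2 = f(u + dt * k1)
            return u + dt * ((1.0 - theta) * k1 + theta * k2)

        def loss(theta, u0, dt, u_ref):
            # Mean squared mismatch between the coarse trajectory and the reference samples.
            u, err = u0, 0.0
            for target in u_ref:
                u = step(u, dt, theta)
                err += (u - target) ** 2
            return err / len(u_ref)

        # Offline "training data": a much finer explicit-Euler run, sampled once per coarse step.
        dt, n_coarse, n_fine = 0.2, 10, 200
        u, u_ref = 1.0, []
        for _ in range(n_coarse):
            for _ in range(n_fine):
                u += (dt / n_fine) * f(u)
            u_ref.append(u)

        # Offline training: plain gradient descent on theta with a central finite-difference
        # gradient, standing in for the (stochastic) gradient descent of the abstract.
        theta, lr, eps = 0.0, 2.0, 1e-6
        for _ in range(500):
            g = (loss(theta + eps, 1.0, dt, u_ref) - loss(theta - eps, 1.0, dt, u_ref)) / (2.0 * eps)
            theta -= lr * g

        print("trained theta:", round(theta, 4))
        print("loss at trained theta:", loss(theta, 1.0, dt, u_ref))

    After training, the coarse scheme tracks the fine reference far more closely than forward Euler at the same step size, which is the kind of efficiency gain the abstract reports; the actual framework generalizes this to richer parameterizations and to PDE discretizations.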

    Local/global analysis of the stationary solutions of some neural field equations

    Neural or cortical fields are continuous assemblies of mesoscopic models, also called neural masses, of neural populations that are fundamental in the modeling of macroscopic parts of the brain. Neural fields are described by nonlinear integro-differential equations. The solutions of these equations represent the state of activity of these populations when submitted to inputs from neighbouring brain areas. Understanding the properties of these solutions is essential in advancing our understanding of the brain. In this paper we study the dependence of the stationary solutions of the neural field equations on the stiffness of the nonlinearity and the contrast of the external inputs. This is done by using degree theory and bifurcation theory in the context of functional, in particular infinite-dimensional, spaces. The joint use of these two theories allows us to make new detailed predictions about the global and local behaviours of the solutions. We also provide a generic finite-dimensional approximation of these equations which allows us to study two models in great detail. The first model is a neural mass model of a cortical hypercolumn of orientation-sensitive neurons, the ring model. The second model is a general neural field model where the spatial connectivity is described by heterogeneous Gaussian-like functions.
    Comment: 38 pages, 9 figures
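
    Concretely, a stationary solution V of a neural field equation of this type satisfies a fixed-point problem of roughly the following form (notation illustrative, not the paper's):

        V(x) = \int_{\Omega} w(x, y)\, S\big(\sigma\, (V(y) - \theta)\big)\, dy + I_{\mathrm{ext}}(x) ,

    where the slope \sigma of the sigmoid S is the stiffness of the nonlinearity and the amplitude of I_{\mathrm{ext}} sets the contrast of the external input. Roughly speaking, degree theory constrains the global solution set of this equation, while bifurcation theory describes how branches of solutions appear and merge locally as \sigma and the contrast vary.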

    Differentiable Genetic Programming

    We introduce the use of high-order automatic differentiation, implemented via the algebra of truncated Taylor polynomials, in genetic programming. Using the Cartesian Genetic Programming encoding we obtain a high-order Taylor representation of the program output that is then used to back-propagate errors during learning. The resulting machine learning framework is called differentiable Cartesian Genetic Programming (dCGP). In the context of symbolic regression, dCGP offers a new approach to the long-unsolved problem of constant representation in GP expressions. On several problems of increasing complexity we find that dCGP is able to find the exact form of the symbolic expression as well as the values of the constants. We also demonstrate the use of dCGP to solve a large class of differential equations and to find prime integrals of dynamical systems, presenting, in both cases, results that confirm the efficacy of our approach.
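
    The core ingredient, the algebra of truncated Taylor polynomials, is easy to illustrate. The minimal sketch below is a toy single-variable version (an assumption for illustration, not the dCGP implementation): it propagates truncated Taylor coefficients through a candidate expression so that high-order derivatives of the program output come out for free.

        class Taylor:
            # Truncated Taylor polynomial: c[k] is the coefficient of dx**k, for k <= order.
            def __init__(self, coeffs, order):
                self.order = order
                self.c = (list(coeffs) + [0.0] * (order + 1))[:order + 1]

            def __add__(self, other):
                other = _lift(other, self.order)
                return Taylor([a + b for a, b in zip(self.c, other.c)], self.order)

            def __mul__(self, other):
                # Cauchy product of the coefficient sequences, truncated at the working order.
                other = _lift(other, self.order)
                c = [0.0] * (self.order + 1)
                for i in range(self.order + 1):
                    for j in range(self.order + 1 - i):
                        c[i + j] += self.c[i] * other.c[j]
                return Taylor(c, self.order)

            __radd__ = __add__
            __rmul__ = __mul__

        def _lift(value, order):
            return value if isinstance(value, Taylor) else Taylor([float(value)], order)

        # Expand around x0 = 2: the input variable is the polynomial x0 + dx, i.e. coefficients [2, 1].
        x = Taylor([2.0, 1.0], order=2)

        # A candidate program, as a CGP expression might encode it: g(x) = 3*x*x + x.
        g = 3.0 * x * x + x

        print(g.c[0])         # g(2)   = 14
        print(g.c[1])         # g'(2)  = 13
        print(2.0 * g.c[2])   # g''(2) = 6   (the k-th coefficient is g^(k)(2) / k!)

    In dCGP the same machinery runs in several variables (one per input and per constant), so derivatives of the error with respect to the ephemeral constants are available and can be used to fit the constants while the expression structure is evolved.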