
    Monostability and multistability of genetic regulatory networks with different types of regulation functions

    Monostability and multistability are two important topics in synthetic biology and systems biology. In this paper, both monostability and multistability are analyzed in a unified framework by applying control theory and mathematical tools. Genetic regulatory networks (GRNs) with multiple time-varying delays and different types of regulation functions are considered. By putting forward a general sector-like regulation function and utilizing up-to-date techniques, a novel Lyapunov–Krasovskii functional is introduced to achieve delay dependence and thereby reduce conservatism. A new condition for the general stability of a GRN is then proposed in the form of linear matrix inequalities (LMIs) that depend on the upper and lower bounds of the delays. The general stability conditions are applicable to several frequently used regulation functions, and the existing results for the monostability of GRNs are shown to be special cases of the main results. Five examples are employed to illustrate the applicability and usefulness of the developed theoretical results.

    This work was supported in part by the Biotechnology and Biological Sciences Research Council (BBSRC) of the U.K. under Grant BB/C506264/1, the Royal Society of the U.K., the National Natural Science Foundation of China under Grants 60504008 and 60804028, the Program for New Century Excellent Talents in Universities of China, and the Alexander von Humboldt Foundation of Germany.
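    The role of sector-like (e.g. Hill-type) regulation functions in producing multistability can be illustrated with a minimal sketch. The following is not the paper's model: it simulates a classic two-gene mutual-repression network with assumed parameters, showing that two different initial conditions settle onto two distinct stable equilibria (bistability).

```python
import numpy as np

def hill_repression(u, a=4.0, n=2):
    """Hill-type repressive regulation function, one member of the
    sector-bounded family of nonlinearities (parameters assumed)."""
    return a / (1.0 + u**n)

def simulate_toggle(x0, y0, dt=0.01, steps=5000):
    """Euler integration of a two-gene mutual-repression network:
    each gene's synthesis is repressed by the other, with linear decay."""
    x, y = x0, y0
    for _ in range(steps):
        dx = hill_repression(y) - x
        dy = hill_repression(x) - y
        x, y = x + dt * dx, y + dt * dy
    return x, y

# Two initial conditions converge to two different stable equilibria,
# i.e. the network is bistable for these parameters.
xa, ya = simulate_toggle(3.0, 0.1)   # settles on the high-x / low-y branch
xb, yb = simulate_toggle(0.1, 3.0)   # settles on the low-x / high-y branch
```

    With a = 4 and n = 2 the symmetric equilibrium is unstable, so the trajectories separate onto the two asymmetric stable states; a monostable regime is obtained by weakening the repression (smaller a).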

    Asymptotic stability for neural networks with mixed time-delays: The discrete-time case

    Copyright 2009 Elsevier Ltd. This paper is concerned with the stability analysis problem for a new class of discrete-time recurrent neural networks with mixed time-delays. Mixed time-delays consisting of both discrete and distributed time-delays are addressed, for the first time, in the asymptotic stability analysis of discrete-time neural networks. The activation functions are not required to be differentiable or strictly monotonic. The existence of the equilibrium point is first proved under mild conditions. By constructing a new Lyapunov–Krasovskii functional, a linear matrix inequality (LMI) approach is developed to establish sufficient conditions for the discrete-time neural networks to be globally asymptotically stable. As an extension, we further consider the stability analysis problem for the same class of neural networks but with state-dependent stochastic disturbances. All the conditions obtained are expressed in terms of LMIs whose feasibility can be easily checked using the numerically efficient Matlab LMI Toolbox. A simulation example is presented to show the usefulness of the derived LMI-based stability condition.

    This work was supported in part by the Biotechnology and Biological Sciences Research Council (BBSRC) of the UK under Grants BB/C506264/1 and 100/EGM17735, the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grants GR/S27658/01 and EP/C524586/1, an International Joint Project sponsored by the Royal Society of the UK, the Natural Science Foundation of Jiangsu Province of China under Grant BK2007075, the National Natural Science Foundation of China under Grant 60774073, and the Alexander von Humboldt Foundation of Germany.
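    The Lyapunov machinery behind such LMI conditions can be sketched in a drastically simplified, delay-free special case (an illustration, not the paper's delayed model): for a linear discrete-time system x(k+1) = A x(k), existence of P > 0 with AᵀPA − P < 0 certifies global asymptotic stability. Here P is obtained by fixed-point iteration on the discrete Lyapunov equation rather than by an LMI solver; the matrices are assumed for illustration.

```python
import numpy as np

# Delay-free linear special case: x(k+1) = A x(k).
# Solve the discrete Lyapunov equation A^T P A - P = -Q by the
# fixed-point iteration P <- A^T P A + Q, which converges whenever
# the spectral radius of A is below 1.
A = np.array([[0.5, 0.2],
              [-0.1, 0.6]])
Q = np.eye(2)

P = np.zeros_like(Q)
for _ in range(200):
    P = A.T @ P @ A + Q

# P symmetric positive definite certifies global asymptotic stability.
eigs = np.linalg.eigvalsh((P + P.T) / 2)
residual = np.linalg.norm(A.T @ P @ A - P + Q)
```

    The full results in the paper replace this single matrix inequality with delay-range-dependent LMIs built from a richer Lyapunov–Krasovskii functional.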

    Robust synchronization of an array of coupled stochastic discrete-time delayed neural networks

    Copyright 2008 IEEE. This material is posted here with permission of the IEEE. This paper is concerned with the robust synchronization problem for an array of coupled stochastic discrete-time neural networks with time-varying delay. Each individual neural network is subject to parameter uncertainty, stochastic disturbance, and time-varying delay, where the norm-bounded parameter uncertainties exist in both the state and weight matrices, the stochastic disturbance is in the form of a scalar Wiener process, and the time delay enters the activation function. For the array of coupled neural networks, constant coupling and delayed coupling are considered simultaneously. We aim to establish easy-to-verify conditions under which the addressed neural networks are synchronized. Using the Kronecker product as an effective tool, a linear matrix inequality (LMI) approach is developed to derive several sufficient criteria ensuring that the coupled delayed neural networks are globally, robustly, exponentially synchronized in the mean square. The LMI-based conditions obtained depend not only on the lower bound but also on the upper bound of the time-varying delay, and can be solved efficiently via the Matlab LMI Toolbox. Two numerical examples are given to demonstrate the usefulness of the proposed synchronization scheme.
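    The Kronecker-product formulation used in such synchronization analyses can be sketched in a deterministic, delay-free special case (an assumption-laden simplification, not the paper's uncertain stochastic model): N identical linear nodes with diffusive coupling are assembled into one large system matrix, and the pairwise state differences decay to zero.

```python
import numpy as np

# Three identical linear nodes x_i(k+1) = A x_i(k), diffusively coupled
# through the graph Laplacian L; the whole array is written compactly as
# x(k+1) = (I_N (x) A - c * L (x) I_2) x(k) using the Kronecker product.
N, c = 3, 0.1
A = np.array([[0.9, 0.1],
              [0.0, 0.9]])
L = np.array([[ 2., -1., -1.],   # Laplacian of the complete 3-node graph
              [-1.,  2., -1.],
              [-1., -1.,  2.]])

big = np.kron(np.eye(N), A) - c * np.kron(L, np.eye(2))

x = np.array([1.0, -1.0, 0.5, 0.2, -0.3, 0.8])  # stacked states of the 3 nodes
for _ in range(100):
    x = big @ x

# Synchronization error: maximum deviation of any node from the array mean.
nodes = x.reshape(N, 2)
sync_error = np.max(np.abs(nodes - nodes.mean(axis=0)))
```

    The disagreement modes are governed by the nonzero Laplacian eigenvalues, so the coupling strength c directly controls how fast the nodes synchronize; the paper's LMI criteria play the role of this spectral argument for the uncertain, stochastic, delayed case.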

    Metastability, Criticality and Phase Transitions in brain and its Models

    This essay extends the previously deposited paper "Oscillations, Metastability and Phase Transitions" to incorporate the theory of self-organized criticality. The twin concepts of scaling and universality from the theory of nonequilibrium phase transitions are applied to the role of reentrant activity in neural circuits of the cerebral cortex and subcortical neural structures.

    Almost periodic solutions of retarded SICNNs with functional response on piecewise constant argument

    We consider a new model for shunting inhibitory cellular neural networks: retarded functional differential equations with piecewise constant argument. The existence and exponential stability of almost periodic solutions are investigated. An illustrative example is provided.

    Comment: 24 pages, 1 figure
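    The flavor of a piecewise constant argument can be conveyed with a minimal single-cell sketch (the notation and all parameters here are assumptions, not the paper's model): the shunting inhibition is evaluated at the deviating argument floor(t) rather than at t, while the cell is driven by a periodic input.

```python
import numpy as np

# Single-cell sketch:  x'(t) = -a x(t) - c f(x(floor(t))) x(t) + L(t),
# where the shunting inhibition term is evaluated at the piecewise
# constant argument floor(t), and L is a periodic external input.
a, c = 2.0, 0.5
f = np.tanh                                   # bounded activation
L = lambda t: 1.0 + 0.3 * np.sin(2 * np.pi * t)

dt, T = 0.001, 20.0
ts = np.arange(0.0, T, dt)
x = np.empty_like(ts)
x[0] = 0.0
for k in range(1, len(ts)):
    t = ts[k - 1]
    idx = int(round(np.floor(t) / dt))        # index of the last integer time
    dx = -a * x[k - 1] - c * f(x[idx]) * x[k - 1] + L(t)
    x[k] = x[k - 1] + dt * dx

tail = x[int(len(ts) * 0.75):]                # late-time segment of the trajectory
```

    Because the shunting term is bounded, the effective decay rate stays positive and the trajectory remains bounded, settling into a recurrent oscillation entrained by the periodic input; the paper's results establish existence and exponential stability of almost periodic solutions for the full network version of such dynamics.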

    Subthreshold dynamics of the neural membrane potential driven by stochastic synaptic input

    In the cerebral cortex, neurons are subject to a continuous bombardment of synaptic inputs originating from the network's background activity. This leads to ongoing, mostly subthreshold membrane dynamics that depends on the statistics of the background activity and of the synapses made on a neuron. Subthreshold membrane polarization is, in turn, a potent modulator of neural responses. The present paper analyzes the subthreshold dynamics of the neural membrane potential driven by synaptic inputs of stationary statistics. Synaptic inputs are assumed to interact linearly. The analysis identifies regimes of input statistics which give rise to stationary, fluctuating, oscillatory, and unstable dynamics. In particular, I show that (i) mere noise inputs can drive the membrane potential into sustained, quasiperiodic oscillations (noise-driven oscillations), in the absence of a stimulus-derived, intraneural, or network pacemaker; (ii) adding hyperpolarizing to depolarizing synaptic input can increase neural activity (hyperpolarization-induced activity), in the absence of hyperpolarization-activated currents.
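    The stationary, fluctuating regime described above is often reduced, under a diffusion approximation, to an Ornstein–Uhlenbeck process for the membrane potential. The following Euler–Maruyama sketch uses that reduction with assumed parameters (it is not the paper's full model) and recovers the analytic stationary statistics: mean V_inf and variance s²·tau/2.

```python
import numpy as np

# Diffusion-approximation sketch of the subthreshold membrane potential
# under stationary synaptic bombardment:  dV = -(V - V_inf)/tau dt + s dW.
rng = np.random.default_rng(0)
tau, V_inf, s = 20.0, -60.0, 1.0       # ms, mV, mV/sqrt(ms)  (assumed values)
dt, T = 0.05, 5000.0
n = int(T / dt)

V = np.empty(n)
V[0] = V_inf
noise = rng.standard_normal(n - 1)
for k in range(1, n):
    V[k] = V[k-1] - (V[k-1] - V_inf) / tau * dt + s * np.sqrt(dt) * noise[k-1]

# Stationary statistics: mean V_inf, std sqrt(s**2 * tau / 2) ~ 3.16 mV.
est_mean, est_std = V.mean(), V.std()
```

    Adding a second filtered (e.g. conductance-based or delayed) input channel to this one-dimensional reduction is what opens the door to the oscillatory and unstable regimes analyzed in the paper.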

    Low-dimensional spike rate models derived from networks of adaptive integrate-and-fire neurons: comparison and implementation

    The spiking activity of single neurons can be well described by a nonlinear integrate-and-fire model that includes somatic adaptation. When exposed to fluctuating inputs, sparsely coupled populations of these model neurons exhibit stochastic collective dynamics that can be effectively characterized using the Fokker-Planck equation. This approach, however, leads to a model with an infinite-dimensional state space and non-standard boundary conditions. Here we derive from that description four simple models for the spike rate dynamics in terms of low-dimensional ordinary differential equations, using two different reduction techniques: one uses the spectral decomposition of the Fokker-Planck operator; the other is based on a cascade of two linear filters and a nonlinearity, which are determined from the Fokker-Planck equation and semi-analytically approximated. We evaluate the reduced models for a wide range of biologically plausible input statistics and find that both approximation approaches lead to spike rate models that accurately reproduce the spiking behavior of the underlying adaptive integrate-and-fire population. In particular, the cascade-based models are overall the most accurate and robust, especially in the sensitive region of rapidly changing input. For the mean-driven regime, however, when input fluctuations are not too strong and fast, the best performing model is based on the spectral decomposition. The low-dimensional models also accurately reproduce stable oscillatory spike rate dynamics that are generated either by recurrent synaptic excitation and neuronal adaptation or through delayed inhibitory synaptic feedback. The computational demands of the reduced models are very low, but the implementation complexity differs between the model variants. Therefore, we have made available implementations that allow one to numerically integrate the low-dimensional spike rate models, as well as the Fokker-Planck partial differential equation, in efficient ways for arbitrary model parametrizations as open source software. The derived spike rate descriptions retain a direct link to the properties of single neurons, allow for convenient mathematical analyses of network states, and are well suited for application in neural mass/mean-field based brain network models.

    Characterizing the dynamics of biophysically modeled, large neuronal networks usually involves extensive numerical simulations. As an alternative to this expensive procedure we propose efficient models that describe the network activity in terms of a few ordinary differential equations. These systems are simple to solve and allow for convenient investigations of asynchronous, oscillatory or chaotic network states, because linear stability analyses and powerful related methods are readily applicable. We build upon two research lines on which substantial efforts have been exerted in the last two decades: (i) the development of single neuron models of reduced complexity that can accurately reproduce a large repertoire of observed neuronal behavior, and (ii) different approaches to approximate the Fokker-Planck equation that represents the collective dynamics of large neuronal networks. We combine these advances and extend recent approximation methods of the latter kind to obtain spike rate models that reproduce the macroscopic dynamics of the underlying neuronal network surprisingly well, while the microscopic properties are retained through the single neuron model parameters. To enable fast adoption we have released an efficient Python implementation as open source software under a free license.
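    The microscopic side of such a reduction can be sketched with a minimal population of adaptive leaky integrate-and-fire neurons (a simplified stand-in for the paper's model; all parameters and the noise scaling are assumptions): each neuron integrates a noisy input, and every spike increments a slow adaptation current that feeds back onto the membrane equation. The population-averaged spike rate is the quantity that the paper's low-dimensional ODE models approximate.

```python
import numpy as np

# Minimal adaptive leaky integrate-and-fire population (parameters assumed):
#   tau dV/dt = -V + I - w + noise,   tau_w dw/dt = -w,
#   on spike: V -> 0 and w -> w + b  (spike-triggered adaptation).
rng = np.random.default_rng(1)
N, dt, T = 200, 0.1, 1000.0          # neurons, ms, ms
tau, tau_w = 20.0, 100.0             # membrane and adaptation time constants
I, b, sigma, V_th = 1.2, 0.2, 0.1, 1.0

V = np.zeros(N)
w = np.zeros(N)
spikes = 0
for _ in range(int(T / dt)):
    V += ((-V + I - w) / tau) * dt + sigma * np.sqrt(dt / tau) * rng.standard_normal(N)
    w += (-w / tau_w) * dt
    fired = V >= V_th
    spikes += fired.sum()
    V[fired] = 0.0                   # reset after a spike
    w[fired] += b                    # adaptation increment

rate_hz = spikes / N / (T / 1000.0)  # population-averaged spike rate in Hz
```

    As the adaptation current builds up, the population rate relaxes from an initial transient to a lower steady level; capturing exactly this rate trajectory with a handful of ODEs, instead of simulating all N neurons, is the purpose of the reductions compared in the paper.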

    Contrastive learning and neural oscillations

    The concept of Contrastive Learning (CL) is developed as a family of possible learning algorithms for neural networks. CL is an extension of Deterministic Boltzmann Machines to more general dynamical systems. During learning, the network oscillates between two phases. One phase has a teacher signal and one phase has no teacher signal. The weights are updated using a learning rule that corresponds to gradient descent on a contrast function that measures the discrepancy between the free network and the network with a teacher signal. The CL approach provides a general unified framework for developing new learning algorithms. It also shows that many different types of clamping and teacher signals are possible. Several examples are given and an analysis of the landscape of the contrast function is proposed with some relevant predictions for the CL curves. An approach that may be suitable for collective analog implementations is described. Simulation results and possible extensions are briefly discussed together with a new conjecture regarding the function of certain oscillations in the brain. In the appendix, we also examine two extensions of contrastive learning to time-dependent trajectories
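    The contrastive update rule sketched above can be made concrete in its simplest degenerate case (an illustration with assumed data and learning rate, not the paper's general dynamical-system setting): for a single linear output unit, the difference between clamped-phase and free-phase co-activations reduces to the classical delta rule, and gradient descent on the contrast function recovers the teacher's weights.

```python
import numpy as np

# Contrastive update for one linear unit y = w . x:
#   delta_w  proportional to  <x * y>_clamped  -  <x * y>_free,
# i.e. co-activations with the output clamped to the teacher signal,
# minus co-activations with the network running freely.
rng = np.random.default_rng(2)
w_true = np.array([0.5, -1.0, 2.0])       # assumed teacher weights
X = rng.standard_normal((200, 3))
t = X @ w_true                            # teacher signal

w = np.zeros(3)
eta = 0.1
for _ in range(300):                      # oscillate between the two phases
    y_free = X @ w                        # free phase: no teacher signal
    y_clamped = t                         # clamped phase: output held at teacher
    w += eta * (X.T @ y_clamped - X.T @ y_free) / len(X)

err = np.max(np.abs(w - w_true))          # converges toward the teacher weights
```

    In the general case treated in the paper, the free and clamped phases are settled states of a nonlinear dynamical system, and the same clamped-minus-free difference performs gradient descent on the contrast function.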