
    Projective synchronization analysis for BAM neural networks with time-varying delay via novel control

    In this paper, the projective synchronization of BAM neural networks with time-varying delays is studied. Firstly, a novel type of adaptive controller is introduced for the considered neural networks, which can achieve projective synchronization. Then, based on the adaptive controller, some novel and useful conditions are obtained to ensure the projective synchronization of the considered neural networks. To our knowledge, unlike other forms of synchronization, projective synchronization is better suited to clearly representing the fragile nature of nonlinear systems. Besides, we solve the projective synchronization problem between two different chaotic BAM neural networks, while most existing works are concerned only with the projective synchronization of chaotic systems with the same topologies. Compared with the controllers in previous papers, the controllers designed in this paper do not require any activation functions during the application process. Finally, an example is provided to show the effectiveness of the theoretical results.
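The core idea of projective synchronization, driving a response system's state to a constant multiple β of a drive system's state, can be sketched with a minimal scalar example. The dynamics, gain k, scaling factor β, and controller below are illustrative assumptions, not the paper's actual model or controller:

```python
import math

def f(v):
    # tanh activation, in the spirit of BAM-type neuron models
    return math.tanh(v)

def simulate(beta=2.0, k=5.0, dt=0.001, steps=20000):
    x, y = 0.5, -1.0                        # drive and response states
    for _ in range(steps):
        e = y - beta * x                    # projective synchronization error
        u = beta * f(x) - f(y) - k * e      # hypothetical stabilizing controller
        x += dt * (-x + f(x))               # drive system
        y += dt * (-y + f(y) + u)           # controlled response system
    return x, y, y - beta * x

x, y, e = simulate()
```

Here the controller cancels the activation mismatch and adds linear error feedback, so the error obeys de/dt = -(1 + k)e and decays exponentially to zero. Note that this sketch does use the activation function inside the controller, whereas the paper's designed controllers avoid that.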

    Personal area technologies for internetworked services


    A mean-field model for conductance-based networks of adaptive exponential integrate-and-fire neurons

    Voltage-sensitive dye imaging (VSDi) has revealed fundamental properties of neocortical processing at mesoscopic scales. Since VSDi signals report the average membrane potential, it seems natural to use a mean-field formalism to model such signals. Here, we investigate a mean-field model of networks of Adaptive Exponential (AdEx) integrate-and-fire neurons, with conductance-based synaptic interactions. The AdEx model can capture the spiking response of different cell types, such as regular-spiking (RS) excitatory neurons and fast-spiking (FS) inhibitory neurons. We use a Master Equation formalism, together with a semi-analytic approach to the transfer function of AdEx neurons. We compare the predictions of this mean-field model to simulated networks of RS-FS cells, first at the level of the spontaneous activity of the network, which is well predicted by the mean-field model. Second, we investigate the response of the network to time-varying external input, and show that the mean-field model accurately predicts the response time course of the population. One notable exception was that the "tail" of the response at long times was not well predicted, because the mean-field does not include adaptation mechanisms. We conclude that the Master Equation formalism can yield mean-field models that predict well the behavior of nonlinear networks with conductance-based interactions and various electrophysiological properties, and should be a good candidate to model VSDi signals where both excitatory and inhibitory neurons contribute. Comment: 21 pages, 7 figures
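The single-cell AdEx model underlying the network can be integrated in a few lines. Below is a minimal Euler-integration sketch of one regular-spiking cell driven by a step current; the parameter values are the commonly used regular-spiking set from Brette and Gerstner's original AdEx formulation, not values taken from this study:

```python
import math

# AdEx regular-spiking parameters (units: pF, nS, mV, ms, pA)
C, gL, EL = 281.0, 30.0, -70.6
VT, DT = -50.4, 2.0             # exponential threshold and slope factor
tau_w, a, b = 144.0, 4.0, 80.5  # adaptation time constant, coupling, spike increment
V_reset, V_cut = -70.6, 0.0     # reset value and numerical spike cutoff

def adex_spikes(I=1000.0, T=500.0, dt=0.01):
    """Return spike times (ms) of one AdEx cell under a step current I (pA)."""
    V, w, t = EL, 0.0, 0.0
    spikes = []
    while t < T:
        dV = (-gL * (V - EL) + gL * DT * math.exp((V - VT) / DT) - w + I) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_cut:          # spike: reset voltage, increment adaptation
            spikes.append(t)
            V = V_reset
            w += b
        t += dt
    return spikes
```

With a suprathreshold step, the adaptation current w grows by b at each spike and decays with τ_w, so interspike intervals lengthen over time, the spike-frequency adaptation that distinguishes RS from FS cells in the network model (and whose absence from the mean-field explains the mispredicted response "tail").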

    A switching control for finite-time synchronization of memristor-based BAM neural networks with stochastic disturbances

    This paper deals with the finite-time stochastic synchronization for a class of memristor-based bidirectional associative memory neural networks (MBAMNNs) with time-varying delays and stochastic disturbances. Firstly, based on the physical property of the memristor and the circuit of MBAMNNs, an MBAMNNs model with more reasonable switching conditions is established. Then, based on the theory of Filippov solutions, by using Lyapunov–Krasovskii functionals and stochastic analysis techniques, a sufficient condition is given to ensure the finite-time stochastic synchronization of MBAMNNs under a certain controller. Next, through further discussion, an error-dependent switching controller is given to shorten the stochastic settling time. Finally, numerical simulations are carried out to illustrate the effectiveness of the theoretical results.
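The finite-time property, meaning the synchronization error reaches zero in bounded time rather than only asymptotically, typically comes from fractional-power error feedback. A minimal deterministic scalar sketch is below; the gain k and exponent alpha are illustrative assumptions, not the paper's switching controller, and stochastic disturbances and delays are omitted:

```python
def settling_time(e0=1.0, k=2.0, alpha=0.5, dt=1e-4, tol=1e-6):
    """Integrate de/dt = -k * |e|^alpha * sign(e) until |e| <= tol."""
    e, t = e0, 0.0
    while abs(e) > tol:
        e -= dt * k * abs(e) ** alpha * (1.0 if e > 0 else -1.0)
        t += dt
    return t

t_num = settling_time()
# Closed-form settling time: T = |e0|^(1-alpha) / (k * (1 - alpha)) = 1.0
t_theory = 1.0 ** (1 - 0.5) / (2.0 * (1 - 0.5))
```

For alpha in (0, 1) the error hits zero exactly at T = |e0|^(1-alpha) / (k(1-alpha)), whereas plain linear feedback (alpha = 1) only decays exponentially and never settles in finite time. Making the gain error-dependent, as the paper's switching controller does, is one way to shorten this settling time.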

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant No. NSF/EIA-0130708, and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276, and Fundación BBVA.