
    Global exponential convergence of delayed inertial Cohen–Grossberg neural networks

    In this paper, the exponential convergence of delayed inertial Cohen–Grossberg neural networks (CGNNs) is studied. Two methods are adopted to analyse the inertial CGNNs: the first rewrites the system as two first-order differential equations via a suitable variable substitution, while the second, a non-reduced-order method, leaves the order of the system unchanged. By constructing appropriate Lyapunov functions and using inequality techniques, sufficient conditions are obtained ensuring that the model converges exponentially to a ball with a prespecified convergence rate. Finally, two simulation examples illustrate the validity of the theoretical results.
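    The reduced-order substitution mentioned above can be sketched for a generic scalar inertial equation; the equation, parameters, and substitution below are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Sketch of the reduced-order method for a generic scalar inertial equation
#   x''(t) = -a*x'(t) - b*x(t) + tanh(x(t))
# (an illustrative stand-in for an inertial CGNN, not the paper's model).
# The substitution y = x' + xi*x (xi a free design parameter) yields the
# first-order system
#   x' = y - xi*x
#   y' = -(a - xi)*y - (b - xi*(a - xi))*x + tanh(x),
# integrated here with forward Euler.

a, b, xi = 2.0, 1.0, 1.0
h, steps = 1e-3, 5000
x, y = 0.5, 0.0  # initial state; y(0) = x'(0) + xi*x(0)
for _ in range(steps):
    dx = y - xi * x
    dy = -(a - xi) * y - (b - xi * (a - xi)) * x + np.tanh(x)
    x, y = x + h * dx, y + h * dy
print(x, y)  # trajectory remains bounded for these parameters
```

    Convergence-rate estimates of the kind stated in the abstract come from a Lyapunov analysis applied to exactly this sort of first-order reformulation.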

    Fixed-time control of delayed neural networks with impulsive perturbations

    This paper is concerned with the fixed-time stability of delayed neural networks with impulsive perturbations. By means of an inequality analysis technique and the Lyapunov function method, some novel fixed-time stability criteria for the addressed neural networks are derived in terms of linear matrix inequalities (LMIs). The settling time can be estimated without depending on any initial conditions, but only on the designed controllers. In addition, two different controllers are designed for the impulsive delayed neural networks. Moreover, each controller consists of three parts, each of which plays a different role in the stabilization of the addressed neural networks. Finally, two numerical examples are provided to illustrate the effectiveness of the theoretical analysis.
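    The initial-condition-independent settling time mentioned here typically follows from a Polyakov-type fixed-time comparison lemma; a generic statement (our notation, not the paper's specific LMI conditions) is:

```latex
% If a Lyapunov function V satisfies, along trajectories of the closed loop,
\dot V(t) \le -\alpha\, V(t)^{p} - \beta\, V(t)^{q},
\qquad \alpha, \beta > 0, \quad 0 < p < 1 < q,
% then the origin is fixed-time stable, and the settling time obeys
T \le \frac{1}{\alpha (1-p)} + \frac{1}{\beta (q-1)},
% a bound that depends only on the controller parameters, not on V(0).
```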

    Exponential Lag Synchronization of Cohen-Grossberg Neural Networks with Discrete and Distributed Delays on Time Scales

    In this article, we investigate exponential lag synchronization results for Cohen-Grossberg neural networks (C-GNNs) with discrete and distributed delays on an arbitrary time domain by applying feedback control. We formulate the problem using time scales theory, so that the results apply to any uniform or non-uniform time domain. We also provide a comparison showing that the obtained results unify and generalize existing ones. Our main tools are the unified matrix-measure theory and the Halanay inequality. In the last section, we provide two simulated examples on different time domains to show the effectiveness and generality of the obtained analytical results.
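    The Halanay inequality invoked above can be stated, in its classical continuous-time form and in generic notation (ours, not necessarily the paper's; the paper's time-scales version generalizes this), as follows:

```latex
% If a nonnegative function v satisfies the delay differential inequality
v'(t) \le -a\, v(t) + b \sup_{t-\tau \le s \le t} v(s),
\qquad t \ge t_0, \quad a > b > 0,
% then v decays exponentially:
v(t) \le \Big( \sup_{t_0 - \tau \le s \le t_0} v(s) \Big)\, e^{-\lambda (t - t_0)},
% where \lambda > 0 is the unique positive root of
\lambda = a - b\, e^{\lambda \tau}.
```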

    Finite-time synchronization of Markovian neural networks with proportional delays and discontinuous activations

    In this paper, finite-time synchronization of neural networks (NNs) with discontinuous activation functions (DAFs), Markovian switching, and proportional delays is studied in the framework of Filippov solutions. Since proportional delays are unbounded and differ from infinite-time distributed delays, classical finite-time analytical techniques are no longer applicable, and new 1-norm analytical techniques are developed. Controllers with and without the sign function are designed to overcome the uncertainties induced by Filippov solutions and to synchronize the considered NNs in finite time. By designing new Lyapunov functionals and using the M-matrix method, sufficient conditions are derived to guarantee that the considered NNs achieve synchronization within a settling time, without introducing any free parameters. It is shown that, even though the proportional delay can be unbounded, complete synchronization can still be realized, and the settling time can be explicitly estimated. Moreover, it is found that controllers with the sign function can reduce the control gains, while controllers without the sign function avoid the chattering phenomenon. Finally, numerical simulations are given to show the effectiveness of the theoretical results.

    New criteria on global Mittag-Leffler synchronization for Caputo-type delayed Cohen-Grossberg Inertial Neural Networks

    The focus of this paper is global Mittag-Leffler synchronization (GMLS) of Caputo-type inertial Cohen-Grossberg neural networks (ICGNNs) with discrete and distributed delays. The model takes into account the inertial term as well as both types of delays, which greatly reduces the conservatism of the model. A change of variables transforms the 2β-order inertial system into a β-order ordinary system in order to handle the effect of the inertial term. Next, two novel types of delay controllers are designed for the purpose of reaching GMLS. In conjunction with these controllers, and using the differential mean-value theorem and inequality techniques, several criteria are derived that determine the GMLS of ICGNNs within the framework of Caputo-type derivatives and the properties of fractional calculus. Finally, the feasibility of the results is demonstrated by two simulation examples.

    Projective synchronization analysis for BAM neural networks with time-varying delay via novel control

    In this paper, the projective synchronization of BAM neural networks with time-varying delays is studied. Firstly, a type of novel adaptive controller that can achieve projective synchronization is introduced for the considered neural networks. Then, based on the adaptive controller, some novel and useful conditions are obtained to ensure the projective synchronization of the considered neural networks. To our knowledge, and unlike other forms of synchronization, projective synchronization is better suited to clearly representing the fragile nature of nonlinear systems. Besides, we solve the projective synchronization problem between two different chaotic BAM neural networks, whereas most existing works are concerned only with the projective synchronization of chaotic systems with the same topologies. Compared with the controllers in previous papers, the controllers designed in this paper do not require any activation functions during the application process. Finally, an example is provided to show the effectiveness of the theoretical results.

    Exponential state estimation for competitive neural network via stochastic sampled-data control with packet losses

    This paper investigates the exponential state estimation problem for competitive neural networks via stochastic sampled-data control with packet losses. Based on this strategy, a switched system model is used to describe packet dropouts in the error system. In addition, transmittal delays between neurons are also considered. Instead of continuous measurement, sampled measurement is used to estimate the neuron states, and a sampled-data estimator with probabilistic sampling over two sampling periods is proposed. The estimator is then designed in terms of the solution to a set of linear matrix inequalities (LMIs), which can be solved using available software. When control packets are lost, sufficient conditions are obtained to guarantee the exponential stability of the error system by constructing an appropriate Lyapunov function and using the average dwell-time technique. Finally, a numerical example is given to show the effectiveness of the proposed method.

    Contrastive learning and neural oscillations

    The concept of Contrastive Learning (CL) is developed as a family of possible learning algorithms for neural networks. CL is an extension of Deterministic Boltzmann Machines to more general dynamical systems. During learning, the network oscillates between two phases: one phase with a teacher signal and one without. The weights are updated using a learning rule that corresponds to gradient descent on a contrast function measuring the discrepancy between the free network and the network with a teacher signal. The CL approach provides a general unified framework for developing new learning algorithms, and it shows that many different types of clamping and teacher signals are possible. Several examples are given, and an analysis of the landscape of the contrast function is proposed, with some relevant predictions for the CL curves. An approach that may be suitable for collective analog implementations is described. Simulation results and possible extensions are briefly discussed, together with a new conjecture regarding the function of certain oscillations in the brain. In the appendix, we also examine two extensions of contrastive learning to time-dependent trajectories.
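    The two-phase update described above can be sketched with a contrastive-Hebbian rule on a small symmetric network; the architecture, toy task, and learning rate below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch of contrastive learning for a deterministic,
# Boltzmann-machine-like network: relax the network with and without
# the teacher signal, then update weights by the Hebbian difference of
# the two phases. Units 0 and 1 are inputs; unit 2 is the output.

rng = np.random.default_rng(0)
n = 3
W = 0.1 * rng.standard_normal((n, n))
W = (W + W.T) / 2            # symmetric coupling
np.fill_diagonal(W, 0.0)     # no self-connections

def settle(s, clamped, steps=100):
    # Relax unclamped units toward a mean-field fixed point s = tanh(W s);
    # clamped units keep their imposed values.
    s = s.copy()
    for _ in range(steps):
        s = np.where(clamped, s, np.tanh(W @ s))
    return s

eta = 0.1
x, target = np.array([1.0, -1.0, 0.0]), 1.0
for _ in range(100):
    # Free phase: inputs clamped, output relaxes (no teacher signal).
    free = settle(x, clamped=np.array([True, True, False]))
    # Teaching phase: output additionally clamped to the target.
    x[2] = target
    teach = settle(x, clamped=np.array([True, True, True]))
    x[2] = 0.0
    # Contrastive update: gradient descent on the contrast function.
    W += eta * (np.outer(teach, teach) - np.outer(free, free))
    np.fill_diagonal(W, 0.0)

out = settle(x, clamped=np.array([True, True, False]))[2]
print(out)  # the free-phase output moves toward the target
```

    The update is local (an outer product of unit states) yet follows the gradient of a global contrast function, which is what makes the two-phase oscillation attractive as a biologically plausible mechanism.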