
    Improved synchronization analysis of competitive neural networks with time-varying delays

    Synchronization and control are two important aspects of any dynamical system. Among the many kinds of nonlinear systems, competitive neural networks hold an important place owing to their applications in diverse fields. The model considered here is general enough to include, as subclasses, well-known neural network models such as competitive neural networks, cellular neural networks, and Hopfield neural networks. In this paper, the problem of designing a feedback controller that guarantees synchronization of competitive neural networks with time-varying delays is investigated. The goal of this work is to derive an existence criterion for a controller achieving exponential synchronization between drive and response neutral-type competitive neural networks with time-varying delays. The method is based on a feedback control gain matrix obtained via Lyapunov stability theory, and the synchronization conditions are given in terms of linear matrix inequalities (LMIs). To the best of our knowledge, the results presented here are novel and generalize some previous results. Numerical simulations are presented graphically to validate the effectiveness and advantages of the theoretical results.
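    As a rough sketch of the drive-response setup described above (a generic delayed network with assumed placeholder matrices A, B, C, activation f, delay \tau(t), and gain K, not the paper's exact neutral-type competitive model):

    \dot{x}(t) = -A x(t) + B f(x(t)) + C f(x(t-\tau(t)))                                        % drive system
    \dot{y}(t) = -A y(t) + B f(y(t)) + C f(y(t-\tau(t))) + u(t), \quad u(t) = K\,(y(t)-x(t))    % controlled response
    \|y(t)-x(t)\| \le M e^{-\varepsilon t}\,\|y(0)-x(0)\|, \quad M \ge 1,\ \varepsilon > 0      % exponential synchronization

    In this generic framework, LMI conditions typically certify a quadratic Lyapunov function V(e) = e^T P e that decays exponentially along the error dynamics e(t) = y(t) - x(t).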

    Anti-periodic solution for fuzzy Cohen–Grossberg neural networks with time-varying and distributed delays

    In this paper, by using a continuation theorem of coincidence degree theory and a differential inequality, we establish some sufficient conditions ensuring the existence and global exponential stability of anti-periodic solutions for a class of fuzzy Cohen–Grossberg neural networks with time-varying and distributed delays. In addition, we present an illustrative example to show the feasibility of the obtained results.
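    For context, a T-anti-periodic solution is one satisfying

    x(t + T) = -x(t) \quad \text{for all } t \in \mathbb{R},

    so every T-anti-periodic solution is automatically 2T-periodic, since x(t + 2T) = -x(t + T) = x(t).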

    Exponential state estimation for competitive neural network via stochastic sampled-data control with packet losses

    This paper investigates the exponential state estimation problem for competitive neural networks via stochastic sampled-data control with packet losses. Under this strategy, a switched system model is used to describe packet dropouts in the error system. Transmittal delays between neurons are also considered. Instead of continuous measurements, sampled measurements are used to estimate the neuron states, and a sampled-data estimator with probabilistic sampling over two sampling periods is proposed. The estimator is designed in terms of the solution to a set of linear matrix inequalities (LMIs), which can be solved with available software. When control packets are lost, sufficient conditions guaranteeing exponential stability of the error system are obtained by constructing an appropriate Lyapunov function and using the average dwell-time technique. Finally, a numerical example is given to show the effectiveness of the proposed method.
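    The phrase "solved with available software" refers to standard semidefinite programming toolboxes. A minimal sketch in Python with cvxpy, checking feasibility of a generic Lyapunov-type LMI (the matrix A and decay rate alpha below are invented for illustration; the paper's actual LMIs encode the sampled-data and packet-loss structure):

    import cvxpy as cp
    import numpy as np

    # Hypothetical error-system matrix; NOT taken from the paper.
    A = np.array([[-2.0, 0.5],
                  [0.3, -1.5]])
    alpha = 0.1  # assumed exponential decay rate
    n = A.shape[0]

    P = cp.Variable((n, n), symmetric=True)
    eps = 1e-6  # small margin to emulate strict inequalities
    constraints = [
        P >> eps * np.eye(n),                                 # P > 0
        A.T @ P + P @ A + 2 * alpha * P << -eps * np.eye(n),  # decay condition
    ]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()
    print("LMI feasible:", prob.status == cp.OPTIMAL)

    If the LMI is feasible, V(e) = e^T P e decays at rate alpha along the error dynamics, which is the usual mechanism behind such estimator designs.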

    Contrastive learning and neural oscillations

    The concept of Contrastive Learning (CL) is developed as a family of possible learning algorithms for neural networks. CL is an extension of Deterministic Boltzmann Machines to more general dynamical systems. During learning, the network oscillates between two phases: one with a teacher signal and one without. The weights are updated using a learning rule that corresponds to gradient descent on a contrast function measuring the discrepancy between the free network and the network with a teacher signal. The CL approach provides a general unified framework for developing new learning algorithms and shows that many different types of clamping and teacher signals are possible. Several examples are given, and an analysis of the landscape of the contrast function is proposed, with some relevant predictions for the CL curves. An approach that may be suitable for collective analog implementations is described. Simulation results and possible extensions are briefly discussed, together with a new conjecture regarding the function of certain oscillations in the brain. In the appendix, we also examine two extensions of contrastive learning to time-dependent trajectories.
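    A toy illustration of the two-phase idea in Python (a contrastive-Hebbian-style sketch, not the paper's algorithm; network size, settling rule, and teacher values are all assumed): weights move toward clamped-phase correlations and away from free-phase correlations.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 6                          # number of units (assumed)
    W = rng.normal(scale=0.1, size=(n, n))
    W = (W + W.T) / 2              # symmetric weights, zero self-coupling
    np.fill_diagonal(W, 0.0)

    def settle(W, v, clamp_idx=None, clamp_val=None, steps=50):
        """Relax unit states toward a fixed point; optionally clamp some units."""
        v = v.copy()
        for _ in range(steps):
            v = np.tanh(W @ v)
            if clamp_idx is not None:
                v[clamp_idx] = clamp_val   # teacher signal on clamped units
        return v

    eta = 0.05
    x0 = rng.normal(size=n)
    teacher = np.array([1.0, -1.0])        # assumed targets for units 0 and 1

    for _ in range(100):
        v_free = settle(W, x0)                                           # free phase
        v_clamp = settle(W, x0, clamp_idx=[0, 1], clamp_val=teacher)     # teacher phase
        # Contrastive update: clamped correlations minus free correlations
        W += eta * (np.outer(v_clamp, v_clamp) - np.outer(v_free, v_free))
        W = (W + W.T) / 2
        np.fill_diagonal(W, 0.0)

    print("free-phase output on previously clamped units:", settle(W, x0)[:2])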

    Exponential Stability Analysis of Mixed Delayed Quaternion-Valued Neural Networks Via Decomposed Approach

    With the application of quaternions in technology, quaternion-valued neural networks (QVNNs) have attracted many scholars' attention in recent years, and their dynamical behavior is an important line of study. In this paper, we study existence, uniqueness, and exponential stability criteria for solutions of QVNNs with discrete time-varying delays and distributed delays by means of a generalized 2-norm. To avoid the noncommutativity of quaternion multiplication, the QVNN system is first decomposed into four real-valued systems by the Hamilton rules. We then obtain sufficient criteria for the existence, uniqueness, and exponential stability of solutions via a special Lyapunov-type functional, the Cauchy convergence principle, and monotone functions. Furthermore, several corollaries are derived from the main results. Finally, we give a numerical example and its simulation figures to illustrate the effectiveness of the obtained conclusions.
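    The Hamilton rules behind the decomposition are the standard quaternion multiplication identities: with i^2 = j^2 = k^2 = ijk = -1, the product of q = q_0 + q_1 i + q_2 j + q_3 k and p = p_0 + p_1 i + p_2 j + p_3 k expands into four real components,

    qp = (q_0 p_0 - q_1 p_1 - q_2 p_2 - q_3 p_3)
       + (q_0 p_1 + q_1 p_0 + q_2 p_3 - q_3 p_2)\,i
       + (q_0 p_2 - q_1 p_3 + q_2 p_0 + q_3 p_1)\,j
       + (q_0 p_3 + q_1 p_2 - q_2 p_1 + q_3 p_0)\,k.

    Since qp \neq pq in general, writing every quaternion state and weight in this component form yields four coupled real-valued systems that can be analyzed with ordinary real Lyapunov tools.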

    Global Exponential Stability of Learning-Based Fuzzy Networks on Time Scales

    We investigate a class of fuzzy neural networks with Hebbian-type unsupervised learning on time scales. By using the Lyapunov functional method, we derive some new sufficient conditions ensuring the learning dynamics and exponential stability of fuzzy networks on time scales. Our results are general and include continuous-time learning-based fuzzy networks and their discrete-time analogues. Moreover, our results reveal some new learning behavior of fuzzy synapses on time scales that is seldom discussed in the literature.
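    For readers new to the calculus of time scales: a time scale \mathbb{T} is any nonempty closed subset of \mathbb{R}, and the delta derivative f^{\Delta} reduces to the familiar operators in the two classical cases,

    f^{\Delta}(t) = f'(t) \quad \text{if } \mathbb{T} = \mathbb{R}, \qquad
    f^{\Delta}(t) = f(t+1) - f(t) \quad \text{if } \mathbb{T} = \mathbb{Z},

    which is why a single stability proof on \mathbb{T} covers both the continuous-time network and its discrete-time analogue.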