41 research outputs found
Asymptotic Stability and Exponential Stability of Impulsive Delayed Hopfield Neural Networks
A criterion for the uniform asymptotic stability of the equilibrium point of impulsive delayed Hopfield
neural networks is presented using Lyapunov functions and a linear matrix inequality (LMI) approach. The
criterion is a less restrictive version of a recent result. By constructing an extended impulsive Halanay
inequality, we also analyze the exponential stability of impulsive delayed Hopfield neural networks and
obtain some new sufficient conditions ensuring exponential stability of the equilibrium point. An example
showing the effectiveness of the present criterion is given.
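The kind of system the abstract studies can be illustrated numerically. The sketch below is our own toy construction, not the paper's model or criterion: a delayed Hopfield-type network x'(t) = -x(t) + W·tanh(x(t − τ)) + I integrated with Euler's method, where the hypothetical weight matrix W is chosen with norm below 1 so that (tanh being 1-Lipschitz) a simple sufficient condition for a globally attracting equilibrium holds.

```python
import numpy as np

# Toy delayed Hopfield-type network (illustrative only; W, I, tau are
# hypothetical, not taken from the paper). Euler integration with a
# constant initial function x(s) = x0 for s <= 0.
def simulate_delayed_hopfield(W, I, tau, dt=0.01, T=30.0, x0=2.0):
    n = W.shape[0]
    steps = int(T / dt)
    d = max(int(tau / dt), 1)
    hist = np.full((steps + d, n), x0, dtype=float)  # history buffer
    for k in range(d, steps + d):
        x = hist[k - 1]
        x_del = hist[k - d]                  # delayed state x(t - tau)
        hist[k] = x + dt * (-x + W @ np.tanh(x_del) + I)
    return hist[d:]

W = np.array([[0.2, -0.3], [0.1, 0.25]])     # small norm: contraction-type condition
I = np.array([0.5, -0.4])
traj = simulate_delayed_hopfield(W, I, tau=1.0)
# The trajectory settles at the equilibrium x* = W @ tanh(x*) + I.
print(np.max(np.abs(traj[-1] - traj[-2])))
```

The final state can be checked against the fixed-point equation directly, which is how such sufficient conditions are typically sanity-checked in simulation.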
Global exponential stability of impulsive discrete-time neural networks with time-varying delays
This paper studies the problem of global exponential stability and exponential convergence rate for a class of impulsive discrete-time neural networks with time-varying delays. Firstly, by means of Lyapunov stability theory, inequality analysis techniques, and a discrete-time Halanay-type inequality, sufficient conditions for ensuring global exponential stability of discrete-time neural networks are derived, and the estimated exponential convergence rate is provided as well. The obtained results are then applied to derive global exponential stability criteria and the exponential convergence rate of impulsive discrete-time neural networks with time-varying delays. Finally, numerical examples are provided to illustrate the effectiveness and usefulness of the obtained criteria.
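A minimal sketch of the discrete-time Halanay-type argument the abstract alludes to (our own illustration, with hypothetical constants): if x(n+1) ≤ a·x(n) + b·max over x(n−τ)…x(n) with a, b ≥ 0 and a + b < 1, then x(n) decays like λⁿ, where λ ∈ (0, 1) is the root of λ^(τ+1) − a·λ^τ − b = 0.

```python
# Discrete Halanay-type inequality, checked numerically at the worst case
# (recursion driven at equality). Constants a, b, tau are hypothetical.
def halanay_rate(a, b, tau):
    # Bisection for the unique root of f(l) = l**(tau+1) - a*l**tau - b
    # in (0, 1); f(0) = -b < 0 and f(1) = 1 - a - b > 0 when a + b < 1.
    lo, hi = 0.0, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid ** (tau + 1) - a * mid ** tau - b < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a, b, tau = 0.5, 0.3, 3
lam = halanay_rate(a, b, tau)

M = 1.0
x = [M] * (tau + 1)                 # constant initial history on n = -tau..0
for i in range(tau, tau + 100):
    x.append(a * x[i] + b * max(x[i - tau:i + 1]))

# Bound x(n) <= M * lam**n for n >= 0, i.e., list indices i >= tau.
assert all(x[i] <= M * lam ** (i - tau) + 1e-12 for i in range(tau, len(x)))
```

The same inequality, applied to a Lyapunov-type sequence of the network, is what yields the exponential convergence-rate estimate mentioned in the abstract.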
Synchronization of Clifford-valued neural networks with leakage, time-varying, and infinite distributed delays on time scales
Neural networks (NNs) with values in multidimensional domains have lately attracted the attention of researchers. Thus, complex-valued neural networks (CVNNs), quaternion-valued neural networks (QVNNs), and their generalization, Clifford-valued neural networks (ClVNNs), have been proposed in the last few years, and different dynamic properties have been studied for them. On the other hand, time scale calculus has been proposed in order to jointly study the properties of continuous-time and discrete-time systems, or any hybrid combination of the two, and has also been successfully applied to the domain of NNs. Finally, in real implementations of NNs, time delays occur inevitably. Taking all these facts into account, this paper discusses ClVNNs defined on time scales with leakage, time-varying delays, and infinite distributed delays, a type of delay that has appeared relatively rarely in the existing literature. A state feedback control scheme and a generalization of the Halanay inequality for time scales are used in order to obtain sufficient conditions, expressed as algebraic inequalities and as linear matrix inequalities (LMIs), using two general Lyapunov-like functions, for the exponential synchronization of the proposed model. Two numerical examples are given in order to illustrate the theoretical results.
Exponential Lag Synchronization of Cohen-Grossberg Neural Networks with Discrete and Distributed Delays on Time Scales
In this article, we investigate exponential lag synchronization results for
Cohen-Grossberg neural networks (C-GNNs) with discrete and distributed
delays on an arbitrary time domain by applying feedback control. We formulate
the problem using time scales theory so that the results can be applied
to any uniform or non-uniform time domain. We also provide a comparison
showing that the obtained results unify and generalize the existing ones.
Mainly, we use the unified matrix-measure theory and the Halanay
inequality to establish these results. In the last section, we provide two
simulated examples for different time domains to show the effectiveness and
generality of the obtained analytical results.
Boundedness and stability for Cohen-Grossberg neural network with time-varying delays
In this paper, a model is considered to describe the dynamics of a Cohen-Grossberg neural network with variable coefficients and time-varying delays. Uniformly ultimate boundedness and uniform boundedness are studied for the model by utilizing the Hardy inequality. Combining the Halanay inequality with the Lyapunov functional method, some new sufficient conditions are derived for the model to be globally exponentially stable. The activation functions are not assumed to be differentiable or strictly increasing. Moreover, no assumption on the symmetry of the connection matrices is necessary. These criteria are important in signal processing and the design of networks.
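The continuous-time Halanay inequality used in such stability proofs can be sketched numerically (a hedged illustration with hypothetical constants, not the paper's network): if v'(t) ≤ −a·v(t) + b·sup of v over [t − τ, t] with a > b ≥ 0, then v(t) ≤ (sup of the initial history)·exp(−μt), where μ > 0 solves μ = a − b·exp(μτ).

```python
import math

# Continuous Halanay inequality, checked on an Euler march of the worst
# case v' = -a*v + b*max over the delay window. Constants are hypothetical.
def halanay_decay_rate(a, b, tau):
    # Bisection on (0, a) for the root of f(mu) = a - b*exp(mu*tau) - mu;
    # f(0) = a - b > 0 and f(a) < 0, so a root exists in between.
    lo, hi = 0.0, a
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if a - b * math.exp(mid * tau) - mid > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a, b, tau, dt = 2.0, 0.5, 1.0, 1e-3
mu = halanay_decay_rate(a, b, tau)
d = int(tau / dt)

v = [1.0] * (d + 1)                 # constant history 1 on [-tau, 0]
for k in range(d, d + 5000):
    v.append(v[k] + dt * (-a * v[k] + b * max(v[k - d:k + 1])))

# Check v(t) <= exp(-mu*t) at every grid point t = (k - d)*dt.
ok = all(v[k] <= math.exp(-mu * (k - d) * dt) + 1e-6
         for k in range(d, len(v)))
print(mu, ok)
```

In the paper's setting, v would be a Lyapunov functional of the network state, so the bound translates into global exponential stability with rate at least μ.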
Almost periodic solutions of retarded SICNNs with functional response on piecewise constant argument
We consider a new model for shunting inhibitory cellular neural networks:
retarded functional differential equations with piecewise constant argument.
The existence and exponential stability of almost periodic solutions are
investigated. An illustrative example is provided.
Projective synchronization analysis for BAM neural networks with time-varying delay via novel control
In this paper, the projective synchronization of BAM neural networks with time-varying delays is studied. Firstly, a type of novel adaptive controller is introduced for the considered neural networks, which can achieve projective synchronization. Then, based on the adaptive controller, some novel and useful conditions are obtained to ensure the projective synchronization of the considered neural networks. To our knowledge, unlike other forms of synchronization, projective synchronization is more suitable for clearly representing the fragile nature of nonlinear systems. Besides, we solve the projective synchronization problem between two different chaotic BAM neural networks, while most existing works are concerned only with the projective synchronization of chaotic systems with the same topology. Compared with the controllers in previous papers, the controllers designed in this paper do not require any activation functions during the application process. Finally, an example is provided to show the effectiveness of the theoretical results.
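The notion of projective synchronization can be illustrated with a deliberately simple toy controller (our own construction; the paper's adaptive controller is more sophisticated): the slave state y is driven to α·x, a scaled replica of the master, by error feedback u = −k·e with e = y − α·x plus a feedforward term cancelling the drift mismatch.

```python
import numpy as np

def f(x):
    # Shared node dynamics for master and slave (hypothetical).
    return -x + 2.0 * np.tanh(x)

alpha, k, dt = -1.5, 4.0, 1e-3      # projective factor, gain, Euler step
x = np.array([0.8, -0.4])           # master state
y = np.array([1.0, 1.0])            # slave state
for _ in range(10000):
    e = y - alpha * x               # projective synchronization error
    u = -k * e + alpha * f(x) - f(y)  # feedback + mismatch cancellation
    x = x + dt * f(x)
    y = y + dt * (f(y) + u)
print(np.max(np.abs(y - alpha * x)))  # error contracts toward 0
```

With this cancellation the error obeys e' = −k·e exactly, so it decays exponentially; for a negative α, the slave converges to an amplitude-scaled, sign-flipped copy of the master, which is what distinguishes projective synchronization from complete synchronization.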
Recent Advances and Applications of Fractional-Order Neural Networks
This paper focuses on the growth, development, and future of various forms of fractional-order neural networks. Multiple advances in structure, learning algorithms, and methods are critically investigated and summarized. This also includes recent trends in the dynamics of various fractional-order neural networks. The multiple forms of fractional-order neural networks considered in this study are Hopfield, cellular, memristive, complex, and quaternion-valued networks. Further, the application of fractional-order neural networks in various computational fields, such as system identification, control, optimization, and stability analysis, is critically analyzed and discussed.