State estimation for discrete-time neural networks with Markov-mode-dependent lower and upper bounds on the distributed delays
Copyright © 2012 Springer Verlag. This paper is concerned with the state estimation problem for a new class of discrete-time neural networks with Markovian jumping parameters and mixed time-delays. The parameters of the neural networks under consideration switch over time subject to a Markov chain. The networks involve both a discrete time-varying delay and a mode-dependent distributed delay whose upper and lower bounds depend on the Markov chain. By constructing novel Lyapunov–Krasovskii functionals, sufficient conditions are first established to guarantee exponential stability in mean square for the addressed discrete-time neural networks with Markovian jumping parameters and mixed time-delays. The state estimation problem is then addressed for the same class of networks, where the goal is to design a state estimator such that the estimation error converges to zero exponentially in mean square. The derived conditions for both stability and the existence of the desired estimators are expressed as matrix inequalities that can be solved by semidefinite programming. A numerical simulation example demonstrates the usefulness of the main results. This work was supported in part by the Royal Society of the U.K., the National Natural Science Foundation of China under Grants 60774073 and 61074129, and the Natural Science Foundation of Jiangsu Province of China under Grant BK2010313.
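The stability conditions above reduce to matrix inequalities checked numerically. As a loose illustration of the underlying idea only (a simple discrete-time linear system, not the paper's Markov-mode-dependent conditions), the sketch below certifies exponential stability of x(k+1) = A x(k) by solving a discrete Lyapunov equation with SciPy; the matrix A is a made-up example:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical Schur-stable system x(k+1) = A x(k) (illustrative values)
A = np.array([[0.5, 0.1],
              [0.0, 0.6]])

# Solve A P A^T - P = -Q with Q = I; a positive definite solution P
# certifies exponential stability via V(x) = x^T P x.
Q = np.eye(2)
P = solve_discrete_lyapunov(A, Q)

eigs = np.linalg.eigvalsh(P)
print("P positive definite:", bool(np.all(eigs > 0)))
```

A positive definite P yields V(x(k+1)) - V(x(k)) = -x(k)^T Q x(k) < 0 along trajectories, the discrete-time analogue of the Lyapunov–Krasovskii argument sketched in the abstract.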
Global exponential convergence of delayed inertial Cohen–Grossberg neural networks
In this paper, the exponential convergence of delayed inertial Cohen–Grossberg neural networks (CGNNs) is studied. Two methods are adopted to analyze the inertial CGNNs: the first rewrites the system as two first-order differential equations through a variable substitution, and the second preserves the order of the system via a non-reduced-order method. By constructing appropriate Lyapunov functions and using inequality techniques, sufficient conditions are obtained ensuring that the discussed model converges exponentially to a ball at the prespecified convergence rate. Finally, two simulation examples illustrate the validity of the theoretical results.
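As a toy illustration of the first (reduced-order) method, with hypothetical scalar parameters not taken from the paper: a delayed inertial equation x'' = -a x' - b x + c·tanh(x(t-τ)) + I is rewritten as two first-order equations via the substitution y = x' + x and integrated by Euler's method:

```python
import numpy as np

# Illustrative scalar inertial neural network (parameters are made up):
#   x'' = -a x' - b x + c*tanh(x(t - tau)) + I
# Substituting y = x' + x gives the first-order system
#   x' = y - x
#   y' = -(a - 1)(y - x) - b x + c*tanh(x(t - tau)) + I
a, b, c, I, tau = 3.0, 2.0, 0.5, 1.0, 0.2
dt, steps = 0.001, 10000
delay = int(tau / dt)

hist = np.zeros(delay + steps + 1)  # x history; zero initial function on [-tau, 0]
x, y = 0.0, 0.0                     # y(0) = x'(0) + x(0) = 0
for k in range(steps):
    xd = hist[k]                    # delayed state x(t - tau)
    xp = y - x
    yp = -(a - 1.0) * xp - b * x + c * np.tanh(xd) + I
    x += dt * xp
    y += dt * yp
    hist[delay + k + 1] = x

# At an equilibrium, b*x = c*tanh(x) + I; the residual should be near zero
print("equilibrium residual:", b * x - c * np.tanh(x) - I)
```

The substitution turns the second-order delayed equation into a first-order delayed system to which standard Lyapunov arguments apply; the non-reduced-order method mentioned in the abstract works on the original equation directly.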
Almost periodic solutions of retarded SICNNs with functional response on piecewise constant argument
We consider a new model for shunting inhibitory cellular neural networks: retarded functional differential equations with piecewise constant argument. The existence and exponential stability of almost periodic solutions are investigated. An illustrative example is provided. Comment: 24 pages, 1 figure
State Estimation for Discrete-Time Fuzzy Cellular Neural Networks with Mixed Time Delays
This paper is concerned with the exponential state estimation problem for a class of discrete-time fuzzy cellular neural networks with mixed time delays. The main purpose is to estimate the neuron states through available output measurements such that the dynamics of the estimation error are globally exponentially stable. By constructing a novel Lyapunov–Krasovskii functional containing a triple summation term, sufficient conditions are derived to guarantee the existence of the state estimator. The linear matrix inequality approach is employed for the first time to treat fuzzy cellular neural networks in the discrete-time case. Compared with existing conditions in M-matrix form, the results obtained in this paper are less conservative and can be checked readily by the MATLAB toolbox. Finally, numerical examples demonstrate the effectiveness of the proposed results.
A Nonlinear Projection Neural Network for Solving Interval Quadratic Programming Problems and Its Stability Analysis
This paper presents a nonlinear projection neural network for solving interval quadratic programs subject to box-set constraints in engineering applications. Based on the saddle point theorem, the equilibrium point of the proposed neural network is proved to be equivalent to the optimal solution of the interval quadratic optimization problem. By employing the Lyapunov function approach, the global exponential stability of the proposed neural network is analyzed. Two illustrative examples show the feasibility and efficiency of the proposed method.
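A minimal sketch of the projection-network idea for a box-constrained QP (illustrative data, not the paper's interval formulation or its specific network): the continuous-time dynamics dx/dt = P_Ω(x − α(Qx + c)) − x, whose equilibria are the KKT points, discretized by Euler's method:

```python
import numpy as np

# Box-constrained QP (made-up data):  min 1/2 x^T Q x + c^T x,  l <= x <= u
Q = np.array([[2.0, 0.0],
              [0.0, 2.0]])
c = np.array([-2.0, -4.0])
l, u = np.zeros(2), np.full(2, 1.5)

def proj(v):
    """Projection onto the box [l, u]."""
    return np.clip(v, l, u)

# Projection neural network:  dx/dt = proj(x - alpha*(Qx + c)) - x
x = np.zeros(2)
alpha, dt = 0.5, 0.1
for _ in range(500):
    x = x + dt * (proj(x - alpha * (Q @ x + c)) - x)

print(np.round(x, 3))  # converges to the solution [1.0, 1.5]
```

For this data the unconstrained minimizer is [1, 2]; the box clips the second coordinate at 1.5, and the network settles at [1, 1.5], which is exactly the fixed point of the projection map.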
Rapid-convergent nonlinear differentiator
A nonlinear differentiator suited for rapid convergence is presented, based on the singular perturbation technique. The design not only sufficiently reduces the chattering of the derivative estimate by introducing a continuous power function, but also improves the dynamical performance by adding linear correction terms to the nonlinear ones. Moreover, strong robustness is obtained by integrating the nonlinear terms with a linear filter. The merits of the rapid-convergent differentiator include excellent dynamical performance, sufficient noise suppression, avoidance of chattering, and independence from the system model. The theoretical results are confirmed by computer simulations and an experiment. Comment: 26 pages, 15 figures
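The paper's nonlinear design is not reproduced here; as a baseline for comparison, a standard linear high-gain differentiator (a classical singular-perturbation construction, with gains chosen for illustration) estimating the derivative of f(t) = sin t looks like:

```python
import numpy as np

# Linear high-gain differentiator (illustrative baseline, not the paper's
# nonlinear design): z1 tracks f(t), z2 tracks f'(t).
#   z1' = z2 + (a1/eps)   * (f - z1)
#   z2' =      (a2/eps^2) * (f - z1)
# Small eps gives fast convergence at the price of noise amplification,
# the trade-off the paper's nonlinear terms are meant to soften.
eps, a1, a2 = 0.02, 2.0, 1.0
dt, T = 0.0005, 3.0

z1, z2, t = 0.0, 0.0, 0.0
for _ in range(int(T / dt)):
    e = np.sin(t) - z1          # measurement error against f(t) = sin t
    z1 += dt * (z2 + a1 / eps * e)
    z2 += dt * (a2 / eps**2 * e)
    t += dt

# z2 approximates f'(T) = cos(T) up to an O(eps) bias
print(z2, np.cos(T))
```

The estimate exhibits the peaking transient typical of high-gain observers before settling; the chattering-free power-function correction described in the abstract targets precisely this kind of behavior.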