    Neurodynamic Optimization: towards Nonconvexity

    Novel Lagrange sense exponential stability criteria for time-delayed stochastic Cohen–Grossberg neural networks with Markovian jump parameters: A graph-theoretic approach

This paper concerns the issue of exponential stability in the Lagrange sense for a class of stochastic Cohen–Grossberg neural networks (SCGNNs) with Markovian jump parameters and mixed time delays. A systematic approach to constructing a global Lyapunov function for such SCGNNs is provided by combining the Lyapunov method with results from graph theory. Moreover, by using some inequality techniques in Lyapunov-type and coefficient-type theorems, we obtain two kinds of sufficient conditions ensuring global exponential stability (GES) in the Lagrange sense for the addressed SCGNNs. Finally, some examples with numerical simulations are given to demonstrate the effectiveness of the acquired results.
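For readers who want to experiment, the following is a minimal sketch of the kind of system studied here: an Euler–Maruyama simulation of a two-neuron stochastic Cohen–Grossberg network with a two-state Markovian jump parameter and a discrete delay. All coefficients, the amplification and behaved functions, and the diffusion term are illustrative assumptions, not the paper's; the final print is only a crude empirical check of Lagrange-type (ultimate) boundedness, not the paper's graph-theoretic criteria.

```python
# A minimal sketch (assumed coefficients, not the paper's construction):
# Euler-Maruyama simulation of a two-neuron stochastic Cohen-Grossberg
# network with a two-state Markovian jump r(t) and a discrete delay tau.
import numpy as np

rng = np.random.default_rng(0)
dt, T, tau = 1e-3, 10.0, 0.5
steps, lag = int(T / dt), int(tau / dt)

# Mode-dependent connection matrices C[r], D[r] (illustrative values).
C = np.array([[[-1.0, 0.2], [0.1, -1.2]],
              [[-0.8, 0.3], [0.2, -1.0]]])
D = 0.1 * np.ones((2, 2, 2))
Q = np.array([[-0.5, 0.5], [0.3, -0.3]])   # generator of the Markov chain

amp = lambda x: 1.0 + 0.5 / (1.0 + x**2)   # amplification a_i(x) > 0
beh = lambda x: 2.0 * x                    # behaved function b_i(x)
f = np.tanh                                # activation

x = np.zeros((steps + 1, 2))
x[:lag + 1] = rng.normal(0.0, 0.5, size=2) # constant initial history
r = 0
for k in range(lag, steps):
    # Markovian jump: switch mode with probability ~ q_{r,r'} dt
    if rng.random() < -Q[r, r] * dt:
        r = 1 - r
    drift = -amp(x[k]) * (beh(x[k]) - C[r] @ f(x[k]) - D[r] @ f(x[k - lag]))
    noise = 0.1 * x[k] * rng.normal(0.0, np.sqrt(dt), size=2)  # linear diffusion
    x[k + 1] = x[k] + drift * dt + noise

# Lagrange (ultimate-boundedness) check: trajectories should enter and stay in a ball.
print("max |x| over the last 20% of the horizon:",
      np.abs(x[int(0.8 * steps):]).max())
```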

    pth moment exponential stability of stochastic fuzzy Cohen–Grossberg neural networks with discrete and distributed delays

In this paper, stochastic fuzzy Cohen–Grossberg neural networks with discrete and distributed delays are investigated. By using a Lyapunov function and the Itô differential formula, some sufficient conditions for the pth moment exponential stability of such networks are established. An example is given to illustrate the feasibility of the main theoretical findings, and the paper ends with a brief conclusion.
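As a companion to the analysis, here is a hedged sketch of how pth-moment exponential stability can be probed numerically: Monte Carlo trajectories of an assumed scalar stochastic delay equation (not the paper's fuzzy model), with the decay rate of E|x(t)|^p fitted from the tail of the sample moment.

```python
# A minimal sketch (assumed model): Monte Carlo estimate of the pth-moment
# decay rate for the scalar stochastic delay equation
#   dx = (-a x + b tanh(x(t - tau))) dt + c x dW,
# pth-moment exponential stability means E|x(t)|^p <= K exp(-lam t), lam > 0.
import numpy as np

rng = np.random.default_rng(1)
p, a, b, c = 2, 2.0, 0.5, 0.3
dt, T, tau, n_paths = 1e-3, 8.0, 0.2, 2000
steps, lag = int(T / dt), int(tau / dt)

x = np.ones((n_paths, steps + 1))          # constant initial history x = 1
for k in range(lag, steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    x[:, k + 1] = (x[:, k]
                   + (-a * x[:, k] + b * np.tanh(x[:, k - lag])) * dt
                   + c * x[:, k] * dW)

moment = np.mean(np.abs(x) ** p, axis=0)   # sample estimate of E|x(t)|^p
t = np.arange(steps + 1) * dt
# Fit log E|x|^p ~ log K - lam * t on the tail; lam > 0 indicates exponential decay.
lam = -np.polyfit(t[steps // 2:], np.log(moment[steps // 2:] + 1e-300), 1)[0]
print(f"estimated {p}th-moment decay rate: {lam:.3f}")
```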

    Global exponential convergence of delayed inertial Cohen–Grossberg neural networks

In this paper, the exponential convergence of delayed inertial Cohen–Grossberg neural networks (CGNNs) is studied. Two methods are adopted to analyze the inertial CGNNs: in the first, the system is rewritten as two first-order differential equations via a variable substitution; the second, nonreduced-order method leaves the order of the system unchanged. By constructing appropriate Lyapunov functions and using inequality techniques, sufficient conditions are obtained ensuring that the model converges exponentially to a ball at a prespecified convergence rate. Finally, two simulation examples illustrate the validity of the theoretical results.
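The reduced-order method mentioned above is easy to make concrete. The sketch below applies the standard substitution y = x' + ξx to an illustrative two-neuron inertial CGNN (all coefficients assumed, with 0 < ξ < α), yielding two first-order equations that are then integrated with a forward Euler step.

```python
# A minimal sketch (illustrative coefficients, not the paper's): the variable
# substitution y = x' + xi * x turns the second-order inertial CGNN
#   x'' = -alpha x' - a(x) (b(x) - C f(x) - D f(x(t - tau)))
# into the first-order pair
#   x' = y - xi x,
#   y' = (xi - alpha) y - (xi^2 - alpha xi) x - a(x)(b(x) - C f(x) - D f(x_tau)).
import numpy as np

alpha, xi, dt, T, tau = 3.0, 1.0, 1e-3, 10.0, 0.3
steps, lag = int(T / dt), int(tau / dt)
C = np.array([[-1.0, 0.3], [0.2, -1.1]])
D = 0.1 * np.eye(2)
amp = lambda x: 1.0 + 0.2 * np.cos(x)      # a_i(x), bounded away from 0
beh = lambda x: 1.5 * x                    # b_i(x)
f = np.tanh

x = np.zeros((steps + 1, 2))
x[:lag + 1] = [1.0, -0.5]                  # constant initial history
y = xi * x[lag]                            # y(0) = x'(0) + xi x(0), with x'(0) = 0
for k in range(lag, steps):
    F = amp(x[k]) * (beh(x[k]) - C @ f(x[k]) - D @ f(x[k - lag]))
    x[k + 1] = x[k] + (y - xi * x[k]) * dt
    y = y + ((xi - alpha) * y - (xi**2 - alpha * xi) * x[k] - F) * dt

print("||x(T)||:", np.linalg.norm(x[-1]))  # should shrink toward the target ball
```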

    An Augmented Lagrangian Neural Network for the Fixed-Time Solution of Linear Programming

In this paper, a recurrent neural network based on the augmented Lagrangian method is proposed for solving linear programming problems. The design of the network rests on the Karush-Kuhn-Tucker (KKT) optimality conditions and on a function that guarantees fixed-time convergence. To this end, slack variables transform the initial linear programming problem into an equivalent one containing only equality constraints. The activation functions of the neural network are then designed as fixed-time controllers that enforce the KKT optimality conditions. Simulation results on an academic example and an application example show the effectiveness of the neural network.
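A hedged sketch of this construction: the LP is converted to equality form with slack variables, and a projection-type neural ODE flows along the augmented-Lagrangian gradient. The activation φ(u) = sign(u)(|u|^0.5 + |u|^1.5) below is an assumed stand-in for the paper's fixed-time controllers, and the LP instance is illustrative.

```python
# A minimal sketch (not the paper's exact network): a projection-type neural
# ODE for the LP  min c^T x  s.t.  A x <= b, x >= 0.  Slack variables s turn
# the inequalities into equalities, as in the paper; phi below is an assumed
# fixed-time-style activation, not the paper's controller.
import numpy as np

# Illustrative LP: min -x1 - 2 x2  s.t.  x1 + x2 <= 4,  x1 <= 3,  x >= 0.
A = np.array([[1.0, 1.0, 1.0, 0.0],      # state [x1, x2, s1, s2], slacks appended
              [1.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 3.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])

phi = lambda u: np.sign(u) * (np.abs(u) ** 0.5 + np.abs(u) ** 1.5)

rho, dt = 5.0, 1e-3
z = np.zeros(4)                           # primal state (x and slacks)
lam = np.zeros(2)                         # multipliers for A z = b
for _ in range(40000):
    grad = c + A.T @ lam + rho * A.T @ (A @ z - b)   # augmented-Lagrangian gradient
    z = np.maximum(z + dt * phi(-grad), 0.0)         # projected primal flow, z >= 0
    lam = lam + dt * phi(A @ z - b)                  # dual ascent on the residual
print("x* ~", z[:2], " objective ~", c[:2] @ z[:2])  # expect x* near (0, 4)
```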

    A recurrent neural network applied to optimal motion control of mobile robots with physical constraints

Conventional solutions for the motion control of mobile robots in the unified recurrent neural network (RNN) framework, such as the conventional recurrent neural network (CRNN) and the gradient recurrent neural network (GRNN), struggle to account simultaneously for criteria optimization and physical constraints. This limitation may damage mobile robots that exceed their physical constraints during task execution. To overcome it, this paper proposes a novel inequality- and equality-constrained optimization RNN (IECORNN) for the motion control of mobile robots. First, the real-time motion control problem with both criteria optimization and physical constraints is converted into a real-time equality system by leveraging the Lagrange multiplier rule. Then, the detailed design process for the proposed IECORNN is presented together with the developed neural network architecture. Afterward, theoretical analyses of the equivalence of the problem conversion, the global stability, and the exponential convergence property are rigorously provided. Finally, two numerical simulation verifications and extensive comparisons with existing RNNs, e.g., the CRNN and the GRNN, on two different path-tracking applications for a mobile robot demonstrate the effectiveness and superiority of the proposed IECORNN for real-time motion control under both criteria optimization and physical constraints. This work makes progress in both theory and practice and fills a gap in the unified RNN framework for motion control of mobile robots.
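To illustrate the general idea (not the IECORNN itself), the sketch below uses the Lagrange multiplier rule to turn a minimum-effort tracking problem for an assumed redundant omnidirectional platform into a saddle-point equality system, solved online by a simple primal-dual RNN; a box projection on the actuation stands in for the physical limits.

```python
# A minimal sketch (assumed kinematics, not the paper's IECORNN): a Lagrange-
# multiplier RNN tracking a circular path with a redundant platform p' = J u.
# The QP  min 1/2 ||u||^2  s.t.  J u = v_d  is solved by the saddle-point
# dynamics  eps u' = -(u + J^T lam),  eps lam' = J u - v_d,
# integrated together with the robot kinematics.
import numpy as np

J = np.array([[1.0, 0.0, 0.7],           # assumed wheel-to-velocity Jacobian
              [0.0, 1.0, 0.7]])
k, eps, dt, T = 5.0, 1e-2, 1e-4, 6.28
p = np.array([1.2, 0.0])                 # start off the unit circle
u, lam = np.zeros(3), np.zeros(2)
for step in range(int(T / dt)):
    t = step * dt
    pd = np.array([np.cos(t), np.sin(t)])             # desired circle
    vd = np.array([-np.sin(t), np.cos(t)]) + k * (pd - p)
    u = u + (dt / eps) * (-(u + J.T @ lam))           # primal descent
    u = np.clip(u, -2.0, 2.0)                         # assumed physical limits
    lam = lam + (dt / eps) * (J @ u - vd)             # dual ascent
    p = p + dt * (J @ u)                              # robot kinematics
print("final tracking error:", np.linalg.norm(p - pd))
```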

    Implicit regularization and momentum algorithms in nonlinear adaptive control and prediction

Stable concurrent learning and control of dynamical systems is the subject of adaptive control. Despite being an established field with many practical applications and a rich theory, much of the development in adaptive control for nonlinear systems revolves around a few key algorithms. By exploiting strong connections between classical adaptive nonlinear control techniques and recent progress in optimization and machine learning, we show that there is considerable untapped potential in algorithm development for both adaptive nonlinear control and adaptive dynamics prediction. We first introduce first-order adaptation laws inspired by natural gradient descent and mirror descent. We prove that when there are multiple dynamics consistent with the data, these non-Euclidean adaptation laws implicitly regularize the learned model. Local geometry imposed during learning may thus be used to select parameter vectors, out of the many that will achieve perfect tracking or prediction, with desired properties such as sparsity. We apply this result to regularized dynamics predictor and observer design, and as concrete examples consider Hamiltonian systems, Lagrangian systems, and recurrent neural networks. We subsequently develop a variational formalism based on the Bregman Lagrangian to define adaptation laws with momentum, applicable to linearly parameterized systems and to nonlinearly parameterized systems satisfying monotonicity or convexity requirements. We show that the Euler-Lagrange equations for the Bregman Lagrangian lead to natural gradient and mirror descent-like adaptation laws with momentum, and we recover their first-order analogues in the infinite-friction limit. We illustrate our analyses with simulations demonstrating our theoretical results.
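As a small illustration of the paper's first-order non-Euclidean adaptation laws, the sketch below runs a mirror-descent adaptation law on a scalar plant with a sparse unknown parameter vector. The plant, the regressors, the mirror map ψ(θ) = (1/p)‖θ‖_p^p with p = 1.5, and all gains are assumptions chosen for the demo, not values from the paper.

```python
# A minimal sketch (assumed plant and gains) of a mirror-descent adaptation law
# for the scalar plant  x' = Y(x)^T a + u  with unknown sparse a. The law
# evolves z = grad_psi(a_hat) via  z' = -gamma Y(x) e, which biases a_hat
# toward sparse parameter vectors among those achieving perfect tracking.
import numpy as np

a_true = np.array([0.0, 0.0, 2.0])       # sparse ground truth
Y = lambda x: np.array([np.sin(x), x**2, np.tanh(x)])  # assumed regressors

p, gamma, k, dt = 1.5, 20.0, 4.0, 1e-4
grad_psi_inv = lambda z: np.sign(z) * np.abs(z) ** (1.0 / (p - 1.0))

x, z = 0.5, np.zeros(3)                  # z = grad_psi(a_hat), so a_hat(0) = 0
for step in range(int(20.0 / dt)):
    t = step * dt
    xd, xd_dot = np.sin(t), np.cos(t)    # reference trajectory
    Yx = Y(x)
    e = x - xd
    a_hat = grad_psi_inv(z)
    u = xd_dot - k * e - Yx @ a_hat      # certainty-equivalence control
    x = x + dt * (Yx @ a_true + u)       # plant
    z = z - dt * gamma * Yx * e          # mirror-descent adaptation law

# Expect a_hat to concentrate on the third component (the sparse solution).
print("a_hat =", np.round(grad_psi_inv(z), 3), " true a =", a_true)
```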