Neural Lyapunov Control
We propose new methods for learning control policies and neural network
Lyapunov functions for nonlinear control problems, with provable guarantee of
stability. The framework consists of a learner that attempts to find the
control and Lyapunov functions, and a falsifier that finds counterexamples to
quickly guide the learner towards solutions. The procedure terminates when no
counterexample is found by the falsifier, in which case the controlled
nonlinear system is provably stable. The approach significantly simplifies the
process of Lyapunov control design, provides end-to-end correctness guarantee,
and can obtain much larger regions of attraction than existing methods such as
LQR and SOS/SDP. We show experiments on how the new methods obtain high-quality
solutions for challenging control problems. Comment: NeurIPS 2019
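The learner/falsifier loop described in this abstract can be caricatured in a few lines. The sketch below is purely illustrative, assuming a scalar closed-loop system x' = -x and a quadratic candidate V(x) = p·x²; random sampling stands in for a real falsifier (which would use formal verification to find counterexamples), and all function names are assumptions, not the paper's API.

```python
# Toy counterexample-guided loop for learning a Lyapunov certificate.
# Assumed setup: scalar dynamics f(x) = -x, candidate V(x) = p * x**2.
import random

random.seed(0)  # make the sampling-based falsifier deterministic

def f(x):
    """Closed-loop dynamics (assumed known and stable)."""
    return -1.0 * x

def lie_derivative(p, x):
    """dV/dt along trajectories: V'(x) * f(x) = 2 p x f(x)."""
    return 2.0 * p * x * f(x)

def falsifier(p, n_samples=1000, radius=5.0):
    """Search for a state where the Lyapunov conditions fail.
    A real falsifier would use an SMT/verification tool, not sampling."""
    for _ in range(n_samples):
        x = random.uniform(-radius, radius)
        if x != 0.0 and (p * x * x <= 0.0 or lie_derivative(p, x) >= 0.0):
            return x          # counterexample found
    return None               # no violation found: candidate accepted

def learner(p, x_ce, lr=0.1):
    """Nudge the candidate parameter to repair the counterexample.
    A real learner would run gradient descent on a loss over all
    accumulated counterexamples."""
    return p + lr * abs(x_ce)

p = -1.0                      # deliberately bad initial candidate
for _ in range(100):
    x_ce = falsifier(p)
    if x_ce is None:
        break                 # procedure terminates: system certified
    p = learner(p, x_ce)

print(p > 0.0)                # a valid quadratic certificate needs p > 0
```

The loop mirrors the abstract's structure: the falsifier drives the learner with counterexamples, and termination (no counterexample found) is the stability certificate.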
Stability and synchronization of discrete-time Markovian jumping neural networks with mixed mode-dependent time delays
Copyright [2009] IEEE.
In this paper, we introduce a new class of discrete-time neural networks (DNNs) with Markovian jumping parameters as well as mode-dependent mixed time delays (both discrete and distributed time delays). Specifically, the parameters of the DNNs are subject to the switching from one to another at different times according to a Markov chain, and the mixed time delays consist of both discrete and distributed delays that are dependent on the Markovian jumping mode. We first deal with the stability analysis problem of the addressed neural networks. A special inequality is developed to account for the mixed time delays in the discrete-time setting, and a novel Lyapunov-Krasovskii functional is put forward to reflect the mode-dependent time delays. Sufficient conditions are established in terms of linear matrix inequalities (LMIs) that guarantee the stochastic stability. We then turn to the synchronization problem among an array of identical coupled Markovian jumping neural networks with mixed mode-dependent time delays. By utilizing the Lyapunov stability theory and the Kronecker product, it is shown that the addressed synchronization problem is solvable if several LMIs are feasible.
Hence, unlike the commonly used matrix-norm theories (such as the M-matrix method), a unified LMI approach is developed to solve both the stability analysis and synchronization problems for the class of neural networks under investigation; the LMIs can be solved easily using the Matlab LMI toolbox. Two numerical examples are presented to illustrate the usefulness and effectiveness of the main results obtained.
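As a minimal single-mode illustration of the kind of Lyapunov condition behind such LMIs (ignoring the Markovian jumps and delays, and using SciPy in place of the Matlab LMI toolbox mentioned above), a discrete-time system x(k+1) = A x(k) is stable iff A^T P A - P < 0 for some P > 0, which can be checked by solving the discrete Lyapunov equation. The matrices below are illustrative examples, not taken from the paper.

```python
# Certify discrete-time stability via a quadratic Lyapunov function
# V(x) = x^T P x, obtained from the discrete Lyapunov equation.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.2],
              [0.0, 0.8]])      # Schur-stable example matrix
Q = np.eye(2)                   # any positive definite right-hand side

# Solve A^T P A - P = -Q  (SciPy solves a X a^T - X + Q = 0, hence A.T)
P = solve_discrete_lyapunov(A.T, Q)

eigs = np.linalg.eigvalsh(P)
print(np.all(eigs > 0))         # P > 0: V(x) = x^T P x certifies stability
```

The paper's actual conditions are mode-dependent LMIs coupled across the Markov chain's modes, which require a semidefinite-programming solver rather than a single Lyapunov equation; this sketch only shows the underlying one-mode inequality.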
Small gain theorems for large scale systems and construction of ISS Lyapunov functions
We consider interconnections of n nonlinear subsystems in the input-to-state
stability (ISS) framework. For each subsystem an ISS Lyapunov function is given
that treats the other subsystems as independent inputs. A gain matrix is used
to encode the mutual dependencies of the systems in the network. Under a small
gain assumption on the monotone operator induced by the gain matrix, a locally
Lipschitz continuous ISS Lyapunov function is obtained constructively for the
entire network by appropriately scaling the individual Lyapunov functions for
the subsystems. The results are obtained in a general formulation of ISS; the
cases of summation, maximization, and separation with respect to external gains
are obtained as corollaries. Comment: provisionally accepted by SIAM Journal on Control and Optimization
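In the special case of linear gains, the small-gain condition on the gain matrix reduces to a spectral-radius test, and a scaling vector for the network Lyapunov function can be read off the Perron eigenvector. The sketch below is a hedged illustration under that linear-gain assumption; the 3-subsystem gain matrix is invented for the demo.

```python
# Small-gain check for a network of 3 subsystems with *linear* gains.
# Gamma[i, j] is the gain from subsystem j to subsystem i.
import numpy as np

Gamma = np.array([[0.0, 0.3, 0.1],
                  [0.2, 0.0, 0.4],
                  [0.1, 0.2, 0.0]])

# Small-gain condition for linear gains: spectral radius rho(Gamma) < 1.
rho = max(abs(np.linalg.eigvals(Gamma)))
print(rho < 1.0)

# The Perron eigenvector of the (irreducible, nonnegative) gain matrix
# gives a valid scaling s: Gamma @ s = rho * s < s componentwise since
# rho < 1, so V(x) = max_i V_i(x_i) / s_i scales the individual Lyapunov
# functions into one for the whole network (max-formulation).
eigvals, eigvecs = np.linalg.eig(Gamma)
s = np.abs(eigvecs[:, np.argmax(np.abs(eigvals))].real)
print(np.all(Gamma @ s < s))
```

The paper treats general monotone gain operators, not just linear ones; there the scaling is built from a path in the so-called Omega-set rather than an eigenvector, so this linear-gain shortcut is only a special case.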
Variable neural networks for adaptive control of nonlinear systems
This paper is concerned with the adaptive control of continuous-time nonlinear dynamical systems using neural networks. A novel neural network architecture, referred to as a variable neural network, is proposed and shown to be useful in approximating the unknown nonlinearities of dynamical systems. In the variable neural networks, the number of basis functions can be either increased or decreased with time, according to specified design strategies, so that the network will neither overfit nor underfit the data set. Based on the Gaussian radial basis function (GRBF) variable neural network, an adaptive control scheme is presented. The location of the centers and the determination of the widths of the GRBFs in the variable neural network are analyzed to make a compromise between orthogonality and smoothness. The weight-adaptive laws developed using the Lyapunov synthesis approach guarantee the stability of the overall control scheme, even in the presence of modeling errors. The tracking errors converge to the required accuracy through the adaptive control algorithm derived by combining the variable neural network and Lyapunov synthesis techniques. The operation of an adaptive control scheme using the variable neural network is demonstrated using two simulated examples.
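The grow-the-basis idea can be sketched with a network that adds a Gaussian basis function wherever the local approximation error is large. This is only an illustrative sketch: the threshold, width, and LMS-style weight update below are assumptions chosen for the demo, not the paper's Lyapunov-derived adaptive laws, and no pruning rule is shown.

```python
# Minimal "variable" Gaussian RBF network: the basis set grows when the
# local error exceeds a threshold; otherwise existing weights adapt.
import math

class VariableRBF:
    def __init__(self, width=0.5, add_threshold=0.2, lr=0.3):
        self.centers = []          # GRBF center locations (grows over time)
        self.weights = []          # one output weight per basis function
        self.width = width
        self.add_threshold = add_threshold
        self.lr = lr

    def _phi(self, x):
        """Gaussian basis activations at input x."""
        return [math.exp(-((x - c) / self.width) ** 2) for c in self.centers]

    def predict(self, x):
        return sum(w * p for w, p in zip(self.weights, self._phi(x)))

    def update(self, x, y):
        err = y - self.predict(x)
        if abs(err) > self.add_threshold:
            # Grow: place a new center at x; the new unit absorbs the residual.
            self.centers.append(x)
            self.weights.append(err)
        else:
            # Adapt existing weights (simple LMS step; the paper instead uses
            # Lyapunov-synthesized adaptive laws with stability guarantees).
            for i, p in enumerate(self._phi(x)):
                self.weights[i] += self.lr * err * p

net = VariableRBF()
for _ in range(5):                         # a few sweeps over the data
    for i in range(21):
        x = -3.0 + 0.3 * i
        net.update(x, math.sin(x))         # learn an unknown nonlinearity

errs = [abs(net.predict(-3.0 + 0.3 * i) - math.sin(-3.0 + 0.3 * i))
        for i in range(21)]
print(len(net.centers), max(errs) < 1.0)
```

In the control setting described above, such a network would approximate the unknown system nonlinearity online, with the weight updates replaced by the Lyapunov-derived laws so that stability of the closed loop is preserved while the basis set varies.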