    Universality of random matrix dynamics

    We discuss the concept of the width-to-spacing ratio, which plays the central role in the description of local spectral statistics of evolution operators in multiplicative and additive stochastic processes for random matrices. We show that the local spectral properties are highly universal and depend on a single parameter: the width-to-spacing ratio. We discuss a duality between the kernel for Dysonian Brownian motion and the kernel for the Lyapunov matrix for the product of Ginibre matrices. Comment: 15 pages, 3 figures
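    For readers skimming this listing: the Dysonian Brownian motion the abstract refers to is standard material, not taken from this paper. A minimal statement of the eigenvalue SDE, with the width-to-spacing ratio understood, on one common reading, as the spread acquired by a single eigenvalue measured against the local mean level spacing:

```latex
% Dyson Brownian motion for the eigenvalues \lambda_1, \dots, \lambda_N of an
% N x N matrix: independent diffusion plus pairwise level repulsion.
\mathrm{d}\lambda_i
  = \sqrt{\frac{2}{\beta N}}\,\mathrm{d}B_i
  + \frac{1}{N}\sum_{j \ne i} \frac{\mathrm{d}t}{\lambda_i - \lambda_j},
  \qquad i = 1, \dots, N,
% with beta = 1, 2, 4 for the orthogonal / unitary / symplectic classes.
```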

    Neural Lyapunov Control

    We propose new methods for learning control policies and neural network Lyapunov functions for nonlinear control problems, with provable guarantees of stability. The framework consists of a learner that attempts to find the control and Lyapunov functions, and a falsifier that finds counterexamples to quickly guide the learner toward solutions. The procedure terminates when the falsifier finds no counterexample, in which case the controlled nonlinear system is provably stable. The approach significantly simplifies the process of Lyapunov control design, provides end-to-end correctness guarantees, and can obtain much larger regions of attraction than existing methods such as LQR and SOS/SDP. Experiments show how the new methods obtain high-quality solutions for challenging control problems. Comment: NeurIPS 2019
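    A minimal sketch of the learner/falsifier loop the abstract describes, under loud assumptions: the paper verifies the Lyapunov conditions with an SMT solver (dReal) and also learns a controller, whereas this sketch trains only a Lyapunov function for a fixed damped pendulum and uses plain random sampling as the falsifier; the dynamics, network size, and gains below are all illustrative, not the paper's setup.

```python
import torch
import torch.nn as nn

def f(x):
    # Damped pendulum dynamics, x = (theta, omega); a stand-in system.
    theta, omega = x[:, 0:1], x[:, 1:2]
    return torch.cat([omega, -torch.sin(theta) - 0.5 * omega], dim=1)

V = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.Adam(V.parameters(), lr=1e-2)

def lyapunov_risk(x):
    # Hinge penalties on the Lyapunov conditions: V > 0, dV/dt < 0, V(0) = 0.
    x = x.requires_grad_(True)
    v = V(x)
    grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    lie = (grad_v * f(x)).sum(dim=1, keepdim=True)  # dV/dt along trajectories
    v0 = V(torch.zeros(1, 2))
    return (torch.relu(-v) + torch.relu(lie)).mean() + v0.pow(2).sum()

def falsifier(n=4096):
    # Random-sampling stand-in for the SMT falsifier used in the paper.
    x = ((torch.rand(n, 2) - 0.5) * 4.0).requires_grad_(True)
    v = V(x)
    grad_v = torch.autograd.grad(v.sum(), x)[0]
    lie = (grad_v * f(x)).sum(dim=1)
    bad = (v.squeeze(1) <= 0) | (lie >= 0)
    bad = bad & (x.norm(dim=1) > 1e-2)  # exclude the equilibrium itself
    return x.detach()[bad]

x_train = (torch.rand(512, 2) - 0.5) * 4.0
for it in range(2000):
    opt.zero_grad()
    loss = lyapunov_risk(x_train.clone())
    loss.backward()
    opt.step()
    if it % 200 == 0:
        cex = falsifier()
        if len(cex) == 0:
            break  # no counterexample found: candidate V passes the check
        x_train = torch.cat([x_train, cex[:128]])  # counterexamples guide the learner
```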

    An application of Gaussian radial basis function neural networks to the control of a nonlinear multi-link robotic manipulator

    The theory of Gaussian radial basis function neural networks is developed along with a stable adaptive weight-training law founded upon Lyapunov stability theory. This is applied to the control of a nonlinear multi-link robotic manipulator for the general case of N links. Simulations of a two-link system are performed and demonstrate the derived principles.
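    A sketch of the kind of Lyapunov-based adaptive law the abstract builds on, for a toy scalar plant rather than the N-link manipulator; the plant, gains, and RBF grid here are assumptions, not the paper's setup.

```python
import numpy as np

centers = np.linspace(-2.0, 2.0, 11)  # fixed grid of Gaussian RBF centers
width = 0.5
gamma = 5.0                           # adaptation gain
k = 4.0                               # feedback gain
dt = 1e-3

def phi(x):
    # Gaussian RBF feature vector evaluated at state x.
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

def unknown_nonlinearity(x):
    return np.sin(2 * x) + 0.3 * x ** 2  # what the network must learn online

w = np.zeros_like(centers)            # adaptive weights
x, x_ref = 0.5, 0.0
for _ in range(20000):
    e = x - x_ref                     # tracking error
    f_hat = w @ phi(x)                # network estimate of the nonlinearity
    u = -f_hat - k * e                # cancel the estimate + stabilizing feedback
    x += (unknown_nonlinearity(x) + u) * dt
    # Lyapunov-motivated law: with V = e^2/2 + |w* - w|^2/(2*gamma),
    # choosing w_dot = gamma * e * phi(x) gives V_dot = -k e^2 <= 0.
    w += gamma * e * phi(x) * dt
```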

    Variable neural networks for adaptive control of nonlinear systems

    This paper is concerned with the adaptive control of continuous-time nonlinear dynamical systems using neural networks. A novel neural network architecture, referred to as a variable neural network, is proposed and shown to be useful in approximating the unknown nonlinearities of dynamical systems. In the variable neural networks, the number of basis functions can be either increased or decreased with time, according to specified design strategies, so that the network will neither overfit nor underfit the data set. Based on the Gaussian radial basis function (GRBF) variable neural network, an adaptive control scheme is presented. The location of the centers and the determination of the widths of the GRBFs in the variable neural network are analyzed to make a compromise between orthogonality and smoothness. The weight-adaptive laws developed using the Lyapunov synthesis approach guarantee the stability of the overall control scheme, even in the presence of modeling errors. The tracking errors converge to the required accuracy through the adaptive control algorithm derived by combining the variable neural network and Lyapunov synthesis techniques. The operation of an adaptive control scheme using the variable neural network is demonstrated using two simulated examples.
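    A sketch of the grow/prune mechanism the abstract describes (add a basis function where the error is large, drop units whose weights stay negligible); the 1-D target, thresholds, and learning rate are illustrative assumptions, and the paper's centre-placement and width analysis is not reproduced.

```python
import numpy as np

width = 0.4
centers, weights = [0.0], [0.0]

def predict(x):
    return sum(w * np.exp(-((x - c) ** 2) / (2 * width ** 2))
               for c, w in zip(centers, weights))

def update(x, y, lr=0.2, grow_tol=0.5, prune_tol=1e-3):
    err = y - predict(x)
    # Grow: add a unit at the sample if the error is large and no unit is near.
    if abs(err) > grow_tol and min(abs(x - c) for c in centers) > width:
        centers.append(x)
        weights.append(0.0)
    # Gradient step on the output weights.
    for j, c in enumerate(centers):
        weights[j] += lr * err * np.exp(-((x - c) ** 2) / (2 * width ** 2))
    # Prune: drop units whose weights stay negligible (keep at least one).
    keep = [j for j in range(len(centers))
            if abs(weights[j]) > prune_tol or len(centers) == 1]
    centers[:] = [centers[j] for j in keep]
    weights[:] = [weights[j] for j in keep]

rng = np.random.default_rng(0)
for _ in range(3000):
    x = rng.uniform(-2, 2)
    update(x, np.sin(3 * x))       # network size now tracks the data's demands
```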

    Stable adaptive control for a class of nonlinear systems without use of a supervisory term in the control law

    In this paper, a direct adaptive control scheme for a class of nonlinear systems is proposed. The architecture employs a Gaussian radial basis function (RBF) network to construct an adaptive controller. The parameters of the adaptive controller are adapted according to a law derived using Lyapunov stability theory. The centres of the RBF network are adapted online using the k-means algorithm. Asymptotic Lyapunov stability is established without the use of a supervisory (compensatory) term in the control law, with the tracking errors converging to a neighbourhood of the origin. Finally, a simulation is provided to explore the feasibility of the proposed neural controller design method.
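    The online k-means centre adaptation mentioned in the abstract admits a compact sketch (MacQueen's sequential update, where each sample pulls its nearest centre toward it); the data stream, dimensionality, and k below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 6
centers = rng.uniform(-1, 1, size=(k, 2))  # initial RBF centres
counts = np.zeros(k)

def kmeans_step(x):
    # Assign the sample to its nearest centre, then move that centre so it
    # remains the running mean of all samples assigned to it so far.
    j = np.argmin(np.linalg.norm(centers - x, axis=1))
    counts[j] += 1
    centers[j] += (x - centers[j]) / counts[j]
    return j

for _ in range(5000):
    kmeans_step(rng.normal(size=2))  # centres drift toward the data's clusters
```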

    Learning the Structure of Deep Sparse Graphical Models

    Full text link
    Deep belief networks are a powerful way to model complex probability distributions. However, learning the structure of a belief network, particularly one with hidden units, is difficult. The Indian buffet process has been used as a nonparametric Bayesian prior on the directed structure of a belief network with a single infinitely wide hidden layer. In this paper, we introduce the cascading Indian buffet process (CIBP), which provides a nonparametric prior on the structure of a layered, directed belief network that is unbounded in both depth and width, yet allows tractable inference. We use the CIBP prior with the nonlinear Gaussian belief network so that each unit can additionally vary its behavior between discrete and continuous representations. We provide Markov chain Monte Carlo algorithms for inference in these belief networks and explore the structures learned on several image data sets. Comment: 20 pages, 6 figures, AISTATS 2010, Revised
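    A sketch of a single-layer Indian buffet process draw, the building block the CIBP cascades layer by layer (the cascading step itself is omitted here; alpha and the number of "customers" are illustrative).

```python
import numpy as np

def sample_ibp(n_customers=10, alpha=2.0, seed=0):
    # Standard IBP "restaurant" construction: customer i samples each existing
    # dish k with probability m_k / i, then tries Poisson(alpha / i) new dishes.
    rng = np.random.default_rng(seed)
    dish_counts = []   # m_k: how many customers have taken dish k
    rows = []
    for i in range(1, n_customers + 1):
        row = [rng.random() < m / i for m in dish_counts]
        for k, taken in enumerate(row):
            dish_counts[k] += taken
        n_new = rng.poisson(alpha / i)
        dish_counts.extend([1] * n_new)
        rows.append(row + [True] * n_new)
    # Pad rows to a binary matrix Z (customers x dishes); in the belief-network
    # reading, Z encodes which parent units each unit connects to.
    Z = np.zeros((n_customers, len(dish_counts)), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

print(sample_ibp())
```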