
    Lipschitz constant estimation for 1D convolutional neural networks

    In this work, we propose a dissipativity-based method for Lipschitz constant estimation of 1D convolutional neural networks (CNNs). In particular, we analyze the dissipativity properties of convolutional, pooling, and fully connected layers, making use of incremental quadratic constraints for nonlinear activation functions and pooling operations. The Lipschitz constant of the concatenation of these mappings is then estimated by solving a semidefinite program which we derive from dissipativity theory. To make our method as efficient as possible, we take the structure of convolutional layers into account, realizing these finite impulse response filters as causal dynamical systems in state space and carrying out the dissipativity analysis for the state-space realizations. The examples we provide show that our Lipschitz bounds are advantageous in terms of accuracy and scalability.
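For context, the simplest Lipschitz estimate that SDP-based methods like the one above improve upon is the product of the layers' spectral norms. The sketch below computes this naive bound for a two-layer map with 1-Lipschitz activations; the layer sizes are made-up illustrative values, and this is not the paper's dissipativity method.

```python
# Crude Lipschitz upper bound for a two-layer network W2 o relu o W1:
# since ReLU is 1-Lipschitz, L <= ||W2||_2 * ||W1||_2.
# This naive product bound is the baseline that SDP-based estimators tighten.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((16, 8))   # hypothetical layer weights
W2 = rng.standard_normal((4, 16))

def spectral_norm(W):
    # Largest singular value = Lipschitz constant of the linear map x -> W @ x.
    return np.linalg.svd(W, compute_uv=False)[0]

naive_bound = spectral_norm(W2) * spectral_norm(W1)
print(naive_bound)
```

By submultiplicativity, this bound is never smaller than the spectral norm of the composed linear map `W2 @ W1`, which is why tighter, structure-aware certificates are worth the extra SDP machinery.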

    A structure exploiting SDP solver for robust controller synthesis

    In this paper, we revisit structure exploiting SDP solvers dedicated to the solution of Kalman-Yakubovich-Popov semidefinite programs (KYP-SDPs). These SDPs inherit their name from the KYP lemma and play a crucial role in, e.g., robustness analysis, robust state feedback synthesis, and robust estimator synthesis for uncertain dynamical systems. Off-the-shelf SDP solvers require O(n^6) arithmetic operations per Newton step to solve this class of problems, where n is the state dimension of the dynamical system under consideration. Specialized solvers reduce this complexity to O(n^3). However, existing specialized solvers do not include semidefinite constraints on the Lyapunov matrix, which is necessary for controller synthesis. In this paper, we show how to include such constraints in structure exploiting KYP-SDP solvers. Comment: Submitted to Conference on Decision and Control, copyright owned by IEEE.
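The Lyapunov-type term A^T P + P A at the heart of KYP-SDPs can be illustrated without an SDP solver in the special case of a Lyapunov equation. The sketch below, which assumes SciPy is available, solves A^T P + P A = -Q for a Hurwitz A and checks that P is positive definite; this is only the equality special case of the LMIs treated in the paper, shown for intuition.

```python
# Stability certificate via the continuous-time Lyapunov equation
# A^T P + P A = -Q: for a Hurwitz A and Q > 0, the solution P is
# positive definite. This is the simplest relative of the KYP-type
# LMIs that the structure exploiting solvers above target.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])   # Hurwitz: eigenvalues -1 and -3
Q = np.eye(2)

# scipy solves A X + X A^H = Q, so pass A^T and -Q to match the
# convention A^T P + P A = -Q used above.
P = solve_continuous_lyapunov(A.T, -Q)

print(np.linalg.eigvalsh(P))  # all positive: P is a Lyapunov certificate
```

In the KYP-SDP setting, P additionally enters frequency-domain multiplier terms and, for synthesis, carries an explicit semidefinite constraint; the paper's contribution is keeping the O(n^3) per-step cost while adding that constraint.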

    Synthesis of constrained robust feedback policies and model predictive control

    In this work, we develop a method based on robust control techniques to synthesize robust time-varying state-feedback policies for finite, infinite, and receding horizon control problems subject to convex quadratic state and input constraints. To ensure constraint satisfaction of our policy, we employ (initial state)-to-peak gain techniques. Based on this idea, we formulate linear matrix inequality conditions which are simultaneously convex in the parameters of an affine control policy, a Lyapunov function along the trajectory, and multiplier variables for the uncertainties in a time-varying linear fractional transformation model. In our experiments, this approach is less conservative than standard tube-based robust model predictive control methods. Comment: Extended version of a contribution to be submitted to the European Control Conference 202
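The (initial state)-to-peak gain mentioned above bounds the largest state norm a trajectory can reach from a unit-norm initial condition. The sketch below estimates that gain by sampling for a toy stable discrete-time system with an arbitrary example matrix; the paper instead certifies such bounds exactly via LMIs, which this simulation does not replace.

```python
# Monte-Carlo estimate of the (initial state)-to-peak gain of x(k+1) = A x(k):
#     gamma = sup { max_k ||x(k)|| : ||x(0)|| <= 1 }.
# The paper certifies such peak bounds with LMIs; this sampling estimate is
# only meant to make the quantity concrete.
import numpy as np

A = np.array([[0.9, 0.5],
              [0.0, 0.8]])     # example Schur-stable matrix

def peak_gain_estimate(A, horizon=200, samples=500, seed=1):
    rng = np.random.default_rng(seed)
    peak = 0.0
    for _ in range(samples):
        x = rng.standard_normal(A.shape[0])
        x /= np.linalg.norm(x)          # start on the unit sphere
        for _ in range(horizon):
            peak = max(peak, np.linalg.norm(x))
            x = A @ x
    return peak

print(peak_gain_estimate(A))  # >= 1, since trajectories start at norm 1
```

Scaling state constraints by such a peak gain is what lets a policy guarantee constraint satisfaction for every admissible initial condition rather than a nominal one.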

    Neural network training under semidefinite constraints

    This paper is concerned with the training of neural networks (NNs) under semidefinite constraints, which allows for NN training with robustness and stability guarantees. In particular, we focus on Lipschitz bounds for NNs. Exploiting the banded structure of the underlying matrix constraint, we set up an efficient and scalable training scheme for NN training problems of this kind based on interior point methods. Our implementation makes it possible to enforce Lipschitz constraints in the training of large-scale deep NNs such as Wasserstein generative adversarial networks (WGANs) via semidefinite constraints. In numerical examples, we show the superiority of our method and its applicability to WGAN training. Comment: To be published in the 61st IEEE Conference on Decision and Control.
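A common, simpler baseline for Lipschitz-constrained training is to project each weight matrix back onto the spectral-norm ball after every gradient step. The sketch below shows that projection; it is not the paper's interior-point scheme for semidefinite constraints, only the kind of per-layer baseline such schemes compete with.

```python
# Projection of a weight matrix onto {W : ||W||_2 <= L} by clipping its
# singular values at L. Applying this after each gradient step enforces a
# per-layer Lipschitz bound; SDP-based training enforces sharper,
# network-wide certificates instead.
import numpy as np

def project_spectral_norm(W, L=1.0):
    """Return the nearest matrix to W (in Frobenius norm) with ||W||_2 <= L."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.minimum(s, L)) @ Vt

rng = np.random.default_rng(0)
W = 5.0 * rng.standard_normal((10, 10))      # weights violating the bound
W_proj = project_spectral_norm(W, L=1.0)

print(np.linalg.svd(W_proj, compute_uv=False)[0])  # largest singular value <= 1
```

Because each layer's bound is enforced in isolation, the implied network Lipschitz constant is the (often loose) product of the per-layer bounds, which is one motivation for the coupled semidefinite constraints above.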

    Convolutional Neural Networks as 2-D systems

    This paper introduces a novel representation of convolutional neural networks (CNNs) in terms of 2-D dynamical systems. To this end, the usual description of convolutional layers with convolution kernels, i.e., the impulse responses of linear filters, is realized in state space as a linear time-invariant 2-D system. The overall CNN, composed of convolutional layers and nonlinear activation functions, is then viewed as a 2-D version of a Lur'e system, i.e., a linear dynamical system interconnected with static nonlinear components. One benefit of this 2-D Lur'e system perspective on CNNs is that we can use robust control theory much more efficiently for Lipschitz constant estimation than previously possible.
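The key construction, realizing a convolution kernel as the impulse response of a state-space system, is easiest to see in one dimension. The sketch below builds a delay-line realization of a single-channel FIR kernel and checks that its impulse response reproduces the kernel; this is only the 1-D analogue of the paper's 2-D construction, and the kernel values are arbitrary.

```python
# A 1-D convolution y(k) = sum_i h[i] u(k-i) is an FIR filter, which admits a
# state-space realization x(k+1) = A x(k) + B u(k), y(k) = C x(k) + D u(k)
# with the state acting as a delay line of past inputs. The impulse response
# of this realization is exactly the kernel h.
import numpy as np

h = np.array([0.5, -1.0, 2.0, 0.25])    # example FIR kernel
n = len(h) - 1                           # state dimension = kernel length - 1

A = np.diag(np.ones(n - 1), k=-1)        # shift register (subdiagonal ones)
B = np.zeros((n, 1)); B[0, 0] = 1.0      # newest input enters the first state
C = h[1:].reshape(1, n)                  # past inputs weighted by h[1:]
D = np.array([[h[0]]])                   # direct feedthrough h[0]

# Impulse response: feed u = 1, 0, 0, ... and record the outputs.
x = np.zeros((n, 1))
y = []
for k in range(len(h)):
    u = np.array([[1.0 if k == 0 else 0.0]])
    y.append(float(C @ x + D @ u))
    x = A @ x + B @ u

print(y)  # [0.5, -1.0, 2.0, 0.25], i.e. the kernel h
```

For 2-D convolutions, the state propagates along two index directions (a Roesser- or Fornasini-Marchesini-type model), but the idea is the same: once the layer is a dynamical system, dissipativity and robust-control tools apply directly.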