
    A review of convex approaches for control, observation and safety of linear parameter varying and Takagi-Sugeno systems

    This paper provides a review of the concept of convex systems based on Takagi-Sugeno, linear parameter varying (LPV) and quasi-LPV modeling. These paradigms are capable of hiding the nonlinearities by means of an equivalent description which uses a set of linear models interpolated by appropriately defined weighting functions. Convex systems have become very popular since they allow extended linear techniques based on linear matrix inequalities (LMIs) to be applied to complex nonlinear systems. This survey aims at providing the reader with a significant overview of the existing LMI-based techniques for convex systems in the fields of control, observation and safety. Firstly, a detailed review of stability, feedback, tracking and model predictive control (MPC) convex controllers is presented. Secondly, the problem of state estimation is addressed through the design of proportional, proportional-integral, unknown input and descriptor observers. Finally, safety of convex systems is discussed by describing popular techniques for fault diagnosis and fault tolerant control (FTC).
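
    As a minimal illustration of the LMI machinery surveyed here (a sketch under assumed data, not code from the paper): quadratic stability of a two-rule Takagi-Sugeno model holds if a single matrix P > 0 satisfies Ai' P + P Ai < 0 for every local model Ai. The vertex matrices A1 and A2 below are hypothetical, and the cvxpy package is assumed as the LMI front-end.

```python
# Sketch only: quadratic-stability LMI test for a two-rule Takagi-Sugeno model.
# A1, A2 are hypothetical local models; cvxpy (with its bundled SCS solver) is
# assumed. Feasibility of P > 0 with Ai^T P + P Ai < 0 for all i certifies
# stability of every convex interpolation of the local models.
import numpy as np
import cvxpy as cp

A1 = np.array([[-2.0, 1.0], [0.0, -1.0]])   # hypothetical local model 1
A2 = np.array([[-1.5, 0.5], [0.2, -2.0]])   # hypothetical local model 2
n = A1.shape[0]
eps = 1e-6                                  # small margin to emulate strict LMIs

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n)]
constraints += [A.T @ P + P @ A << -eps * np.eye(n) for A in (A1, A2)]

problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()
print("common Lyapunov matrix found:", problem.status == cp.OPTIMAL)
```

    The feedback, observer and fault-tolerant designs reviewed in the paper follow the same feasibility pattern, with additional decision variables for the gains.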

    Identifying and attacking the saddle point problem in high-dimensional non-convex optimization

    A central challenge to many fields of science and engineering involves minimizing non-convex error functions over continuous, high-dimensional spaces. Gradient descent or quasi-Newton methods are almost ubiquitously used to perform such minimizations, and it is often thought that a main source of difficulty for these local methods to find the global minimum is the proliferation of local minima with much higher error than the global minimum. Here we argue, based on results from statistical physics, random matrix theory, neural network theory, and empirical evidence, that a deeper and more profound difficulty originates from the proliferation of saddle points, not local minima, especially in high-dimensional problems of practical interest. Such saddle points are surrounded by high-error plateaus that can dramatically slow down learning and give the illusory impression of the existence of a local minimum. Motivated by these arguments, we propose a new approach to second-order optimization, the saddle-free Newton method, that can rapidly escape high-dimensional saddle points, unlike gradient descent and quasi-Newton methods. We apply this algorithm to deep or recurrent neural network training, and provide numerical evidence for its superior optimization performance. Comment: the theoretical review and analysis in this article draw heavily from arXiv:1405.4604 [cs.LG].
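
    To make the core mechanism concrete (a minimal sketch of the idea, not the authors' implementation): the saddle-free Newton step rescales the gradient by |H|, the Hessian with its eigenvalues replaced by their absolute values, so that directions of negative curvature repel rather than attract the iterate.

```python
# Sketch of the saddle-free Newton step, -(|H| + damping*I)^{-1} g, using an
# exact eigendecomposition (practical large-scale variants approximate |H|).
import numpy as np

def saddle_free_newton_step(grad, hess, damping=1e-3):
    eigvals, eigvecs = np.linalg.eigh(hess)            # H = V diag(l) V^T
    abs_hess = eigvecs @ np.diag(np.abs(eigvals)) @ eigvecs.T
    return -np.linalg.solve(abs_hess + damping * np.eye(len(grad)), grad)

# f(x, y) = x^2 - y^2 has a saddle at the origin. Near (0.1, 0.1) the plain
# Newton step -H^{-1} g points straight into the saddle, while the saddle-free
# step escapes it along the negative-curvature direction y.
grad = np.array([0.2, -0.2])                           # gradient of f at (0.1, 0.1)
hess = np.array([[2.0, 0.0], [0.0, -2.0]])             # Hessian of f
print(saddle_free_newton_step(grad, hess))             # approx. [-0.1, +0.1]
```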

    Hermite-Hadamard type inequalities for composite log-convex functions

    Hermite-Hadamard type inequalities related to convex functions are widely studied in functional analysis. Researchers have refined convex functions into quasi-convex, h-convex, log-convex, m-convex, (α,m)-convex and many more classes, and Hermite-Hadamard type inequalities have subsequently been obtained for these refined convex functions. In this paper, we first review the Hermite-Hadamard type inequality for both convex functions and log-convex functions. Then, the definition of composite convex functions and the Hermite-Hadamard type inequalities for composite convex functions are also reviewed. Motivated by these works, we refine these notions to obtain the definition of composite log-convex functions, namely composite-ϕ⁻¹ log-convex functions. Some examples related to this definition, such as GG-convexity and HG-convexity, are given. We also define k-composite log-convexity and k-composite-ϕ⁻¹ log-convexity. We then prove a lemma and obtain some Hermite-Hadamard type inequalities for composite log-convex functions. Two corollaries are also proved using the theorem obtained: the first by applying the exponential function and the second by applying the properties of k-composite log-convexity. Also, an application to GG-convex functions is given, in which we compare the inequalities obtained in this paper with those obtained in previous studies. The inequalities can be applied in calculating geometric means in statistics and other fields.
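
    For reference, the two inequalities this abstract builds on are standard (quoted here in their textbook form, not as stated in the paper): the classical Hermite-Hadamard inequality for a convex f on [a, b], and the version obtained by applying it to ln f when f is positive and log-convex.

```latex
% Classical Hermite-Hadamard inequality for a convex f on [a, b]:
f\!\left(\tfrac{a+b}{2}\right) \;\le\; \frac{1}{b-a}\int_a^b f(x)\,dx \;\le\; \frac{f(a)+f(b)}{2}.

% Applying it to \ln f for a positive log-convex f gives
f\!\left(\tfrac{a+b}{2}\right) \;\le\; \exp\!\left(\frac{1}{b-a}\int_a^b \ln f(x)\,dx\right) \;\le\; \sqrt{f(a)\,f(b)}.
```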

    Quasi-analyticity and determinacy of the full moment problem from finite to infinite dimensions

    This paper aims to show the essential role played by the theory of quasi-analytic functions in the study of the determinacy of the moment problem on finite- and infinite-dimensional spaces. In particular, the quasi-analytic criterion for self-adjointness of operators and their commutativity is crucial to establish whether or not a measure is uniquely determined by its moments. Our main goal is to point out that this is a common feature of the determinacy question in both the finite- and the infinite-dimensional moment problem, by reviewing some of the best-known determinacy results from this perspective. We also collect some properties of independent interest concerning the characterization of quasi-analytic classes associated to log-convex sequences. Comment: 28 pages; Stochastic and Infinite Dimensional Analysis, Chapter 9, Trends in Mathematics, Birkhäuser Basel, 201
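
    Two classical statements underpin the link described above (recalled here for orientation, in their standard one-dimensional forms rather than as stated in the chapter): the Denjoy-Carleman characterization of quasi-analytic classes built from log-convex sequences, and Carleman's sufficient condition for determinacy of the Hamburger moment problem.

```latex
% Denjoy-Carleman: for a log-convex sequence (M_n), the class C\{M_n\} is
% quasi-analytic if and only if
\sum_{n \ge 1} M_n^{-1/n} = \infty .

% Carleman's condition: a Hamburger moment sequence (m_n) is determinate whenever
\sum_{n \ge 1} m_{2n}^{-1/(2n)} = \infty .
```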