670 research outputs found

    Conic Optimization Theory: Convexification Techniques and Numerical Algorithms

    Full text link
    Optimization is at the core of control theory and appears in several areas of this field, such as optimal control, distributed control, system identification, robust control, state estimation, model predictive control and dynamic programming. The recent advances in various topics of modern optimization have also been revamping the area of machine learning. Motivated by the crucial role of optimization theory in the design, analysis, control and operation of real-world systems, this tutorial paper offers a detailed overview of some major advances in this area, namely conic optimization and its emerging applications. First, we discuss the importance of conic optimization in different areas. Then, we explain seminal results on the design of hierarchies of convex relaxations for a wide range of nonconvex problems. Finally, we study different numerical algorithms for large-scale conic optimization problems. Comment: 18 pages
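
    As an illustration of the conic relaxation idea mentioned in this abstract (not code from the paper), the sketch below forms the Shor semidefinite relaxation of a small nonconvex quadratic problem, assuming CVXPY with its bundled conic solver is available; the cost matrix Q and the problem size are made up for the example.

    import cvxpy as cp
    import numpy as np

    # Nonconvex QCQP: minimize x^T Q x subject to x_i^2 = 1 for every i.
    # Conic (Shor) relaxation: lift X = x x^T, keep the convex constraints
    # X PSD and diag(X) = 1, and drop the nonconvex rank-one condition.
    n = 4
    rng = np.random.default_rng(0)
    Q = rng.standard_normal((n, n))
    Q = (Q + Q.T) / 2                      # symmetric cost matrix

    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0, cp.diag(X) == 1]
    problem = cp.Problem(cp.Minimize(cp.trace(Q @ X)), constraints)
    problem.solve()                        # dispatched to a conic (SDP) solver

    print("SDP lower bound on the nonconvex optimum:", problem.value)

    The optimal value of the relaxation lower-bounds the nonconvex optimum; adding higher-order moment constraints tightens the bound, in the spirit of the hierarchies of convex relaxations the abstract refers to.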

    Piecewise Linear Control Systems

    Get PDF
    This thesis treats analysis and design of piecewise linear control systems. Piecewise linear systems capture many of the most common nonlinearities in engineering systems, and they can also be used for approximation of other nonlinear systems. Several aspects of linear systems with quadratic constraints are generalized to piecewise linear systems with piecewise quadratic constraints. It is shown how uncertainty models for linear systems can be extended to piecewise linear systems, and how these extensions give insight into the classical trade-offs between fidelity and complexity of a model. Stability of piecewise linear systems is investigated using piecewise quadratic Lyapunov functions. Piecewise quadratic Lyapunov functions are much more powerful than the commonly used quadratic Lyapunov functions. It is shown how piecewise quadratic Lyapunov functions can be computed via convex optimization in terms of linear matrix inequalities. The computations are based on a compact parameterization of continuous piecewise quadratic functions and conditional analysis using the S-procedure. A unifying framework for computation of a variety of Lyapunov functions via convex optimization is established based on this parameterization. Systems with attractive sliding modes and systems with bounded regions of attraction are also treated. Dissipativity analysis and optimal control problems with piecewise quadratic cost functions are solved via convex optimization. The basic results are extended to fuzzy systems, hybrid systems and smooth nonlinear systems. It is shown how Lyapunov functions with a discontinuous dependence on the discrete state can be computed via convex optimization. An automated procedure for increasing the flexibility of the Lyapunov function candidate is suggested based on linear programming duality. A Matlab toolbox that implements several of the results derived in the thesis is presented.
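
    To make the "Lyapunov functions via linear matrix inequalities" step concrete, here is a minimal sketch of the single-region (purely quadratic) special case, not the piecewise quadratic machinery of the thesis or its Matlab toolbox; the system matrix A and the margin eps are invented for illustration, and CVXPY is assumed as the LMI solver front end.

    import cvxpy as cp
    import numpy as np

    # Quadratic Lyapunov certificate for one linear mode dx/dt = A x:
    # find P > 0 such that A^T P + P A < 0, posed as an LMI feasibility problem.
    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])

    P = cp.Variable((2, 2), symmetric=True)
    eps = 1e-6
    constraints = [P >> eps * np.eye(2),
                   A.T @ P + P @ A << -eps * np.eye(2)]
    cp.Problem(cp.Minimize(cp.trace(P)), constraints).solve()

    print("Lyapunov matrix P =\n", P.value)   # V(x) = x^T P x decreases along trajectories

    The piecewise quadratic case described in the abstract adds one such matrix per region, continuity constraints across region boundaries, and S-procedure terms that restrict each inequality to its own region.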

    Basic Understanding of Condensed Phases of Matter via Packing Models

    Full text link
    Packing problems have been a source of fascination for millennia and their study has produced a rich literature that spans numerous disciplines. Investigations of hard-particle packing models have provided basic insights into the structure and bulk properties of condensed phases of matter, including low-temperature states (e.g., molecular and colloidal liquids, crystals and glasses), multiphase heterogeneous media, granular media, and biological systems. The densest packings are of great interest in pure mathematics, including discrete geometry and number theory. This perspective reviews pertinent theoretical and computational literature concerning the equilibrium, metastable and nonequilibrium packings of hard particles in various Euclidean space dimensions. In the case of jammed packings, emphasis will be placed on the "geometric-structure" approach, which provides a powerful and unified means to quantitatively characterize individual packings via jamming categories and "order" maps. It incorporates extremal jammed states, including the densest packings, maximally random jammed states, and lowest-density jammed structures. Packings of identical spheres, spheres with a size distribution, and nonspherical particles are also surveyed. We close this review by identifying challenges and open questions for future research. Comment: 33 pages, 20 figures, Invited "Perspective" submitted to the Journal of Chemical Physics. arXiv admin note: text overlap with arXiv:1008.298
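
    As a toy companion to this abstract (not an algorithm from the review), the sketch below runs random sequential addition of equal hard disks in a unit square and reports the resulting packing fraction; the disk radius and the number of attempts are arbitrary choices.

    import numpy as np

    # Random sequential addition (RSA): try random centers and keep a disk only
    # if it does not overlap any disk accepted so far (the hard-particle constraint).
    rng = np.random.default_rng(1)
    radius, attempts = 0.03, 20000
    centers = []

    for _ in range(attempts):
        c = rng.uniform(radius, 1.0 - radius, size=2)             # stay inside the box
        if all(np.linalg.norm(c - p) >= 2.0 * radius for p in centers):
            centers.append(c)                                     # accepted: no overlap

    phi = len(centers) * np.pi * radius ** 2                      # fraction of area covered
    print(f"placed {len(centers)} disks, packing fraction ~ {phi:.3f}")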

    Set-Membership Proportionate Affine Projection Algorithms

    Get PDF
    Proportionate adaptive filters can improve the convergence speed for the identification of sparse systems as compared to their conventional counterparts. In this paper, the idea of proportionate adaptation is combined with the framework of set-membership filtering (SMF) in an attempt to derive novel computationally efficient algorithms. The resulting algorithms attain faster convergence for both sparse and dispersive channels while decreasing the average computational complexity thanks to the data-discerning feature of the SMF approach. In addition, we propose a rule that allows us to automatically adjust the number of past data pairs employed in the update. This leads to a set-membership proportionate affine projection algorithm (SM-PAPA) having a variable data-reuse factor, allowing a significant reduction in the overall complexity when compared with a fixed data-reuse factor. Reduced-complexity implementations of the proposed algorithms are also considered, which reduce the dimensions of the matrix inversions involved in the update. Simulations show good results in terms of reduced number of updates, speed of convergence, and final mean-squared error.
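
    The sketch below shows the data-discerning update idea in its simplest form: a set-membership NLMS filter with a data-reuse factor of one, i.e. a stripped-down relative of the SM-PAPA described above rather than the proposed algorithm itself; the function name, the error bound gamma and the sparse test channel are all invented for the example.

    import numpy as np

    def sm_nlms(x, d, num_taps=8, gamma=0.02, eps=1e-8):
        """Set-membership NLMS: adapt only when the a priori error exceeds gamma."""
        w = np.zeros(num_taps)
        updates = 0
        for n in range(num_taps - 1, len(x)):
            u = x[n - num_taps + 1:n + 1][::-1]      # regressor [x(n), x(n-1), ...]
            e = d[n] - w @ u                         # a priori error
            if abs(e) > gamma:                       # outside the constraint set: update
                mu = 1.0 - gamma / abs(e)            # step size that lands on its boundary
                w += mu * e * u / (u @ u + eps)
                updates += 1
        return w, updates

    # Identify a sparse FIR channel from noisy observations.
    rng = np.random.default_rng(2)
    h = np.zeros(8); h[1], h[5] = 0.9, -0.4
    x = rng.standard_normal(5000)
    d = np.convolve(x, h)[:len(x)] + 0.005 * rng.standard_normal(len(x))
    w, updates = sm_nlms(x, d)
    print(f"updates: {updates} of {len(x)} samples, weight error: {np.linalg.norm(w - h):.3e}")

    The proportionate and affine-projection ingredients of SM-PAPA would, respectively, weight each coefficient's step individually and reuse several past data pairs per update.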

    Parametric uncertainty in system identification

    Get PDF

    Predictability, complexity and learning

    Full text link
    We define {\em predictive information} $I_{\rm pred}(T)$ as the mutual information between the past and the future of a time series. Three qualitatively different behaviors are found in the limit of large observation times $T$: $I_{\rm pred}(T)$ can remain finite, grow logarithmically, or grow as a fractional power law. If the time series allows us to learn a model with a finite number of parameters, then $I_{\rm pred}(T)$ grows logarithmically with a coefficient that counts the dimensionality of the model space. In contrast, power-law growth is associated, for example, with the learning of infinite parameter (or nonparametric) models such as continuous functions with smoothness constraints. There are connections between the predictive information and measures of complexity that have been defined both in learning theory and in the analysis of physical systems through statistical mechanics and dynamical systems theory. Further, in the same way that entropy provides the unique measure of available information consistent with some simple and plausible conditions, we argue that the divergent part of $I_{\rm pred}(T)$ provides the unique measure for the complexity of dynamics underlying a time series. Finally, we discuss how these ideas may be useful in different problems in physics, statistics, and biology. Comment: 53 pages, 3 figures, 98 references, LaTeX2
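
    For reference, one standard way to express the quantity defined in this abstract for a stationary process uses block entropies; the decomposition below into an extensive and a subextensive part is an assumption made here for illustration, not text quoted from the paper.

    \[
      I_{\rm pred}(T) \;=\; \lim_{T'\to\infty}\bigl[\,S(T) + S(T') - S(T+T')\,\bigr],
      \qquad S(T) \;=\; S_0\,T + S_1(T),
    \]

    With $S(T)$ the entropy of a window of duration $T$, the limit leaves $I_{\rm pred}(T) = S_1(T)$ whenever the subextensive part satisfies $S_1(T+T') - S_1(T') \to 0$, and the three regimes in the abstract then correspond to $S_1(T)$ staying finite, growing as $\log T$, or growing as a fractional power of $T$.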

    Renewing U.S. mathematics: A plan for the 1990s

    Full text link

    Bibliographie

    Get PDF