657 research outputs found

    Conic Optimization Theory: Convexification Techniques and Numerical Algorithms

    Full text link
    Optimization is at the core of control theory and appears in several areas of this field, such as optimal control, distributed control, system identification, robust control, state estimation, model predictive control and dynamic programming. The recent advances in various topics of modern optimization have also been revamping the area of machine learning. Motivated by the crucial role of optimization theory in the design, analysis, control and operation of real-world systems, this tutorial paper offers a detailed overview of some major advances in this area, namely conic optimization and its emerging applications. First, we discuss the importance of conic optimization in different areas. Then, we explain seminal results on the design of hierarchies of convex relaxations for a wide range of nonconvex problems. Finally, we study different numerical algorithms for large-scale conic optimization problems. Comment: 18 pages
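    As a point of reference for the relaxation hierarchies this tutorial surveys (not an example taken from the paper itself), the sketch below applies the simplest such relaxation, Shor's semidefinite relaxation, to a toy nonconvex binary quadratic program; the problem data and the use of cvxpy are assumptions made purely for illustration.

```python
# Minimal sketch (not from the paper): Shor's semidefinite relaxation of a
# small nonconvex quadratic problem, minimize x' Q x subject to x_i^2 = 1.
import numpy as np
import cvxpy as cp

n = 4
rng = np.random.default_rng(0)
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2                        # symmetric objective matrix

# Lift x x' to a PSD matrix variable X and drop the rank-one constraint.
X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0, cp.diag(X) == 1]  # X PSD; diagonal fixed by x_i^2 = 1
prob = cp.Problem(cp.Minimize(cp.trace(Q @ X)), constraints)
prob.solve()

print("SDP relaxation lower bound:", prob.value)
```

    Lifting the rank-one matrix to a positive semidefinite variable and discarding the rank constraint is the prototypical first level of the convex-relaxation hierarchies the abstract refers to.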

    Simulation Needs Efficient Algorithms

    Get PDF

    Summary Conclusions: Computation of Minimum Volume Covering Ellipsoids*

    Get PDF
    We present a practical algorithm for computing the minimum volume n-dimensional ellipsoid that must contain m given points a_1, ..., a_m ∈ R^n. This convex constrained problem arises in a variety of applied computational settings, particularly in data mining and robust statistics. Its structure makes it particularly amenable to solution by interior-point methods, and it has been the subject of much theoretical complexity analysis. Here we focus on computation. We present a combined interior-point and active-set method for solving this problem. Our computational results demonstrate that our method solves very large problem instances (m = 30,000 and n = 30) to a high degree of accuracy in under 30 seconds on a personal computer. Singapore-MIT Alliance (SMA)

    Computation of Minimum Volume Covering Ellipsoids

    Get PDF
    We present a practical algorithm for computing the minimum volume n-dimensional ellipsoid that must contain m given points a_1, ..., a_m ∈ R^n. This convex constrained problem arises in a variety of applied computational settings, particularly in data mining and robust statistics. Its structure makes it particularly amenable to solution by interior-point methods, and it has been the subject of much theoretical complexity analysis. Here we focus on computation. We present a combined interior-point and active-set method for solving this problem. Our computational results demonstrate that our method solves very large problem instances (m = 30,000 and n = 30) to a high degree of accuracy in under 30 seconds on a personal computer.
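    The paper's combined interior-point and active-set method is not reproduced here; instead, the sketch below illustrates the underlying problem with Khachiyan's classical first-order algorithm for the minimum volume enclosing ellipsoid. The tolerance, iteration cap, and test data are arbitrary choices for the illustration.

```python
# Minimal sketch (Khachiyan's algorithm, not the paper's interior-point/
# active-set method): minimum volume enclosing ellipsoid
# {x : (x - c)^T E (x - c) <= 1} of points a_1, ..., a_m in R^n.
import numpy as np

def min_volume_ellipsoid(points, tol=1e-7, max_iter=1000):
    m, n = points.shape
    Q = np.vstack([points.T, np.ones(m)])        # lift points to R^{n+1}
    u = np.full(m, 1.0 / m)                      # weights on the points
    for _ in range(max_iter):
        X = Q @ np.diag(u) @ Q.T
        M = np.einsum("ij,jk,ki->i", Q.T, np.linalg.inv(X), Q)
        j = np.argmax(M)
        if M[j] - (n + 1) <= tol:                # optimality gap small enough
            break
        step = (M[j] - n - 1) / ((n + 1) * (M[j] - 1))
        u *= 1 - step
        u[j] += step
    c = points.T @ u                             # ellipsoid center
    E = np.linalg.inv(points.T @ np.diag(u) @ points - np.outer(c, c)) / n
    return c, E

# Tiny usage example on random points.
pts = np.random.default_rng(1).standard_normal((200, 3))
c, E = min_volume_ellipsoid(pts)
print("center:", c)
```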

    Algebraic combinatorial optimization on the degree of determinants of noncommutative symbolic matrices

    Full text link
    We address the computation of the degrees of minors of a noncommutative symbolic matrix of the form $A[c] := \sum_{k=1}^m A_k t^{c_k} x_k$, where $A_k$ are matrices over a field $\mathbb{K}$, $x_i$ are noncommutative variables, $c_k$ are integer weights, and $t$ is a commuting variable specifying the degree. This problem extends the noncommutative Edmonds' problem (Ivanyos et al. 2017), and can formulate various combinatorial optimization problems. Extending the study by Hirai 2018, and Hirai, Ikeda 2022, we provide novel duality theorems and a polyhedral characterization for the maximum degrees of minors of $A[c]$ of all sizes, and develop a strongly polynomial-time algorithm for computing them. This algorithm is viewed as a unified algebraization of the classical Hungarian method for bipartite matching and the weight-splitting algorithm for linear matroid intersection. As applications, we provide polynomial-time algorithms for weighted fractional linear matroid matching and linear optimization over rank-2 Brascamp-Lieb polytopes.
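    To make the connection to the Hungarian method concrete, the following sketch checks, on a tiny commutative example (not the paper's noncommutative setting or algorithm), that the degree in $t$ of $\det A(t)$ with entries $x_{ij} t^{c_{ij}}$ matches the maximum weight of a perfect matching; the weight matrix and the use of sympy are assumptions for illustration only.

```python
# Minimal sketch (commutative analogue only): the degree in t of det(A(t))
# with A_ij = x_ij * t^{c_ij} equals the maximum weight of a perfect matching,
# verified here by brute force on a complete 3x3 bipartite graph.
import itertools
import sympy as sp

weights = [[3, 1, 0],
           [2, 4, 1],
           [0, 2, 5]]                      # integer edge weights c_ij
n = len(weights)
t = sp.symbols("t")
x = sp.symbols(f"x0:{n * n}")              # independent coefficients x_ij

A = sp.Matrix(n, n, lambda i, j: x[i * n + j] * t ** weights[i][j])
deg_det = sp.Poly(A.det(), t).degree()     # degree of det A(t) in t

best_matching = max(sum(weights[i][p[i]] for i in range(n))
                    for p in itertools.permutations(range(n)))

print(deg_det, best_matching)              # both are 12 here (3 + 4 + 5)
```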

    Budget Feasible Mechanisms for Experimental Design

    Full text link
    In the classical experimental design setting, an experimenter E has access to a population of $n$ potential experiment subjects $i \in \{1, \ldots, n\}$, each associated with a vector of features $x_i \in R^d$. Conducting an experiment with subject $i$ reveals an unknown value $y_i \in R$ to E. E typically assumes some hypothetical relationship between the $x_i$'s and $y_i$'s, e.g., $y_i \approx \beta x_i$, and estimates $\beta$ from experiments, e.g., through linear regression. As a proxy for various practical constraints, E may select only a subset of subjects on which to conduct the experiment. We initiate the study of budgeted mechanisms for experimental design. In this setting, E has a budget $B$. Each subject $i$ declares an associated cost $c_i > 0$ to be part of the experiment, and must be paid at least her cost. In particular, the Experimental Design Problem (EDP) is to find a set $S$ of subjects for the experiment that maximizes $V(S) = \log\det(I_d + \sum_{i\in S} x_i x_i^\top)$ under the constraint $\sum_{i\in S} c_i \leq B$; our objective function corresponds to the information gain in parameter $\beta$ that is learned through linear regression methods, and is related to the so-called $D$-optimality criterion. Further, the subjects are strategic and may lie about their costs. We present a deterministic, polynomial-time, budget-feasible mechanism scheme that is approximately truthful and yields a constant-factor approximation to EDP. In particular, for any small $\delta > 0$ and $\epsilon > 0$, we can construct a $(12.98, \epsilon)$-approximate mechanism that is $\delta$-truthful and runs in polynomial time in both $n$ and $\log\log\frac{B}{\epsilon\delta}$. We also establish that no truthful, budget-feasible algorithm is possible within a factor-2 approximation, and show how to generalize our approach to a wide class of learning problems, beyond linear regression.
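    The objective $V(S)$ is straightforward to evaluate in code. The sketch below computes it and pairs it with a naive cost-scaled greedy heuristic, purely to illustrate the budgeted selection problem; it is not the paper's $(12.98, \epsilon)$-approximate mechanism, and the data, budget, and greedy rule are illustrative assumptions.

```python
# Minimal sketch (not the paper's mechanism): the D-optimality-style objective
# V(S) = log det(I_d + sum_{i in S} x_i x_i^T), plus a naive greedy selection
# under the budget constraint sum_{i in S} c_i <= B.
import numpy as np

def information_gain(X, S):
    """V(S) = log det(I_d + sum_{i in S} x_i x_i^T)."""
    d = X.shape[1]
    M = np.eye(d) + X[list(S)].T @ X[list(S)]
    return np.linalg.slogdet(M)[1]

def greedy_edp(X, costs, budget):
    """Repeatedly add the affordable subject with the best gain per unit cost."""
    S, spent = set(), 0.0
    while True:
        best, best_ratio = None, 0.0
        for i in range(len(costs)):
            if i in S or spent + costs[i] > budget:
                continue
            gain = information_gain(X, S | {i}) - information_gain(X, S)
            if gain / costs[i] > best_ratio:
                best, best_ratio = i, gain / costs[i]
        if best is None:
            return S
        S.add(best)
        spent += costs[best]

rng = np.random.default_rng(2)
X = rng.standard_normal((20, 3))            # feature vectors x_i in R^3
costs = rng.uniform(1.0, 3.0, size=20)      # declared costs c_i
S = greedy_edp(X, costs, budget=6.0)
print("selected subjects:", sorted(S), "V(S) =", information_gain(X, S))
```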