
    Conic Optimization Theory: Convexification Techniques and Numerical Algorithms

    Optimization is at the core of control theory and appears in several areas of this field, such as optimal control, distributed control, system identification, robust control, state estimation, model predictive control and dynamic programming. The recent advances in various topics of modern optimization have also been revamping the area of machine learning. Motivated by the crucial role of optimization theory in the design, analysis, control and operation of real-world systems, this tutorial paper offers a detailed overview of some major advances in this area, namely conic optimization and its emerging applications. First, we discuss the importance of conic optimization in different areas. Then, we explain seminal results on the design of hierarchies of convex relaxations for a wide range of nonconvex problems. Finally, we study different numerical algorithms for large-scale conic optimization problems. (Comment: 18 pages)
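
    The abstract does not commit to a particular formulation, so a small sketch may help fix ideas: the code below (a minimal illustration, assuming numpy and cvxpy are installed, with arbitrary problem data) builds the Shor semidefinite relaxation of a nonconvex binary quadratic program, one of the simplest instances of the conic relaxations surveyed in the paper.

        import numpy as np
        import cvxpy as cp

        # Nonconvex problem: minimize x^T Q x subject to x_i in {-1, +1}.
        rng = np.random.default_rng(0)
        n = 4
        Q = rng.standard_normal((n, n))
        Q = (Q + Q.T) / 2                      # symmetric cost matrix

        # Shor relaxation: lift x x^T to a PSD matrix X with unit diagonal.
        X = cp.Variable((n, n), symmetric=True)
        constraints = [X >> 0] + [X[i, i] == 1 for i in range(n)]
        prob = cp.Problem(cp.Minimize(cp.trace(Q @ X)), constraints)
        prob.solve()

        print("lower bound from the conic (SDP) relaxation:", prob.value)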

    Nonlinear Integer Programming

    Research efforts of the past fifty years have led to a development of linear integer programming as a mature discipline of mathematical optimization. Such a level of maturity has not been reached when one considers nonlinear systems subject to integrality requirements for the variables. This chapter is dedicated to this topic. The primary goal is a study of a simple version of general nonlinear integer problems, where all constraints are still linear. Our focus is on the computational complexity of the problem, which varies significantly with the type of nonlinear objective function in combination with the underlying combinatorial structure. Numerous boundary cases of complexity emerge, which sometimes surprisingly lead even to polynomial time algorithms. We also cover recent successful approaches for more general classes of problems. Though no positive theoretical efficiency results are available, nor are they likely to ever be available, these seem to be the currently most successful and interesting approaches for solving practical problems. It is our belief that the study of algorithms motivated by theoretical considerations and those motivated by our desire to solve practical instances should and do inform one another. So it is with this viewpoint that we present the subject, and it is in this direction that we hope to spark further research. (Comment: 57 pages. To appear in: M. Jünger, T. Liebling, D. Naddef, G. Nemhauser, W. Pulleyblank, G. Reinelt, G. Rinaldi, and L. Wolsey (eds.), 50 Years of Integer Programming 1958–2008: The Early Years and State-of-the-Art Surveys, Springer-Verlag, 2009, ISBN 354068274)
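
    As a toy illustration of the problem class (a nonlinear objective over the integer points of a polyhedron), the sketch below simply enumerates the feasible lattice points of a small, bounded instance; the data are invented and brute-force enumeration is used only to make the setting concrete, not as one of the chapter's algorithms.

        import itertools
        import numpy as np

        # Toy nonlinear integer program: maximize a nonconvex quadratic objective
        # over the integer points of the polytope {x : A x <= b, 0 <= x_i <= 5}.
        A = np.array([[1, 2], [3, 1]])
        b = np.array([8, 9])

        def objective(x):
            # Indefinite quadratic objective (nonlinear in the integer variables).
            return x[0] ** 2 - 3 * x[0] * x[1] + 2 * x[1] ** 2

        best_val, best_x = None, None
        for cand in itertools.product(range(6), repeat=2):
            x = np.array(cand)
            if np.all(A @ x <= b):
                val = objective(x)
                if best_val is None or val > best_val:
                    best_val, best_x = val, x

        print("best objective value:", best_val, "attained at x =", best_x)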

    Algorithms and Hardness for Robust Subspace Recovery

    We consider a fundamental problem in unsupervised learning called subspace recovery: given a collection of $m$ points in $\mathbb{R}^n$, if many but not necessarily all of these points are contained in a $d$-dimensional subspace $T$, can we find it? The points contained in $T$ are called inliers and the remaining points are outliers. This problem has received considerable attention in computer science and in statistics. Yet efficient algorithms from computer science are not robust to adversarial outliers, and the estimators from robust statistics are hard to compute in high dimensions. Are there algorithms for subspace recovery that are both robust to outliers and efficient? We give an algorithm that finds $T$ when it contains more than a $d/n$ fraction of the points. Hence, for, say, $d = n/2$, this estimator is both easy to compute and well-behaved when there is a constant fraction of outliers. We prove that it is Small Set Expansion hard to find $T$ when the fraction of errors is any larger, thus giving evidence that our estimator is an optimal compromise between efficiency and robustness. As it turns out, this basic problem has a surprising number of connections to other areas including small set expansion, matroid theory and functional analysis that we make use of here. (Comment: Appeared in Proceedings of COLT 201)
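
    The paper's own estimator is not spelled out in the abstract; purely to make the inlier/outlier setting concrete, the sketch below builds an assumed synthetic instance and recovers the subspace with a generic RANSAC-style heuristic (not the authors' algorithm): it repeatedly samples d points and scores the subspace they span by how many points it explains.

        import numpy as np

        rng = np.random.default_rng(1)
        n, d, m = 6, 3, 200
        basis = np.linalg.qr(rng.standard_normal((n, d)))[0]      # true subspace T
        inliers = (basis @ rng.standard_normal((d, 120))).T        # 120 points on T
        outliers = rng.standard_normal((80, n))                    # 80 random outliers
        points = np.vstack([inliers, outliers])

        def inlier_count(Q, pts, tol=1e-6):
            # Count points within tol of the subspace spanned by the columns of Q.
            residual = pts - (pts @ Q) @ Q.T
            return int(np.sum(np.linalg.norm(residual, axis=1) < tol))

        best_score, best_Q = -1, None
        for _ in range(200):
            sample = points[rng.choice(m, size=d, replace=False)]
            Q = np.linalg.qr(sample.T)[0]          # candidate d-dimensional subspace
            score = inlier_count(Q, points)
            if score > best_score:
                best_score, best_Q = score, Q

        print("points explained by the best candidate subspace:", best_score, "of", m)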

    On the Bit Complexity of Sum-of-Squares Proofs

    It has often been claimed in recent papers that one can find a degree $d$ Sum-of-Squares proof, if one exists, via the Ellipsoid algorithm. In a recent paper, Ryan O'Donnell notes that this widely quoted claim is not necessarily true. He presents an example of a polynomial system with bounded coefficients that admits low-degree proofs of non-negativity, but these proofs necessarily involve numbers with an exponential number of bits, causing the Ellipsoid algorithm to take exponential time. In this paper we obtain both positive and negative results on the bit complexity of SoS proofs. First, we propose a sufficient condition on a polynomial system that implies a bound on the coefficients in an SoS proof. We demonstrate that this sufficient condition is applicable for common use-cases of the SoS algorithm, such as Max-CSP, Balanced Separator, Max-Clique, Max-Bisection, and Unit-Vector constraints. On the negative side, O'Donnell asked whether every polynomial system containing Boolean constraints admits proofs of polynomial bit complexity. We answer this question in the negative, giving a counterexample system and a non-negative polynomial which has degree-two SoS proofs, but no SoS proof with small coefficients until degree $\sqrt{n}$.
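
    For context on where the Ellipsoid algorithm and coefficient sizes enter: finding a degree-2 SoS proof of nonnegativity amounts to finding a positive semidefinite Gram matrix that matches the polynomial's coefficients, a semidefinite feasibility problem. The sketch below (a minimal illustration with a fixed toy polynomial, assuming cvxpy is installed) searches for such a certificate for p(x, y) = x^2 - 2xy + 2y^2.

        import cvxpy as cp

        # Degree-2 SoS certificate: find Q >= 0 with p(x, y) = [x y] Q [x y]^T.
        Q = cp.Variable((2, 2), symmetric=True)
        constraints = [
            Q >> 0,               # Gram matrix must be positive semidefinite
            Q[0, 0] == 1,         # coefficient of x^2
            Q[1, 1] == 2,         # coefficient of y^2
            2 * Q[0, 1] == -2,    # coefficient of x*y
        ]
        prob = cp.Problem(cp.Minimize(0), constraints)
        prob.solve()

        print("solver status:", prob.status)   # "optimal" means an SoS proof exists
        print("Gram matrix:\n", Q.value)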

    The Geometry of Differential Privacy: the Sparse and Approximate Cases

    In this work, we study trade-offs between accuracy and privacy in the context of linear queries over histograms. This is a rich class of queries that includes contingency tables and range queries, and has been a focus of a long line of work. For a set of $d$ linear queries over a database $x \in \mathbb{R}^N$, we seek to find the differentially private mechanism that has the minimum mean squared error. For pure differential privacy, an $O(\log^2 d)$ approximation to the optimal mechanism is known. Our first contribution is to give an $O(\log^2 d)$ approximation guarantee for the case of $(\epsilon,\delta)$-differential privacy. Our mechanism is simple, efficient and adds correlated Gaussian noise to the answers. We prove its approximation guarantee relative to the hereditary discrepancy lower bound of Muthukrishnan and Nikolov, using tools from convex geometry. We next consider this question in the case when the number of queries exceeds the number of individuals in the database, i.e. when $d > n \triangleq \|x\|_1$. It is known that better mechanisms exist in this setting. Our second main contribution is to give an $(\epsilon,\delta)$-differentially private mechanism which is optimal up to a $\mathrm{polylog}(d,N)$ factor for any given query set $A$ and any given upper bound $n$ on $\|x\|_1$. This approximation is achieved by coupling the Gaussian noise addition approach with a linear regression step. We give an analogous result for the $\epsilon$-differential privacy setting. We also improve on the mean squared error upper bound for answering counting queries on a database of size $n$ by Blum, Ligett, and Roth, and match the lower bound implied by the work of Dinur and Nissim up to logarithmic factors. The connection between hereditary discrepancy and the privacy mechanism enables us to derive the first polylogarithmic approximation to the hereditary discrepancy of a matrix $A$.
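
    As a point of reference for the noise-addition mechanisms discussed above, the sketch below implements the standard isotropic Gaussian mechanism for a set of linear queries over a histogram; the paper's mechanism instead shapes correlated Gaussian noise to the geometry of the query matrix, which this baseline does not attempt. The calibration formula assumes epsilon < 1, and the query matrix and histogram are invented.

        import numpy as np

        def gaussian_mechanism(A, x, eps, delta, rng=None):
            """(eps, delta)-DP answers to the linear queries given by the rows of A."""
            rng = rng or np.random.default_rng()
            # l2 sensitivity: adding or removing one individual changes the histogram x
            # by 1 in one coordinate, so A x moves by at most the largest column norm.
            sensitivity = np.linalg.norm(A, axis=0).max()
            sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / eps
            return A @ x + rng.normal(scale=sigma, size=A.shape[0])

        A = np.array([[1.0, 1.0, 0.0],       # two range queries over a 3-bin histogram
                      [0.0, 1.0, 1.0]])
        x = np.array([10.0, 5.0, 3.0])
        print(gaussian_mechanism(A, x, eps=0.5, delta=1e-5))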

    Robust-to-Dynamics Optimization

    A robust-to-dynamics optimization (RDO) problem is an optimization problem specified by two pieces of input: (i) a mathematical program (an objective function $f:\mathbb{R}^n\rightarrow\mathbb{R}$ and a feasible set $\Omega\subseteq\mathbb{R}^n$), and (ii) a dynamical system (a map $g:\mathbb{R}^n\rightarrow\mathbb{R}^n$). Its goal is to minimize $f$ over the set $\mathcal{S}\subseteq\Omega$ of initial conditions that forever remain in $\Omega$ under $g$. The focus of this paper is on the case where the mathematical program is a linear program and the dynamical system is either a known linear map, or an uncertain linear map that can change over time. In both cases, we study a converging sequence of polyhedral outer approximations and (lifted) spectrahedral inner approximations to $\mathcal{S}$. Our inner approximations are optimized with respect to the objective function $f$, and their semidefinite characterization, which has a semidefinite constraint of fixed size, is obtained by applying polar duality to convex sets that are invariant under (multiple) linear maps. We characterize three barriers that can prevent the outer approximations from converging finitely. We prove that once these barriers are removed, our inner and outer approximating procedures find an optimal solution and a certificate of optimality for the RDO problem in a finite number of steps. Moreover, in the case where the dynamics are linear, we show that this phenomenon occurs in a number of steps that can be computed in time polynomial in the bit size of the input data. Our analysis also leads to a polynomial-time algorithm for RDO instances where the spectral radius of the linear map is bounded above by any constant less than one. Finally, in our concluding section, we propose a broader research agenda for studying optimization problems with dynamical systems constraints, of which RDO is a special case.
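
    The polyhedral outer approximations mentioned above take a simple form in the known-linear-dynamics case: with Omega = {x : Hx <= h} and dynamics x+ = Gx, the k-th approximation keeps the points whose first k iterates stay in Omega, and minimizing the objective over it lower-bounds the RDO value. The sketch below (invented problem data, assuming numpy and scipy are installed) computes these bounds for increasing k.

        import numpy as np
        from scipy.optimize import linprog

        c = np.array([1.0, 1.0])                  # linear objective f(x) = c^T x
        H = np.vstack([np.eye(2), -np.eye(2)])    # Omega = box [-1, 1]^2
        h = np.ones(4)
        G = np.array([[0.6, 0.5],
                      [-0.4, 0.7]])               # stable linear dynamics x+ = G x

        def outer_bound(k):
            """Minimize c^T x over {x : H G^i x <= h, i = 0..k}, an outer approximation of S."""
            rows, rhs, M = [], [], np.eye(2)
            for _ in range(k + 1):
                rows.append(H @ M)
                rhs.append(h)
                M = G @ M
            res = linprog(c, A_ub=np.vstack(rows), b_ub=np.concatenate(rhs),
                          bounds=[(None, None)] * 2)
            return res.fun

        for k in range(4):
            print(f"k = {k}: lower bound on the RDO value = {outer_bound(k):.4f}")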

    Robust Region-of-Attraction Estimation

    We propose a method to compute invariant subsets of the region of attraction for asymptotically stable equilibrium points of polynomial dynamical systems with bounded parametric uncertainty. Parameter-independent Lyapunov functions are used to characterize invariant subsets of the robust region of attraction. A branch-and-bound type refinement procedure reduces the conservatism. We demonstrate the method on an example from the literature and on uncertain controlled short-period aircraft dynamics.
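
    A small numerical sanity check may make the Lyapunov-based certificate concrete. The sketch below (not the paper's procedure, and with no parametric uncertainty) takes the time-reversed Van der Pol oscillator, the quadratic V(x) = x^T P x obtained from the Lyapunov equation of its linearization, and an illustrative level c = 0.8, and checks by sampling that dV/dt < 0 on the sublevel set away from the origin, the condition that makes {V <= c} an invariant region-of-attraction estimate.

        import numpy as np

        def f(x):
            # Time-reversed Van der Pol dynamics; the origin is asymptotically stable.
            return np.array([-x[1], x[0] + (x[0] ** 2 - 1.0) * x[1]])

        P = np.array([[1.5, -0.5],
                      [-0.5, 1.0]])    # solves A^T P + P A = -I for the linearization
        c = 0.8                        # candidate sublevel value (illustrative choice)

        def V(x):
            return x @ P @ x

        def Vdot(x):
            return 2.0 * (x @ P) @ f(x)

        # Sample a grid covering {V <= c} and check dV/dt < 0 away from the origin.
        grid = np.linspace(-1.2, 1.2, 121)
        samples = []
        for a in grid:
            for b in grid:
                x = np.array([a, b])
                if 1e-6 < V(x) <= c:
                    samples.append(x)

        print("dV/dt < 0 on all sampled points of {0 < V <= c}:",
              all(Vdot(x) < 0 for x in samples))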