
    Particle algorithms for optimization on binary spaces

    We discuss a unified approach to stochastic optimization of pseudo-Boolean objective functions based on particle methods, which includes the cross-entropy method and simulated annealing as special cases. We point out the need for auxiliary sampling distributions, that is, parametric families on binary spaces that are able to reproduce complex dependency structures, and illustrate their usefulness in our numerical experiments. We provide numerical evidence that particle-driven optimization algorithms based on such parametric families yield superior results on strongly multi-modal optimization problems, while local search heuristics outperform them on easier problems.
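
    As a concrete illustration of the simplest special case, the sketch below runs the cross-entropy method on {0,1}^d with an independent-Bernoulli sampling family, i.e. a parametric family that captures no dependency structure (the abstract's point is that richer families do better on multi-modal problems). The objective Q and all parameter values are hypothetical.

```python
import numpy as np

def cross_entropy_binary(f, d, n_samples=200, elite_frac=0.1,
                         smoothing=0.7, n_iters=50, seed=None):
    """Maximize a pseudo-Boolean f: {0,1}^d -> R by the cross-entropy
    method with independent Bernoulli marginals p (no dependencies)."""
    rng = np.random.default_rng(seed)
    p = np.full(d, 0.5)                         # initial marginals
    n_elite = max(1, int(elite_frac * n_samples))
    best_x, best_val = None, -np.inf
    for _ in range(n_iters):
        X = (rng.random((n_samples, d)) < p).astype(int)
        vals = np.array([f(x) for x in X])
        if vals.max() > best_val:
            best_val, best_x = vals.max(), X[vals.argmax()].copy()
        elite = X[np.argsort(vals)[-n_elite:]]  # best samples this round
        # Smoothed update of the marginals towards the elite mean
        p = smoothing * elite.mean(axis=0) + (1 - smoothing) * p
    return best_x, best_val

# Toy quadratic pseudo-Boolean objective (hypothetical data)
Q = np.array([[1., -2., 0.], [-2., 3., 1.], [0., 1., 2.]])
x_best, v_best = cross_entropy_binary(lambda x: x @ Q @ x, d=3, seed=0)
```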

    Data-driven Inverse Optimization with Imperfect Information

    In data-driven inverse optimization, an observer aims to learn the preferences of an agent who solves a parametric optimization problem depending on an exogenous signal. The observer thus seeks the agent's objective function that best explains a historical sequence of signals and corresponding optimal actions. We focus here on situations where the observer has imperfect information, that is, where the agent's true objective function is not contained in the search space of candidate objectives, where the agent suffers from bounded rationality or implementation errors, or where the observed signal-response pairs are corrupted by measurement noise. We formalize this inverse optimization problem as a distributionally robust program minimizing the worst-case risk that the predicted decision (i.e., the decision implied by a particular candidate objective) differs from the agent's actual response to a random signal. We show that our framework offers rigorous out-of-sample guarantees for different loss functions used to measure prediction errors, and that the emerging inverse optimization problems can be exactly reformulated as (or safely approximated by) tractable convex programs when a new suboptimality loss function is used. Extensive numerical tests show that the proposed distributionally robust approach to inverse optimization often attains better out-of-sample performance than state-of-the-art approaches.
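
    In symbols, and filling in notation the abstract leaves implicit, the program has the shape below: theta parameterizes a candidate objective F(s, x; theta), X(s) is the agent's feasible set given the signal s, and the ambiguity set is some ball of radius epsilon around the empirical distribution of the N observed signal-response pairs (the abstract does not pin the ball down; Wasserstein balls are one common choice). The second formula is one reading of the suboptimality loss the abstract mentions.

```latex
\min_{\theta \in \Theta} \;
  \sup_{\mathbb{Q} \in \mathcal{B}_{\varepsilon}(\widehat{\mathbb{P}}_N)}
  \mathbb{E}^{\mathbb{Q}}\!\bigl[\, \ell_{\theta}(s, x) \,\bigr],
\qquad
\ell_{\theta}(s, x) \;=\; F(s, x; \theta) \;-\; \min_{y \in \mathcal{X}(s)} F(s, y; \theta).
```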

    On the Design of LQR Kernels for Efficient Controller Learning

    Finding optimal feedback controllers for nonlinear dynamic systems from data is hard. Recently, Bayesian optimization (BO) has been proposed as a powerful framework for direct controller tuning from experimental trials. For selecting the next query point and finding the global optimum, BO relies on a probabilistic description of the latent objective function, typically a Gaussian process (GP). As is shown herein, however, GPs with a common kernel choice can lead to poor learning outcomes on standard quadratic control problems. For a first-order system, we construct two kernels that specifically leverage the structure of the well-known Linear Quadratic Regulator (LQR), yet retain the flexibility of Bayesian nonparametric learning. Simulations of uncertain linear and nonlinear systems demonstrate that the LQR kernels yield superior learning performance.
    Comment: 8 pages, 5 figures, to appear in the 56th IEEE Conference on Decision and Control (CDC 2017).
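
    To make the idea concrete, here is a minimal sketch for a scalar discrete-time system x_{k+1} = a x_k + b u_k under static feedback u = -f x. The closed-form cost, the rank-one "LQR feature" kernel, and all numbers are illustrative assumptions, not the paper's two kernel constructions.

```python
import numpy as np

def lqr_cost(f, a=0.9, b=1.0, q=1.0, r=0.1):
    """Closed-form infinite-horizon LQR cost of u = -f*x for the scalar
    system x_{k+1} = a*x_k + b*u_k with x_0 = 1 (hypothetical numbers).
    Defined only when the closed loop rho = a - b*f is stable."""
    rho = a - b * f
    assert abs(rho) < 1.0, "gain must stabilize the nominal system"
    return (q + r * f**2) / (1.0 - rho**2)

def lqr_kernel(f1, f2, sigma2=1.0):
    """Rank-one kernel in the nominal LQR-cost feature: two gains are
    deemed similar when their nominal closed-loop costs are similar."""
    return sigma2 * lqr_cost(f1) * lqr_cost(f2)

# Covariance between candidate gains under the nominal model
print(lqr_kernel(0.5, 0.6), lqr_kernel(0.5, 1.5))
```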

    An improved multi-parametric programming algorithm for flux balance analysis of metabolic networks

    Flux balance analysis has proven an effective tool for analyzing metabolic networks. In flux balance analysis, reaction rates and optimal pathways are ascertained by solving a linear program, in which the growth rate is maximized subject to mass-balance constraints. A variety of cell functions in response to environmental stimuli can be quantified using flux balance analysis by parameterizing the linear program with respect to extracellular conditions. However, for most large, genome-scale metabolic networks of practical interest, the resulting parametric problem has multiple and highly degenerate optimal solutions, which are computationally challenging to handle. An improved multi-parametric programming algorithm based on active-set methods is introduced in this paper to overcome these computational difficulties. Degeneracy and multiplicity are handled, respectively, by introducing generalized inverses and auxiliary objective functions into the formulation of the optimality conditions. These improvements are especially effective for metabolic networks because their stoichiometry matrices are generally sparse; thus, fast and efficient algorithms from sparse linear algebra can be leveraged to compute generalized inverses and null-space bases. We illustrate the application of our algorithm to flux balance analysis of metabolic networks by studying a reduced metabolic model of Corynebacterium glutamicum and a genome-scale model of Escherichia coli. We then demonstrate how the critical regions resulting from these studies can be associated with optimal metabolic modes and discuss the physical relevance of optimal pathways arising from various auxiliary objective functions. Achieving more than five-fold improvement in computational speed over existing multi-parametric programming tools, the proposed algorithm proves promising in handling genome-scale metabolic models.
    Comment: Accepted in J. Optim. Theory Appl. First draft was submitted on August 4th, 201
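
    The linear program at the core of flux balance analysis is easy to state; the toy sketch below maximizes a biomass flux subject to steady-state mass balance S v = 0 and flux bounds, with a hypothetical 3-metabolite, 4-reaction stoichiometry. Genome-scale models have thousands of reactions, and this snippet handles none of the degeneracy, multiplicity, or parametric aspects the paper addresses.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical stoichiometry: 3 metabolites x 4 reactions
S = np.array([[1., -1.,  0.,  0.],
              [0.,  1., -1., -1.],
              [0.,  0.,  1., -1.]])
c = np.array([0., 0., 0., 1.])        # maximize flux through reaction 4 ("biomass")
bounds = [(0., 10.)] * 4              # uptake/flux bounds (parametric in practice)

# linprog minimizes, so negate the objective; S v = 0 enforces mass balance
res = linprog(-c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
growth_rate, fluxes = -res.fun, res.x
```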

    An Exponential Lower Bound on the Complexity of Regularization Paths

    For a variety of regularized optimization problems in machine learning, algorithms computing the entire solution path have been developed recently. Most of these problems are quadratic programs parameterized by a single regularization parameter, the Support Vector Machine (SVM) being a prominent example. Solution path algorithms compute not only the solution for one particular value of the regularization parameter but the entire path of solutions, making the selection of an optimal parameter much easier. It has been assumed that these piecewise linear solution paths have only linear complexity, i.e. linearly many bends. We prove that for the support vector machine this complexity can be exponential in the number of training points in the worst case. More strongly, we construct a single instance of n input points in d dimensions for an SVM such that at least \Theta(2^{n/2}) = \Theta(2^d) many distinct subsets of support vectors occur as the regularization parameter changes.
    Comment: Journal version, 28 pages, 5 figures.
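
    The quantity being bounded can be probed empirically (this is not the paper's worst-case construction): sweep the regularization parameter over a grid on random data and count the distinct support-vector sets encountered. A finite grid can only lower-bound the number of bends on the true path.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))                  # hypothetical data
y = np.where(X[:, 0] + 0.3 * rng.normal(size=20) > 0, 1, -1)

# Distinct sets of support-vector indices along a grid of C values
support_sets = {frozenset(SVC(kernel="linear", C=C).fit(X, y).support_)
                for C in np.logspace(-3, 3, 200)}
print(f"{len(support_sets)} distinct support-vector sets on this grid")
```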

    OSQP: An Operator Splitting Solver for Quadratic Programs

    We present a general-purpose solver for convex quadratic programs based on the alternating direction method of multipliers, employing a novel operator splitting technique that requires the solution of a quasi-definite linear system with the same coefficient matrix at almost every iteration. Our algorithm is very robust, placing no requirements on the problem data such as positive definiteness of the objective function or linear independence of the constraint functions. It can be configured to be division-free once an initial matrix factorization is carried out, making it suitable for real-time applications in embedded systems. In addition, our technique is the first operator splitting method for quadratic programs able to reliably detect primal and dual infeasible problems from the algorithm iterates. The method also supports factorization caching and warm starting, making it particularly efficient when solving parametrized problems arising in finance, control, and machine learning. Our open-source C implementation OSQP has a small footprint, is library-free, and has been extensively tested on many problem instances from a wide variety of application areas. It is typically ten times faster than competing interior-point methods, and sometimes much more when factorization caching or warm starting is used. OSQP has already shown a large impact, with tens of thousands of users in both academia and large corporations.
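
    The solver ships with Python bindings; the small example below (made-up two-variable QP data) shows the setup-once, update-and-re-solve pattern that exploits factorization caching and warm starting.

```python
import numpy as np
import scipy.sparse as sp
import osqp

# minimize 0.5 x'Px + q'x  subject to  l <= Ax <= u  (hypothetical data)
P = sp.csc_matrix([[4., 1.], [1., 2.]])
q = np.array([1., 1.])
A = sp.csc_matrix([[1., 1.], [1., 0.], [0., 1.]])
l = np.array([1., 0., 0.])
u = np.array([1., 0.7, 0.7])

prob = osqp.OSQP()
prob.setup(P, q, A, l, u)             # KKT matrix is factorized once here
res = prob.solve()

prob.update(q=np.array([2., 3.]))     # perturb the problem ...
res2 = prob.solve()                   # ... and re-solve, reusing the factorization
```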