15,424 research outputs found

    An inflationary differential evolution algorithm for space trajectory optimization

    In this paper we define a discrete dynamical system that governs the evolution of a population of agents. From this dynamical system, a variant of Differential Evolution is derived. It is then demonstrated that, under some assumptions on the differential mutation strategy and on the local structure of the objective function, the proposed dynamical system has fixed points towards which it converges with probability one as the number of generations tends to infinity. This property is used to derive an algorithm that outperforms standard Differential Evolution on some space trajectory optimization problems. The novel algorithm is then extended with a guided restart procedure that further improves performance by reducing the probability of stagnation in deceptive local minima.
    Comment: IEEE Transactions on Evolutionary Computation, 2011. ISSN 1089-778X.
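    The abstract combines two ingredients: a Differential Evolution variant and a restart that "re-inflates" a collapsed population around the incumbent best. Below is a minimal, hedged sketch of that combination, using standard DE/rand/1 mutation and an illustrative contraction test; the parameter values, the stagnation rule, and the restart spread are assumptions, not the paper's actual settings.

```python
import numpy as np

def inflationary_de(f, bounds, pop_size=20, F=0.8, CR=0.9,
                    max_gen=500, contraction_tol=1e-6, rng=None):
    """DE with a restart flavour: when the population contracts below a
    tolerance, it is re-inflated around the best point found so far.
    Illustrative sketch only, not the published algorithm."""
    rng = rng or np.random.default_rng()
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    best_x, best_f = pop[fit.argmin()].copy(), fit.min()

    for _ in range(max_gen):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # DE/rand/1 mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True             # keep at least one gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft < fit[i]:                             # greedy selection
                pop[i], fit[i] = trial, ft
                if ft < best_f:
                    best_x, best_f = trial.copy(), ft
        # "inflation": restart once the population has collapsed
        if np.max(pop.max(0) - pop.min(0)) < contraction_tol:
            pop = np.clip(best_x + rng.normal(0, 0.1 * (hi - lo),
                                              (pop_size, dim)), lo, hi)
            fit = np.array([f(x) for x in pop])
    return best_x, best_f
```

    For example, `inflationary_de(lambda x: np.sum(x**2), np.array([[-5.0, 5.0]] * 4))` minimizes a 4-dimensional sphere function.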

    A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization

    We propose a new first-order primal-dual optimization framework for a convex optimization template with broad applications. Our optimization algorithms feature optimal convergence guarantees under a variety of common structure assumptions on the problem template. Our analysis relies on a novel combination of three classic ideas applied to the primal-dual gap function: smoothing, acceleration, and homotopy. The algorithms due to the new approach achieve the best known convergence rates, in particular when the template consists only of nonsmooth functions. We also outline a restart strategy for the acceleration that significantly enhances practical performance. We demonstrate relations with the augmented Lagrangian method and show how to exploit strongly convex objectives with rigorous convergence rate guarantees. We provide numerical evidence on two examples and illustrate that the new methods can outperform the state of the art, including the Chambolle-Pock method and the alternating direction method of multipliers (ADMM).
    Comment: 35 pages, accepted for publication in SIAM J. Optimization. Tech. report, Oct. 2015 (last update Sept. 2016).
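    To make the three ingredients concrete, here is a toy sketch (not the paper's primal-dual scheme) that applies Huber smoothing, Nesterov acceleration with a function-value restart, and a homotopy that gradually tightens the smoothing parameter, on the model problem min_x ||Ax - b||_1. The constants and the decay schedule are illustrative assumptions.

```python
import numpy as np

def smoothed_accel_homotopy(A, b, x0, iters=300, beta0=1.0, decay=0.97):
    """Toy combination of smoothing + acceleration + homotopy on
    min_x ||Ax - b||_1 (illustrative; not the paper's algorithm)."""
    def grad(y, beta):
        # gradient of the Huber-smoothed l1 residual, smoothness beta
        return A.T @ np.clip((A @ y - b) / beta, -1.0, 1.0)

    obj = lambda z: np.linalg.norm(A @ z - b, 1)
    L = np.linalg.norm(A, 2) ** 2        # smoothed grad is (L / beta)-Lipschitz
    x, y, t, beta = x0.copy(), x0.copy(), 1.0, beta0
    for _ in range(iters):
        x_new = y - (beta / L) * grad(y, beta)             # smoothed grad step
        if obj(x_new) > obj(x):
            t, y = 1.0, x_new.copy()                       # restart acceleration
        else:
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # Nesterov momentum
            t = t_new
        x = x_new
        beta *= decay                    # homotopy: tighten the smoothing
    return x
```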

    Identification of the Isotherm Function in Chromatography Using CMA-ES

    This paper deals with the identification of the flux for a system of conservation laws in the specific example of analytic chromatography. The fundamental equations of the chromatographic process are highly nonlinear. The state-of-the-art evolution strategy CMA-ES (the Covariance Matrix Adaptation Evolution Strategy) is used to identify the parameters of the so-called isotherm function. The approach is validated on different configurations of simulated data using one-, two-, or three-component mixtures. CMA-ES is then applied to real data, and its results are compared to those of a gradient-based strategy.
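    A minimal sketch of such an identification loop, assuming a Langmuir isotherm q(c) = q_s K c / (1 + K c) and a plain least-squares misfit on synthetic data; the real objective propagates the isotherm through the nonlinear conservation-law model, which is elided here, and the `cma` package (pip install cma) stands in for the authors' CMA-ES setup.

```python
import numpy as np
import cma   # Hansen's reference CMA-ES implementation

# Hypothetical setup: recover Langmuir parameters (q_s, K) from noisy
# adsorption measurements at known fluid-phase concentrations.
c_data = np.linspace(0.1, 5.0, 40)
true_qs, true_K = 4.0, 1.3
q_data = true_qs * true_K * c_data / (1 + true_K * c_data)
q_data += 0.02 * np.random.default_rng(0).normal(size=c_data.size)

def misfit(theta):
    qs, K = theta
    q_model = qs * K * c_data / (1 + K * c_data)    # Langmuir isotherm
    return float(np.sum((q_model - q_data) ** 2))   # least-squares misfit

es = cma.CMAEvolutionStrategy([1.0, 1.0], 0.5)      # initial guess, step size
while not es.stop():
    thetas = es.ask()                               # sample candidate params
    es.tell(thetas, [misfit(t) for t in thetas])    # rank by misfit
print("identified (q_s, K):", es.result.xbest)
```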

    Computational Complexity versus Statistical Performance on Sparse Recovery Problems

    We show that several classical quantities controlling compressed sensing performance directly match classical parameters controlling algorithmic complexity. We first describe linearly convergent restart schemes on first-order methods solving a broad range of compressed sensing problems, where sharpness at the optimum controls convergence speed. We show that for sparse recovery problems, this sharpness can be written as a condition number, given by the ratio between true signal sparsity and the largest signal size that can be recovered by the observation matrix. In a similar vein, Renegar's condition number is a data-driven complexity measure for convex programs, generalizing classical condition numbers for linear systems. We show that for a broad class of compressed sensing problems, the worst-case value of this algorithmic complexity measure taken over all signals matches the restricted singular value of the observation matrix, which controls robust recovery performance. Overall, this means that in both cases a single parameter directly controls both computational complexity and recovery performance in compressed sensing problems. Numerical experiments illustrate these points using several classical algorithms.
    Comment: Final version, to appear in Information and Inference.
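    As a concrete instance of such a restart scheme, here is a hedged sketch of restarted FISTA on the LASSO, min 0.5||Ax - b||^2 + lam||x||_1. The fixed inner-iteration budget stands in for the sharpness-dependent optimal schedule analysed in the paper, and all parameter choices are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def restarted_fista(A, b, lam, n_restarts=8, inner_iters=100):
    """Restart scheme on accelerated proximal gradient (FISTA) for the
    LASSO.  Under a sharpness condition at the optimum, periodically
    wiping the momentum yields linear convergence; the fixed schedule
    here is illustrative, not the paper's optimal one."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the grad
    x = np.zeros(A.shape[1])
    for _ in range(n_restarts):
        y, t = x.copy(), 1.0                 # restart: wipe the momentum
        for _ in range(inner_iters):
            g = A.T @ (A @ y - b)
            x_new = soft_threshold(y - g / L, lam / L)
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)
            x, t = x_new, t_new
    return x
```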

    A comparison of RESTART implementations

    The RESTART method is a widely applicable simulation technique for the estimation of rare-event probabilities. The method is based on the idea of restarting the simulation in certain system states in order to generate more occurrences of the rare event. One of the main questions for any RESTART implementation is how and when to restart the simulation in order to achieve the most accurate results for a fixed simulation effort. We investigate and compare, both theoretically and empirically, different implementations of the RESTART method. We find that the original RESTART implementation, in which each path is split into a fixed number of copies, may not be the most efficient one. It is generally better to fix the total simulation effort for each stage of the simulation. Furthermore, given this effort, the best strategy is to restart an equal number of times from each state, rather than to restart each time from a randomly chosen state.
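    A toy sketch of the fixed-effort strategy the comparison favours: each stage spends the same total effort and restarts (almost) equally often from each state saved at the previous threshold. It is shown on a drifted random walk that must reach a high level before ruin; the model and all numbers are illustrative assumptions, not the paper's benchmark.

```python
import numpy as np

rng = np.random.default_rng(1)

def advance(x, up, down=0.0, drift=-0.2):
    """Run a drifted Gaussian walk from x until it first reaches `up`
    (return the entrance state) or falls to `down` (return None)."""
    while down < x < up:
        x += drift + rng.normal()
    return x if x >= up else None

def fixed_effort_restart(levels, x0=1.0, effort=2000):
    """Fixed-effort splitting: every stage spends exactly `effort` trials,
    restarted in (almost) equal proportion from the states saved at the
    previous threshold; the rare-event probability estimate is the
    product of the per-stage hit fractions."""
    states, p_hat = [x0], 1.0
    for lv in levels:
        starts = np.resize(np.asarray(states), effort)  # equal restarts
        hits = [h for s in starts if (h := advance(s, lv)) is not None]
        if not hits:
            return 0.0
        p_hat *= len(hits) / effort      # conditional level-crossing rate
        states = hits                    # entrance states for next stage
    return p_hat

print(fixed_effort_restart(levels=[2, 3, 4, 5, 6]))
```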