
    Non-smooth Non-convex Bregman Minimization: Unification and new Algorithms

    We propose a unifying algorithm for non-smooth non-convex optimization. The algorithm approximates the objective function by a convex model function and finds an approximate (Bregman) proximal point of the convex model. This approximate minimizer of the model function yields a descent direction, along which the next iterate is found. Complemented with an Armijo-like line search strategy, we obtain a flexible algorithm for which we prove (subsequential) convergence to a stationary point under weak assumptions on the growth of the model function error. Special instances of the algorithm with a Euclidean distance function are, for example, Gradient Descent, Forward-Backward Splitting, and ProxDescent, without the common requirement of a "Lipschitz continuous gradient". In addition, we consider a broad class of Bregman distance functions (generated by Legendre functions) replacing the Euclidean distance. The algorithm has a wide range of applications, including many linear and non-linear inverse problems in signal/image processing and machine learning.
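
    The special instance mentioned above, Gradient Descent with an Armijo-like line search, is easy to make concrete. Below is a minimal sketch assuming a smooth objective, so the convex model is the first-order Taylor expansion and its Euclidean proximal point yields the direction d = -step * grad f(x); the function names, step size, and line-search constants are illustrative and not taken from the paper.

```python
# Minimal sketch of the model-function framework (Euclidean case), assuming a
# smooth objective so that the convex model is the first-order Taylor expansion;
# names (f, grad_f, step, Armijo constants) are illustrative, not from the paper.
import numpy as np

def model_prox_descent(f, grad_f, x0, step=1.0, delta=0.5, shrink=0.5,
                       tol=1e-8, max_iter=500):
    """Gradient Descent recovered as a special instance: the (Euclidean)
    proximal point of the linear model f(x) + <grad_f(x), d> yields the
    descent direction d = -step * grad_f(x), followed by an Armijo-like
    backtracking line search along d."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad_f(x)
        d = -step * g                      # approximate model minimizer -> direction
        decrease = g @ d                   # predicted model decrease (negative)
        if abs(decrease) < tol:
            break
        gamma, fx = 1.0, f(x)
        # Armijo-like condition: shrink gamma until sufficient decrease holds
        while f(x + gamma * d) > fx + delta * gamma * decrease:
            gamma *= shrink
        x = x + gamma * d
    return x

# Usage: minimize a simple smooth test function
f = lambda x: np.sum((x - 1.0) ** 4)
grad_f = lambda x: 4.0 * (x - 1.0) ** 3
print(model_prox_descent(f, grad_f, np.zeros(3)))
```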

    Nonsmooth Control Barrier Functions for Obstacle Avoidance between Convex Regions

    In this paper, we focus on non-conservative obstacle avoidance between robots with control-affine dynamics and strictly convex or polytopic shapes. The core challenge for this obstacle avoidance problem is that the minimum distance between strictly convex regions or polytopes is generally implicit and non-smooth, so that distance constraints cannot be enforced directly in the optimization problem. To handle this challenge, we employ non-smooth control barrier functions to reformulate the avoidance problem in the dual space, with the positivity of the minimum distance between robots equivalently expressed using a quadratic program. Our approach is proven to guarantee system safety. We theoretically analyze the smoothness properties of the minimum-distance quadratic program and its KKT conditions. We validate our approach by demonstrating computationally efficient obstacle avoidance for multi-agent robotic systems with strictly convex and polytopic shapes. To the best of our knowledge, this is the first time a real-time QP problem has been formulated for general non-conservative avoidance between strictly convex shapes and polytopes. Comment: 17 pages
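
    The building block that the dual reformulation rests on, the minimum distance between two convex regions posed as an optimization problem, can be sketched directly. The snippet below states the minimum-distance QP between two polytopes {x : A1 x <= b1} and {x : A2 x <= b2} with CVXPY; the matrices are hypothetical toy data, and the dual-space control barrier function construction from the paper is not reproduced here.

```python
# Minimal sketch: the minimum distance between two polytopic regions
# {x : A1 x <= b1} and {x : A2 x <= b2} as a quadratic program (CVXPY).
# The matrices below are illustrative toy data.
import numpy as np
import cvxpy as cp

# Two axis-aligned boxes: the unit box and a box shifted to the right (hypothetical data)
A1 = np.vstack([np.eye(2), -np.eye(2)]); b1 = np.array([1, 1, 1, 1])
A2 = A1.copy();                           b2 = np.array([4, 1, -2, 1])  # box [2,4] x [-1,1]

x = cp.Variable(2)   # point in the first polytope
y = cp.Variable(2)   # point in the second polytope
prob = cp.Problem(cp.Minimize(cp.sum_squares(x - y)),
                  [A1 @ x <= b1, A2 @ y <= b2])
prob.solve()
print("min distance:", np.sqrt(prob.value))   # expected ~1.0 for these boxes
```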

    Continuation of Nesterov's Smoothing for Regression with Structured Sparsity in High-Dimensional Neuroimaging

    Predictive models can be used on high-dimensional brain images for diagnosis of a clinical condition. Spatial regularization through structured sparsity offers new perspectives in this context and reduces the risk of overfitting the model while providing interpretable neuroimaging signatures by forcing the solution to adhere to domain-specific constraints. Total Variation (TV) enforces spatial smoothness of the solution while segmenting predictive regions from the background. We consider the problem of minimizing the sum of a smooth convex loss, a non-smooth convex penalty (whose proximal operator is known) and a wide range of possible complex, non-smooth convex structured penalties such as TV or overlapping group Lasso. Existing solvers are either limited in the functions they can minimize or in their practical capacity to scale to high-dimensional imaging data. Nesterov's smoothing technique can be used to minimize a large number of non-smooth convex structured penalties, but reasonable precision requires a small smoothing parameter, which slows down the convergence speed. To benefit from the versatility of Nesterov's smoothing technique, we propose a first-order continuation algorithm, CONESTA, which automatically generates a sequence of decreasing smoothing parameters. The generated sequence maintains the optimal convergence speed towards any globally desired precision. Our main contributions are an expression of the duality gap that probes the current distance to the global optimum and is used to adapt the smoothing parameter and hence the convergence speed, and a convergence rate that improves on classical proximal gradient smoothing methods. We demonstrate on both simulated and high-dimensional structural neuroimaging data that CONESTA significantly outperforms many state-of-the-art solvers with regard to convergence speed and precision. Comment: 11 pages, 6 figures, accepted in IEEE TMI, IEEE Transactions on Medical Imaging 201
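
    The continuation idea can be sketched schematically: an inner solver works on a Nesterov-smoothed surrogate while an outer loop shrinks the smoothing parameter mu. In the toy sketch below the non-smooth penalty is an l1 norm smoothed Huber-style, and mu follows a simple geometric schedule instead of the duality-gap rule that CONESTA uses; all names, constants, and the inner gradient solver are illustrative assumptions, not the paper's implementation.

```python
# Schematic continuation sketch in the spirit of CONESTA: an inner gradient solver
# runs on a smoothed surrogate, and an outer loop tightens the smoothing parameter.
# Here the penalty is a Huber-smoothed l1 norm and mu decreases geometrically
# (the paper instead derives mu from a duality-gap estimate).
import numpy as np

def smoothed_l1_grad(x, mu):
    # Gradient of the Huber/Nesterov smoothing of |x_i|: x/mu inside [-mu, mu], sign(x) outside
    return np.clip(x / mu, -1.0, 1.0)

def conesta_like(A, b, lam, x0, mu0=1.0, shrink=0.5, outer=10, inner=200):
    x, mu = x0.copy(), mu0
    L_smooth = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the data term
    for _ in range(outer):
        step = 1.0 / (L_smooth + lam / mu)         # step size valid for the smoothed objective
        for _ in range(inner):                     # inner solver on the smoothed problem
            g = A.T @ (A @ x - b) + lam * smoothed_l1_grad(x, mu)
            x = x - step * g
        mu *= shrink                               # continuation: tighten the smoothing
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10)); b = rng.standard_normal(20)
print(conesta_like(A, b, lam=0.5, x0=np.zeros(10)))
```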

    Speeding-Up a Random Search for the Global Minimum of a Non-Convex, Non-Smooth Objective Function

    The need to find the global minimum of a highly non-convex, non-smooth objective function over a high-dimensional and possibly disconnected feasible domain, within a practical amount of computing time, arises in many fields. Such objective functions and/or feasible domains are so poorly behaved that gradient-based optimization methods are useful only locally, if at all. Random search methods offer a viable alternative, but their convergence properties are not well studied. The present work adapts a proof by Baba et al. (1977) to establish asymptotic convergence for Monotonic Basin Hopping (MBH), a random search method used in molecular modeling and interplanetary spacecraft trajectory optimization. In addition, the present work uses the framework of First Passage Times (the time required for the first arrival to within a very small distance of the global minimum) and Gamma distribution approximations to First Passage Time Densities to study MBH convergence speed. The present work then provides analytically supported methods for speeding up Monotonic Basin Hopping. The speed-up methods are novel, complementary, and can be used separately or in combination. Their effectiveness is shown to be dramatic in the case of MBH operating on different highly non-convex, non-smooth objective functions and complicated feasible domains. In addition, explanations are provided as to why some speed-up methods are very effective on some highly non-convex, non-smooth objective functions having complicated feasible domains, while other methods are relatively ineffective. The present work is the first systematic study of the MBH convergence process and methods for speeding it up, as opposed to applications of MBH.
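
    For reference, the basic Monotonic Basin Hopping loop, perturb the incumbent, run a local minimizer, and accept only improvements, can be written in a few lines. The sketch below uses SciPy's local minimizer on a standard multi-modal test function; the perturbation scale and hop budget are illustrative, and none of the speed-up methods studied in this work are included.

```python
# Minimal sketch of Monotonic Basin Hopping (MBH): perturb the incumbent, run a
# local gradient-based minimizer, and keep the result only if it improves the best
# value found so far. Test function, perturbation scale, and budget are illustrative.
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    # Highly multi-modal test objective with global minimum 0 at the origin
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def monotonic_basin_hopping(f, x0, n_hops=200, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    best_x = minimize(f, x0).x
    best_f = f(best_x)
    for _ in range(n_hops):
        trial = minimize(f, best_x + sigma * rng.standard_normal(best_x.shape))
        if trial.fun < best_f:            # monotonic acceptance: only strict improvements
            best_x, best_f = trial.x, trial.fun
    return best_x, best_f

x, fx = monotonic_basin_hopping(rastrigin, np.full(3, 3.0))
print(x, fx)
```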

    Quantized Consensus ADMM for Multi-Agent Distributed Optimization

    Multi-agent distributed optimization over a network minimizes a global objective formed by a sum of local convex functions using only local computation and communication. We develop and analyze a quantized distributed algorithm based on the alternating direction method of multipliers (ADMM) when inter-agent communications are subject to finite capacity and other practical constraints. While existing quantized ADMM approaches only work for quadratic local objectives, the proposed algorithm can deal with more general objective functions (possibly non-smooth), including the LASSO. Under certain convexity assumptions, our algorithm converges to a consensus within $\log_{1+\eta} \Omega$ iterations, where $\eta > 0$ depends on the local objectives and the network topology, and $\Omega$ is a polynomial determined by the quantization resolution, the distance between initial and optimal variable values, the local objective functions, and the network topology. A tight upper bound on the consensus error is also obtained, which does not depend on the size of the network. Comment: 30 pages, 4 figures; to be submitted to IEEE Trans. Signal Processing. arXiv admin note: text overlap with arXiv:1307.5561 by other authors
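
    A schematic picture of consensus ADMM with quantized exchanges helps fix ideas. The toy sketch below uses scalar quadratic local objectives (so the local updates have closed form) and a uniform quantizer standing in for the finite-capacity channel; the penalty parameter, grid resolution, and data are illustrative assumptions, and the paper's convergence guarantees are not claimed for this code.

```python
# Schematic sketch of consensus ADMM with quantized exchanges, for scalar quadratic
# local objectives f_i(x) = 0.5 * a_i * (x - c_i)^2. Each agent quantizes what it
# "transmits"; rho, the grid resolution, and the problem data are illustrative.
import numpy as np

def quantize(v, resolution=0.05):
    # Uniform quantizer modelling a finite-capacity communication channel
    return resolution * np.round(v / resolution)

def quantized_consensus_admm(a, c, rho=1.0, iters=100, resolution=0.05):
    n = len(a)
    x, z, u = np.zeros(n), 0.0, np.zeros(n)
    for _ in range(iters):
        # Local primal updates (closed form for quadratic objectives)
        x = (a * c + rho * (z - u)) / (a + rho)
        x_sent = quantize(x + u, resolution)       # only quantized values leave each agent
        z = x_sent.mean()                          # consensus (global averaging) step
        u = u + x - z                              # local dual updates
    return x, z

a = np.array([1.0, 2.0, 4.0]); c = np.array([0.0, 1.0, 3.0])
x, z = quantized_consensus_admm(a, c)
print(z, "vs exact minimizer", (a * c).sum() / a.sum())
```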