
    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. It contains, in particular, the scientific program both in survey form and in full detail, along with information on the social program, the venue, special meetings, and more.

    An Effective Branch and Bound Algorithm for Minimax Linear Fractional Programming

    An effective branch and bound algorithm is proposed for globally solving the minimax linear fractional programming problem (MLFP). In this algorithm, lower bounds are computed during the branch and bound search by solving a sequence of linear relaxation programming problems (LRP) of the problem (MLFP); these relaxations are derived by a new linear relaxation bounding technique and can be solved efficiently by the simplex method. The proposed branch and bound algorithm converges to the global optimal solution of the problem (MLFP) through successive refinement of the feasible region and the solutions of a series of LRPs. Numerical results for several test problems are reported to show the feasibility and effectiveness of the proposed algorithm.
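
    The abstract above describes the classic branch-and-bound pattern: upper bounds from feasible points, lower bounds from relaxations, and refinement of the region with the weakest bound. The Python sketch below illustrates only that generic pattern on a hypothetical 1-D toy problem; the simple interval bound stands in for the paper's linear relaxation (LRP), which is not reproduced here.

        import heapq

        def branch_and_bound(region, lower_bound, evaluate, split, tol=1e-6, max_iter=10000):
            # Best-first branch and bound: pop the region with the smallest lower
            # bound, stop when the bound cannot beat the incumbent, else refine.
            best_x, best_val = evaluate(region)
            heap = [(lower_bound(region), 0, region)]
            counter = 1
            for _ in range(max_iter):
                if not heap:
                    break
                bound, _, reg = heapq.heappop(heap)
                if bound >= best_val - tol:          # gap closed: incumbent is optimal
                    break
                for sub in split(reg):
                    x, val = evaluate(sub)           # feasible point -> upper bound
                    if val < best_val:
                        best_x, best_val = x, val
                    sub_bound = lower_bound(sub)     # relaxation -> lower bound
                    if sub_bound < best_val - tol:   # keep only promising subregions
                        heapq.heappush(heap, (sub_bound, counter, sub))
                        counter += 1
            return best_x, best_val

        # Toy usage: minimize f(x) = x**4 - 3*x over [0, 2]; the interval lower
        # bound below plays the role that the LRP solve plays in the paper.
        f = lambda x: x**4 - 3*x
        lb = lambda r: r[0]**4 - 3*r[1]              # valid: both terms are monotone on [0, 2]
        ev = lambda r: ((r[0] + r[1]) / 2, f((r[0] + r[1]) / 2))
        sp = lambda r: [(r[0], (r[0] + r[1]) / 2), ((r[0] + r[1]) / 2, r[1])]
        print(branch_and_bound((0.0, 2.0), lb, ev, sp))   # approx (0.9086, -2.044)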

    Extrapolated proximal subgradient algorithms for nonconvex and nonsmooth fractional programs

    In this paper, we consider a broad class of nonsmooth and nonconvex fractional programs, which encompass many important modern optimization problems arising from diverse areas, such as the recently proposed scale-invariant sparse signal reconstruction problem in signal processing. We propose a proximal subgradient algorithm with extrapolations for solving this optimization model and show that the sequence of iterates generated by the algorithm is bounded and that any of its limit points is a stationary point of the model problem. The choice of our extrapolation parameter is flexible and includes the popular extrapolation parameter adopted in the restarted fast iterative shrinkage-thresholding algorithm (FISTA). By providing a unified analysis framework of descent methods, we establish the convergence of the full sequence under the assumption that a suitable merit function satisfies the Kurdyka–Łojasiewicz property. Our algorithm exhibits linear convergence for the scale-invariant sparse signal reconstruction problem and the Rayleigh quotient problem over a spherical constraint. When the denominator is the maximum of finitely many continuously differentiable weakly convex functions, we also propose another extrapolated proximal subgradient algorithm with guaranteed convergence to a stronger notion of stationary points of the model problem. Finally, we illustrate the proposed methods by both analytical and simulated numerical examples. Copyright: © 2021 INFORMS.
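
    For intuition, the following Python sketch shows the extrapolation pattern the abstract cites, applied to a plain composite model (least squares plus an l1 term) rather than a fractional program. The FISTA-style parameter update is the one mentioned in the abstract; the problem, data, and step size are illustrative assumptions.

        import numpy as np

        def soft_threshold(v, t):
            # proximal map of t * ||.||_1
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def extrapolated_prox_grad(A, b, lam, n_iter=500):
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
            x = x_prev = np.zeros(A.shape[1])
            t = 1.0
            for _ in range(n_iter):
                t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
                beta = (t - 1) / t_next            # FISTA extrapolation parameter
                y = x + beta * (x - x_prev)        # extrapolated point
                grad = A.T @ (A @ y - b)
                x_prev, x = x, soft_threshold(y - grad / L, lam / L)
                t = t_next
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((40, 100))
        x_true = np.zeros(100)
        x_true[:5] = 1.0
        b = A @ x_true
        print(np.round(extrapolated_prox_grad(A, b, lam=0.1)[:8], 3))  # approximates the sparse signal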

    Efficient Semidefinite Branch-and-Cut for MAP-MRF Inference

    We propose a Branch-and-Cut (B&C) method for solving general MAP-MRF inference problems. The core of our method is a very efficient bounding procedure, which combines scalable semidefinite programming (SDP) and a cutting-plane method for seeking violated constraints. In order to further speed up the computation, several strategies have been exploited, including model reduction, warm start, and removal of inactive constraints. We analyze the performance of the proposed method under different settings and demonstrate that our method either outperforms or performs on par with state-of-the-art approaches. Especially when the connectivities are dense or when the relative magnitudes of the unary costs are low, we achieve the best reported results. Experiments show that the proposed algorithm achieves better approximation than the state-of-the-art methods within a variety of time budgets on challenging non-submodular MAP-MRF inference problems. Comment: 21 pages.
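
    As a sketch of the bounding step only, the standard semidefinite relaxation of a pairwise binary MRF written as max over x in {-1,1}^n of x'Wx can be posed as below. It assumes cvxpy and the SCS solver are available; the paper itself relies on a more scalable custom SDP procedure combined with cutting planes, which is not reproduced here.

        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(1)
        n = 8
        W = rng.standard_normal((n, n))
        W = (W + W.T) / 2                         # symmetric pairwise couplings

        # Relaxation: replace x x' by a PSD matrix X with unit diagonal.
        X = cp.Variable((n, n), symmetric=True)
        prob = cp.Problem(cp.Maximize(cp.trace(W @ X)),
                          [cp.diag(X) == 1, X >> 0])
        prob.solve(solver=cp.SCS)
        print("SDP upper bound:", prob.value)     # upper-bounds the discrete MAP value

        # Exhaustive check at this toy size: the bound dominates every labeling.
        best = max(x @ W @ x
                   for x in (np.array([int(b) * 2 - 1 for b in f"{m:0{n}b}"])
                             for m in range(2 ** n)))
        print("exact optimum :", best)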

    Standard Bundle Methods: Untrusted Models and Duality

    We review the basic ideas underlying the vast family of algorithms for nonsmooth convex optimization known as "bundle methods". In a nutshell, these approaches are based on constructing models of the function, but the lack of continuity of first-order information implies that these models cannot be trusted, not even close to an optimum. Therefore, many different forms of stabilization have been proposed to try to avoid being led to areas where the model is so inaccurate as to result in almost useless steps. In the development of these methods, duality arguments are useful, if not outright necessary, to better analyze the behaviour of the algorithms. Also, in many relevant applications the function at hand is itself a dual one, so that duality makes it possible to map algorithmic concepts and results back into a "primal space" where they can be exploited; in turn, structure in that space can be exploited to improve the algorithms' behaviour, e.g. by developing better models. We present an updated picture of the many developments around the basic idea along at least three different axes: the form of the stabilization, the form of the model, and the approximate evaluation of the function.
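
    To make the "untrusted model plus stabilization" idea concrete, here is a hedged Python sketch of a proximal-bundle loop on a toy nonsmooth function: the cutting-plane model is minimized together with a quadratic term that keeps the trial point near the stability center. The cvxpy master solve, descent-test constant, and toy objective are illustrative assumptions, not any specific method from the survey.

        import numpy as np
        import cvxpy as cp

        f  = lambda x: np.abs(x).sum()               # toy nonsmooth convex objective
        sg = lambda x: np.sign(x)                    # a subgradient of f

        def proximal_bundle(x0, mu=1.0, m=0.1, n_iter=20):
            center = x0.copy()
            cuts = [(f(center), sg(center), center.copy())]   # bundle of linear cuts
            for _ in range(n_iter):
                x, r = cp.Variable(len(x0)), cp.Variable()
                # master problem: cutting-plane model + proximal stabilization
                cons = [r >= fx + g @ (x - xi) for fx, g, xi in cuts]
                cp.Problem(cp.Minimize(r + mu / 2 * cp.sum_squares(x - center)),
                           cons).solve()
                trial = x.value
                cuts.append((f(trial), sg(trial), trial.copy()))
                # serious step only if enough of the predicted decrease is achieved;
                # otherwise a null step: keep the center, but the model improves
                if f(trial) <= f(center) - m * (f(center) - r.value):
                    center = trial
            return center

        print(np.round(proximal_bundle(np.array([3.0, -2.0])), 4))  # approaches the minimizer 0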

    Playing with Duality: An Overview of Recent Primal-Dual Approaches for Solving Large-Scale Optimization Problems

    Optimization methods are at the core of many problems in signal/image processing, computer vision, and machine learning. For a long time, it has been recognized that looking at the dual of an optimization problem may drastically simplify its solution. Deriving efficient strategies that jointly bring the primal and the dual problems into play is, however, a more recent idea which has generated many important new contributions in recent years. These novel developments are grounded in recent advances in convex analysis, discrete optimization, parallel processing, and nonsmooth optimization, with an emphasis on sparsity issues. In this paper, we present the principles of primal-dual approaches while giving an overview of the numerical methods that have been proposed in different contexts. We show the benefits that can be drawn from primal-dual algorithms for solving both large-scale convex optimization problems and discrete ones, and we provide various application examples to illustrate their usefulness.
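
    To illustrate the flavor of this family, below is a hedged sketch of a Chambolle-Pock-type primal-dual iteration on a toy 1-D total-variation denoising problem, min over x of lam*||Dx||_1 + 0.5*||x - b||^2. It is one representative of the surveyed class, not a method attributed to this paper; the problem, operator, and step sizes are assumptions chosen for the example.

        import numpy as np

        def tv_denoise_1d(b, lam=1.0, n_iter=500):
            n = len(b)
            D = np.diff(np.eye(n), axis=0)             # forward-difference operator K
            tau = sigma = 0.5                          # tau * sigma * ||K||^2 <= 1
            x = x_bar = b.copy()
            y = np.zeros(n - 1)                        # dual variable
            for _ in range(n_iter):
                # dual step: prox of the conjugate of lam*||.||_1 is a clip
                y = np.clip(y + sigma * (D @ x_bar), -lam, lam)
                # primal step: prox of 0.5*||. - b||^2 is a weighted average
                x_new = (x - tau * (D.T @ y) + tau * b) / (1 + tau)
                x_bar = 2 * x_new - x                  # extrapolation
                x = x_new
            return x

        rng = np.random.default_rng(2)
        b = np.concatenate([np.zeros(20), np.ones(20)]) + 0.1 * rng.standard_normal(40)
        print(np.round(tv_denoise_1d(b, lam=0.5)[18:22], 3))  # denoised values around the step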

    Combining Lagrangian Decomposition and Excessive Gap Smoothing Technique for Solving Large-Scale Separable Convex Optimization Problems

    A new algorithm for solving large-scale convex optimization problems with a separable objective function is proposed. The basic idea is to combine three techniques: Lagrangian dual decomposition, excessive gap, and smoothing. The main advantage of this algorithm is that it dynamically updates the smoothness parameters, which leads to numerically robust performance. The convergence of the algorithm is proved under weak conditions imposed on the original problem. The rate of convergence is O(1/k), where k is the iteration counter. In the second part of the paper, the algorithm is coupled with a dual scheme to construct a switching variant of dual decomposition. We discuss implementation issues and make a theoretical comparison. Numerical examples confirm the theoretical results. Comment: 29 pages, one figure.
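
    For background, here is a hedged sketch of the plain Lagrangian dual decomposition that the algorithm builds on: once the coupling constraint is dualized, the separable objective splits into independent subproblems, and the multiplier is updated by dual ascent. The quadratic subproblems, data, and step size are illustrative assumptions; the paper's excessive-gap smoothing and dynamic smoothness updates are not reproduced.

        import numpy as np

        a, c = np.array([1.0, 4.0]), np.array([3.0, 0.0])

        # minimize 0.5*||x1 - a||^2 + 0.5*||x2 - c||^2  subject to  x1 = x2
        lam = np.zeros(2)                  # multiplier for the coupling x1 - x2 = 0
        for _ in range(200):
            x1 = a - lam                   # closed-form argmin of subproblem 1 given lam
            x2 = c + lam                   # closed-form argmin of subproblem 2 given lam
            lam += 0.4 * (x1 - x2)         # dual gradient ascent on the multiplier
        print(x1, x2)                      # both approach (a + c) / 2 = [2., 2.]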