
    Separable Convex Optimization with Nested Lower and Upper Constraints

    We study a convex resource allocation problem in which lower and upper bounds are imposed on partial sums of allocations. This model is linked to a wide range of applications, including production planning, speed optimization, stratified sampling, support vector machines, portfolio management, and telecommunications. We propose an efficient gradient-free divide-and-conquer algorithm, which uses monotonicity arguments to generate valid bounds from the recursive calls and to eliminate linking constraints based on the information from sub-problems. This algorithm does not need strict convexity or differentiability. It produces an $\epsilon$-approximate solution for the continuous problem in $\mathcal{O}(n \log m \log \frac{nB}{\epsilon})$ time and an integer solution in $\mathcal{O}(n \log m \log B)$ time, where $n$ is the number of decision variables, $m$ is the number of constraints, and $B$ is the resource bound. A complexity of $\mathcal{O}(n \log m)$ is also achieved for the linear and quadratic cases. These are the best complexities known to date for this important problem class. Our experimental analyses confirm the good performance of the method, which produces optimal solutions for problems with up to 1,000,000 variables in a few seconds. Promising applications to the support vector ordinal regression problem are also investigated.
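    The nested structure referenced above amounts to a separable objective with lower and upper bounds on prefix sums of the variables. The sketch below is not the paper's divide-and-conquer algorithm; it only sets up a tiny instance of that problem shape and hands it to a generic solver (SciPy's SLSQP). The quadratic objective, the data, and the variable names are illustrative assumptions.

```python
# Minimal sketch of a separable convex problem with nested (prefix-sum) bounds,
# solved with a generic solver rather than the specialized divide-and-conquer
# method described in the abstract. Objective and data are illustrative.
import numpy as np
from scipy.optimize import minimize

n = 6                                            # number of decision variables
w = np.array([1.0, 2.0, 0.5, 1.5, 1.0, 3.0])     # separable cost weights
lo = np.array([0.5, 1.0, 2.0, 2.5, 3.0, 4.0])    # lower bounds on prefix sums
up = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 6.0])    # upper bounds on prefix sums

def objective(x):
    # separable convex cost: sum_i w_i * x_i^2
    return np.sum(w * x**2)

constraints = []
for j in range(n):
    # lo[j] <= x_1 + ... + x_{j+1} <= up[j], written as two inequalities
    constraints.append({'type': 'ineq', 'fun': lambda x, j=j: np.sum(x[:j+1]) - lo[j]})
    constraints.append({'type': 'ineq', 'fun': lambda x, j=j: up[j] - np.sum(x[:j+1])})

res = minimize(objective, x0=np.ones(n), method='SLSQP',
               bounds=[(0.0, None)] * n, constraints=constraints)
print(res.x, res.fun)
```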

    Decomposition Methods for Nonlinear Optimization and Data Mining

    We focus on two central themes in this dissertation. The first one is on decomposing polytopes and polynomials in ways that allow us to perform nonlinear optimization. We start off by explaining important results on decomposing a polytope into special polyhedra. We use these decompositions and develop methods for computing a special class of integrals exactly. Namely, we are interested in computing the exact value of integrals of polynomial functions over convex polyhedra. We present prior work and new extensions of the integration algorithms. Every integration method we present requires that the polynomial has a special form. We explore two special polynomial decomposition algorithms that are useful for integrating polynomial functions. Both polynomial decompositions have strengths and weaknesses, and we experiment with how to practically use them. After developing practical algorithms and efficient software tools for integrating a polynomial over a polytope, we focus on the problem of maximizing a polynomial function over the continuous domain of a polytope. This maximization problem is NP-hard, but we develop approximation methods that run in polynomial time when the dimension is fixed. Moreover, our algorithm for approximating the maximum of a polynomial over a polytope is related to integrating the polynomial over the polytope. We show how the integration methods can be used for optimization. The second central topic in this dissertation is on problems in data science. We first consider a heuristic for mixed-integer linear optimization. We show how many practical mixed-integer linear programs have a special substructure containing set partition constraints. We then describe a nice data structure for finding feasible zero-one integer solutions to systems of set partition constraints. Finally, we end with an applied project using data science methods in medical research. Comment: Ph.D. thesis of Brandon Dutr
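    As a small concrete instance of the kind of exact integration discussed above, the classical Dirichlet-integral formula gives the integral of a monomial over the standard simplex in closed form. The sketch below evaluates that textbook special case and checks it by Monte Carlo; it is not the dissertation's general decomposition machinery, and the function names are illustrative.

```python
# Exact integral of a monomial x1^a1 * ... * xd^ad over the standard simplex
# {x >= 0, sum(x) <= 1}, via the classical Dirichlet-integral formula
#   integral = (prod_i a_i!) / (d + sum_i a_i)!
# This is a textbook special case, not the dissertation's algorithms; a quick
# Monte Carlo check is included for sanity.
import math
import numpy as np

def monomial_over_simplex(exponents):
    a = list(exponents)
    d = len(a)
    num = math.prod(math.factorial(ai) for ai in a)
    return num / math.factorial(d + sum(a))

def monte_carlo_check(exponents, samples=200_000, seed=0):
    a = np.array(exponents)
    d = len(a)
    rng = np.random.default_rng(seed)
    pts = rng.random((samples, d))           # uniform points in the unit cube
    inside = pts.sum(axis=1) <= 1.0          # keep those inside the simplex
    vals = np.prod(pts[inside] ** a, axis=1)
    # estimate = (mean over simplex points) * volume of simplex, volume = 1/d!
    return vals.mean() / math.factorial(d)

print(monomial_over_simplex((1, 1)))         # exact: 1/24 ~= 0.0416667
print(monte_carlo_check((1, 1)))             # should be close to 1/24
```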

    Proceedings of the XIII Global Optimization Workshop: GOW'16

    [Excerpt] Preface: Past Global Optimization Workshops have been held in Sopron (1985 and 1990), Szeged (WGO, 1995), Florence (GO’99, 1999), Hanmer Springs (Let’s GO, 2001), Santorini (Frontiers in GO, 2003), San José (Go’05, 2005), Mykonos (AGO’07, 2007), Skukuza (SAGO’08, 2008), Toulouse (TOGO’10, 2010), Natal (NAGO’12, 2012) and Málaga (MAGO’14, 2014) with the aim of stimulating discussion between senior and junior researchers on the topic of Global Optimization. In 2016, the XIII Global Optimization Workshop (GOW’16) takes place in Braga and is organized by three researchers from the University of Minho. Two of them belong to the Systems Engineering and Operational Research Group from the Algoritmi Research Centre and the other to the Statistics, Applied Probability and Operational Research Group from the Centre of Mathematics. The event received more than 50 submissions from 15 countries in Europe, South America and North America. We want to express our gratitude to the invited speaker Panos Pardalos for accepting the invitation and sharing his expertise, helping us to meet the workshop objectives. GOW’16 would not have been possible without the valuable contributions of the authors and the International Scientific Committee members. We thank you all. This proceedings book intends to present an overview of the topics that will be addressed in the workshop, with the goal of contributing to interesting and fruitful discussions between the authors and participants. After the event, high-quality papers can be submitted to a special issue of the Journal of Global Optimization dedicated to the workshop. [...]

    An overview of population-based algorithms for multi-objective optimisation

    In this work we present an overview of the most prominent population-based algorithms and the methodologies used to extend them to multiple objective problems. Although not exact in the mathematical sense, population-based multi-objective optimisation techniques have long been recognised as immensely valuable and versatile for real-world applications. These techniques are usually employed when exact optimisation methods are not easily applicable, or when, due to sheer complexity, exact methods would be prohibitively costly. Another advantage is that, since a population of decision vectors is considered in each generation, these algorithms are implicitly parallelisable and can generate an approximation of the entire Pareto front at each iteration. A critique of their capabilities is also provided.
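    To make the population-based view above concrete, the following sketch extracts the non-dominated (Pareto) subset from a population of objective vectors under minimisation. It is a generic per-generation filtering helper, not taken from any specific algorithm surveyed in the paper, and the sample population is invented.

```python
# Minimal sketch: extract the non-dominated (Pareto) front from a population of
# objective vectors, assuming all objectives are to be minimised. This is the
# kind of filtering step population-based multi-objective algorithms rely on;
# it is illustrative only.
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated rows of an (n_points, n_objectives) array."""
    F = np.asarray(objectives, dtype=float)
    nondominated = []
    for i in range(F.shape[0]):
        # point j dominates i if it is no worse in every objective and
        # strictly better in at least one
        dominated = np.any(np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1))
        if not dominated:
            nondominated.append(i)
    return nondominated

population = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
print(pareto_front(population))   # [0, 1, 3]; the point [3, 3] is dominated by [2, 2]
```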

    A two-phase gradient method for quadratic programming problems with a single linear constraint and bounds on the variables

    We propose a gradient-based method for quadratic programming problems with a single linear constraint and bounds on the variables. Inspired by the GPCG algorithm for bound-constrained convex quadratic programming [J.J. Moré and G. Toraldo, SIAM J. Optim. 1, 1991], our approach alternates between two phases until convergence: an identification phase, which performs gradient projection iterations until either a candidate active set is identified or no reasonable progress is made, and an unconstrained minimization phase, which reduces the objective function in a suitable space defined by the identification phase by applying either the conjugate gradient method or a recently proposed spectral gradient method. However, the algorithm differs from GPCG not only because it deals with a more general class of problems, but mainly in the way it stops the minimization phase. The stopping rule is based on a comparison between a measure of optimality in the reduced space and a measure of bindingness of the variables on the bounds, obtained by extending the concept of proportioning, which was proposed by some authors for box-constrained problems. If the objective function is bounded, the algorithm converges to a stationary point thanks to a suitable application of the gradient projection method in the identification phase. For strictly convex problems, the algorithm converges to the optimal solution in a finite number of steps even in the case of degeneracy. Extensive numerical experiments show the effectiveness of the proposed approach. Comment: 30 pages, 17 figures
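    The gradient projection iterations in the identification phase require projecting a trial point onto the feasible set, i.e. onto the intersection of a single linear equality constraint with box bounds. The sketch below shows one standard way to compute that projection by bisection on the constraint's multiplier; it is a generic routine, not the authors' implementation, and the tolerance, bracketing interval, and example data are arbitrary choices.

```python
# Sketch: Euclidean projection of y onto {x : a^T x = b, l <= x <= u} by
# bisection on the scalar multiplier lam. The projection has the form
# x_i(lam) = clip(y_i - lam * a_i, l_i, u_i), and a^T x(lam) is non-increasing
# in lam, so bisection locates the multiplier. Generic helper, not the paper's
# two-phase algorithm; assumes b is attainable inside the box, otherwise the
# bracketing loops below would not terminate.
import numpy as np

def project_onto_constraint(y, a, b, l, u, tol=1e-10, max_iter=200):
    def x_of(lam):
        return np.clip(y - lam * a, l, u)

    # bracket a root of g(lam) = a^T x(lam) - b
    lo, hi = -1.0, 1.0
    while a @ x_of(lo) < b:
        lo *= 2.0
    while a @ x_of(hi) > b:
        hi *= 2.0

    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if a @ x_of(mid) > b:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return x_of(0.5 * (lo + hi))

y = np.array([0.9, 0.4, -0.2, 0.7])
a = np.ones(4)                      # single linear constraint: sum(x) = 1
x = project_onto_constraint(y, a, 1.0, l=np.zeros(4), u=np.ones(4))
print(x, x.sum())                   # feasible point closest to y, sum ~= 1.0
```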

    Parallel decomposition methods for linearly constrained problems subject to simple bound with application to the SVMs training

    We consider convex quadratic linearly constrained problems with bounded variables and a huge, dense Hessian matrix, which arise in many applications such as the training problem of bias support vector machines. We propose a decomposition algorithmic scheme suitable for parallel implementation and we prove global convergence under suitable conditions. Focusing on support vector machine training, we outline how these assumptions can be satisfied in practice and we suggest various specific implementations. Extensions of the theoretical results to general linearly constrained problems are provided. We include numerical results on support vector machines to show the viability and effectiveness of the proposed scheme.
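    To illustrate the decomposition idea in its simplest serial form, the sketch below runs a working-set loop on a small SVM-like quadratic program: at each iteration a small block of variables is re-optimised while the rest stay fixed, with the single equality constraint enforced inside each subproblem. This is not the parallel scheme or the working-set selection rules of the paper; the data, the cyclic block choice, and the subproblem solver are assumptions.

```python
# Sketch of a serial working-set decomposition loop for an SVM-like QP:
#   min 0.5 * x^T Q x - sum(x)   s.t.  y^T x = 0,  0 <= x <= C.
# Each outer pass re-optimises one block of variables with the others fixed,
# keeping the equality constraint satisfied. Illustrative only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, C = 12, 1.0
A = rng.standard_normal((n, n))
Q = A @ A.T + 1e-3 * np.eye(n)          # dense positive definite "kernel" matrix
y = np.where(rng.random(n) < 0.5, -1.0, 1.0)

def full_objective(x):
    return 0.5 * x @ Q @ x - x.sum()

x = np.zeros(n)                          # feasible start: y^T x = 0, 0 <= x <= C
block_size = 4
for it in range(20):
    for start in range(0, n, block_size):             # cyclic working sets
        W = np.arange(start, min(start + block_size, n))
        F = np.setdiff1d(np.arange(n), W)
        rhs = -y[F] @ x[F]                             # y_W^T x_W must equal rhs

        def sub_obj(xw):
            z = x.copy()
            z[W] = xw
            return full_objective(z)

        res = minimize(sub_obj, x[W], method='SLSQP',
                       bounds=[(0.0, C)] * len(W),
                       constraints=[{'type': 'eq',
                                     'fun': lambda xw: y[W] @ xw - rhs}])
        x[W] = res.x

print(full_objective(x), abs(y @ x))     # objective value and constraint residual
```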

    A Polyhedral Study of Mixed 0-1 Set

    We consider a variant of the well-known single-node fixed-charge network flow set with constant capacities. This set arises from the relaxation of more general mixed integer sets such as lot-sizing problems with multiple suppliers. We provide a complete polyhedral characterization of the convex hull of the given set.
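    For context, the classical single-node fixed-charge flow set referred to above is usually written, with constant capacity $u$, as $X = \{(x, y) \in \mathbb{R}_{+}^{n} \times \{0,1\}^{n} : \sum_{j=1}^{n} x_j \le b,\ x_j \le u\, y_j,\ j = 1, \dots, n\}$; the variant studied in the paper may differ in its exact form.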

    Adaptable optimization : theory and algorithms

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 189-200).

    Optimization under uncertainty is a central ingredient for analyzing and designing systems with incomplete information. This thesis addresses uncertainty in optimization, in a dynamic framework where information is revealed sequentially and future decisions are adaptable, i.e., they depend functionally on the information revealed in the past. Such problems arise in applications where actions are repeated over a time horizon (e.g., portfolio management or dynamic scheduling problems) or that have multiple planning stages (e.g., network design). The first part of the thesis focuses on the robust optimization approach to systems with uncertainty. Unlike the probability-driven stochastic programming approach, robust optimization is built on deterministic set-based formulations of uncertainty. This thesis seeks to place robust optimization within a dynamic framework. In particular, we introduce the notion of finite adaptability. Using geometric results, we characterize the benefits of adaptability and use these theoretical results to design efficient algorithms for finding near-optimal protocols. Among the novel contributions of the work are the capacity to accommodate discrete variables and the development of a hierarchy of adaptability.

    The second part of the thesis takes a data-driven view of uncertainty. The central questions are (a) how can we construct adaptability in multi-stage optimization problems given only data, and (b) what feasibility guarantees can we provide. Multi-stage stochastic optimization typically requires exponentially many data points. Robust optimization, on the other hand, has a very limited ability to address multi-stage optimization in an adaptable manner. We present a hybrid sample-based robust optimization methodology for constructing adaptability in multi-stage optimization problems that is both tractable and flexible, offering a hierarchy of adaptability. We prove polynomial upper bounds on sample complexity. We further extend our results to multi-stage problems with integer variables in the future stages. We illustrate the ideas above on several problems in network design and portfolio optimization.

    The last part of the thesis focuses on an application of adaptability, in particular the ideas of finite adaptability from the first part of the thesis, to the problem of air traffic control. The main problem is to sequentially schedule the departures, routes, ground-holding, and air-holding for every flight over the national airspace (NAS). The schedule seeks to minimize the aggregate delay incurred, while satisfying capacity constraints that specify the maximum number of flights that can take off or land at a particular airport, or fly over the same sector of the NAS at any given time. These capacities are impacted by the weather conditions. Since we receive an initial weather forecast and then updates throughout the day, we naturally have a multistage optimization problem with sequentially revealed uncertainty. We show that finite adaptability is natural, since the scheduling problem is inherently finite and the uncertainty set is low-dimensional. We illustrate both the applicability of finite adaptability and its effectiveness through several examples.

    By Constantine Caramanis. Ph.D.
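    The set-based treatment of uncertainty described above can be illustrated with a very small example: for a linear constraint whose coefficients lie in an interval (box) uncertainty set and nonnegative variables, the robust counterpart simply tightens each coefficient to its worst-case value. The sketch below compares a nominal and a robust solution of a toy LP with SciPy; the data are invented, the example ignores adaptability (it is single-stage), and it is not drawn from the thesis.

```python
# Toy illustration of set-based (robust) uncertainty: for a constraint
# a^T x <= b with a_i in [abar_i - d_i, abar_i + d_i] and x >= 0, the robust
# counterpart is (abar + d)^T x <= b, i.e. each coefficient takes its worst
# case. Data are invented; single-stage only, no adaptability.
import numpy as np
from scipy.optimize import linprog

c = np.array([-3.0, -2.0])          # maximize 3*x1 + 2*x2 (linprog minimizes)
abar = np.array([2.0, 1.0])         # nominal constraint coefficients
d = np.array([0.5, 0.3])            # interval half-widths of the uncertainty
b = 10.0

nominal = linprog(c, A_ub=[abar], b_ub=[b], bounds=[(0, None)] * 2)
robust = linprog(c, A_ub=[abar + d], b_ub=[b], bounds=[(0, None)] * 2)

print("nominal x:", nominal.x, "objective:", -nominal.fun)
print("robust  x:", robust.x, "objective:", -robust.fun)
# The robust solution gives up some nominal objective value in exchange for
# feasibility under every coefficient realization in the box.
```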