
    A Cycle-Based Formulation and Valid Inequalities for DC Power Transmission Problems with Switching

    It is well-known that optimizing network topology by switching transmission lines on and off improves the efficiency of power delivery in electrical networks. In fact, the U.S. Energy Policy Act of 2005 (Section 1223) states that the U.S. should "encourage, as appropriate, the deployment of advanced transmission technologies" including "optimized transmission line configurations". As such, many authors have studied the problem of determining an optimal set of transmission lines to switch off to minimize the cost of meeting a given power demand under the direct current (DC) model of power flow. This problem is known in the literature as the Direct-Current Optimal Transmission Switching Problem (DC-OTS). Most research on DC-OTS has focused on heuristic algorithms for generating quality solutions or on the application of DC-OTS to crucial operational and strategic problems such as contingency correction, real-time dispatch, and transmission expansion. The mathematical theory of the DC-OTS problem is less well-developed. In this work, we formally establish that DC-OTS is NP-Hard, even if the power network is a series-parallel graph with at most one generation/demand pair. Inspired by Kirchhoff's Voltage Law, we give a cycle-based formulation for DC-OTS, and we use the new formulation to build a cycle-induced relaxation. We characterize the convex hull of the cycle-induced relaxation, and the characterization provides strong valid inequalities that can be used in a cutting-plane approach to solve DC-OTS. We give details of a practical implementation, and we show promising computational results on standard benchmark instances.
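
    The cycle-based view is easy to state concretely: under the DC model, the angle drop across a closed line e is x_e * f_e (reactance times flow), and Kirchhoff's Voltage Law forces the oriented drops around any cycle of closed lines to sum to zero; switching any line in the cycle off removes that cycle's constraint. The following is a minimal Python sketch of this feasibility check only, not the paper's MILP formulation; the data layout is hypothetical.

```python
def kvl_cycle_residual(cycle):
    """Kirchhoff's Voltage Law residual around one cycle under the DC model.

    `cycle` is a list of (reactance, flow, orientation, closed) tuples, one
    per line, with orientation in {+1, -1} relative to a fixed traversal
    direction.  If every line in the cycle is closed, KVL forces the
    oriented reactance*flow drops to sum to zero; switching any line off
    breaks the cycle and removes the constraint.
    """
    if not all(closed for (_, _, _, closed) in cycle):
        return None  # cycle is broken: no KVL constraint to enforce
    return sum(o * x * f for (x, f, o, _) in cycle)

# toy 3-cycle whose drops 0.5 + 0.2 - 0.7 cancel: KVL-feasible
cycle = [(0.1, 5.0, +1, True), (0.2, 1.0, +1, True), (0.1, -7.0, +1, True)]
print(abs(kvl_cycle_residual(cycle)) < 1e-9)  # True: KVL holds
```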

    Dispersion for Data-Driven Algorithm Design, Online Learning, and Private Optimization

    Data-driven algorithm design, that is, choosing the best algorithm for a specific application, is a crucial problem in modern data science. Practitioners often optimize over a parameterized algorithm family, tuning parameters based on problems from their domain. These procedures have historically come with no guarantees, though a recent line of work studies algorithm selection from a theoretical perspective. We advance the foundations of this field in several directions: we analyze online algorithm selection, where problems arrive one-by-one and the goal is to minimize regret, and private algorithm selection, where the goal is to find good parameters over a set of problems without revealing sensitive information contained therein. We study important algorithm families, including SDP-rounding schemes for problems formulated as integer quadratic programs, and greedy techniques for canonical subset selection problems. In these cases, the algorithm's performance is a volatile and piecewise Lipschitz function of its parameters, since tweaking the parameters can completely change the algorithm's behavior. We give a sufficient and general condition, dispersion, defining a family of piecewise Lipschitz functions that can be optimized online and privately, which includes the functions measuring the performance of the algorithms we study. Intuitively, a set of piecewise Lipschitz functions is dispersed if no small region contains many of the functions' discontinuities. We present general techniques for online and private optimization of the sum of dispersed piecewise Lipschitz functions. We improve over the best-known regret bounds for a variety of problems, prove regret bounds for problems not previously studied, and give matching lower bounds. We also give matching upper and lower bounds on the utility loss due to privacy. Moreover, we uncover dispersion in auction design and pricing problems.
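
    To illustrate the kind of online optimization dispersion enables, the sketch below runs an exponentially weighted forecaster over a discretized parameter grid; discretization is representative precisely when discontinuities are dispersed, so no small interval carries many of them. This is a generic full-information sketch, not the authors' algorithm; the threshold-style utility functions and step size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def exp_weights_online(utilities, grid, eta):
    """Exponentially weighted forecaster over a discretized parameter grid.

    `utilities` is a list of per-round functions u_t(rho) in [0, 1].  Each
    round we sample a parameter from the current weight distribution, earn
    its utility, then update every grid point with full-information
    feedback.  Returns the total utility accumulated.
    """
    weights = np.ones(len(grid))
    total = 0.0
    for u in utilities:
        probs = weights / weights.sum()
        choice = rng.choice(len(grid), p=probs)
        total += u(grid[choice])
        u_vec = np.array([u(r) for r in grid])  # full-information update
        weights *= np.exp(eta * u_vec)
    return total

# toy piecewise Lipschitz utilities: one discontinuity each, spread out
grid = np.linspace(0.0, 1.0, 201)
thresholds = rng.uniform(0.2, 0.8, size=100)
utilities = [(lambda rho, c=c: float(rho >= c) * (1.0 - rho)) for c in thresholds]
print(exp_weights_online(utilities, grid, eta=0.5))
```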

    Submodularity in Action: From Machine Learning to Signal Processing Applications

    Submodularity is a discrete-domain functional property that can be interpreted as mimicking the role of the well-known convexity/concavity properties in the continuous domain. Submodular functions exhibit strong structure that leads to efficient optimization algorithms with provable near-optimality guarantees. These characteristics, namely, efficiency and provable performance bounds, are of particular interest for signal processing (SP) and machine learning (ML) practitioners, as a variety of discrete optimization problems are encountered in a wide range of applications. Conventionally, two general approaches exist to solve discrete problems: (i) relaxation into the continuous domain to obtain an approximate solution, or (ii) development of a tailored algorithm that applies directly in the discrete domain. In both approaches, worst-case performance guarantees are often hard to establish. Furthermore, the resulting algorithms are often complex and thus not practical for large-scale problems. In this paper, we show how certain scenarios lend themselves to exploiting submodularity so as to construct scalable solutions with provable worst-case performance guarantees. We introduce a variety of submodular-friendly applications, and elucidate the relation of submodularity to convexity and concavity which enables efficient optimization. With a mixture of theory and practice, we present different flavors of submodularity accompanying illustrative real-world case studies from modern SP and ML. In all cases, optimization algorithms are presented, along with hints on how optimality guarantees can be established.
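
    The canonical instance of such guarantees is the greedy algorithm for monotone submodular maximization under a cardinality constraint, which classically achieves at least a (1 - 1/e) fraction of the optimum. A minimal sketch on a set-coverage objective (toy data chosen for illustration):

```python
def greedy_max_coverage(sets, k):
    """Greedy maximization of a monotone submodular coverage function.

    Picks k sets, each time adding the one with the largest marginal gain
    in elements covered.  Coverage is monotone submodular, so this greedy
    rule guarantees at least (1 - 1/e) of the optimal coverage.
    """
    covered, chosen = set(), []
    for _ in range(k):
        best = max(sets, key=lambda s: len(set(s) - covered))
        chosen.append(best)
        covered |= set(best)
    return chosen, len(covered)

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
# greedy picks {4,5,6,7} (gain 4), then {1,2,3} (gain 3): 7 elements covered
print(greedy_max_coverage(sets, k=2))
```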

    Distributed top-k aggregation queries at large

    Top-k query processing is a fundamental building block for efficient ranking in a large number of applications. Efficiency is a central issue, especially in distributed settings, when the data is spread across different nodes in a network. This paper introduces novel optimization methods for top-k aggregation queries in such distributed environments. The optimizations can be applied to all algorithms that fall into the frameworks of the prior TPUT and KLEE methods. The optimizations address three degrees of freedom: 1) hierarchically grouping input lists into top-k operator trees and optimizing the tree structure, 2) computing data-adaptive scan depths for different input sources, and 3) data-adaptive sampling of a small subset of input sources in scenarios with hundreds or thousands of query-relevant network nodes. All optimizations are based on a statistical cost model that utilizes local synopses, e.g., in the form of histograms, efficiently computed convolutions, and estimators based on order statistics. The paper presents comprehensive experiments, with three different real-life datasets and using the ns-2 network simulator for a packet-level simulation of a large Internet-style network.
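
    The flavor of the TPUT-style thresholding that these optimizations build on can be sketched in a few lines: phase 1 computes partial sums from local top-k lists and derives a uniform per-node threshold, and phase 2 fetches every local score above it. The sketch below is a simplification; the real TPUT adds a third phase of exact per-item lookups, and none of the paper's cost-model optimizations appear here.

```python
def tput_sketch(node_scores, k):
    """Simplified two-phase sketch in the spirit of TPUT.

    `node_scores` is a list of per-node {item: score} dicts; an item's
    aggregate score is its sum over nodes.  Phase 1 collects local top-k
    lists and takes tau = k-th largest partial sum; phase 2 has each of
    the m nodes report every local score >= tau / m.  Phase-2 totals are
    lower bounds; TPUT's third phase would refine them to exact scores.
    """
    m = len(node_scores)
    partial = {}
    for scores in node_scores:  # phase 1: local top-k lists only
        top = sorted(scores.items(), key=lambda kv: -kv[1])[:k]
        for item, s in top:
            partial[item] = partial.get(item, 0.0) + s
    tau = sorted(partial.values(), reverse=True)[k - 1]
    totals = {}
    for scores in node_scores:  # phase 2: uniform threshold tau / m
        for item, s in scores.items():
            if s >= tau / m:
                totals[item] = totals.get(item, 0.0) + s
    return sorted(totals.items(), key=lambda kv: -kv[1])[:k]

nodes = [{'a': 9, 'b': 7, 'c': 1}, {'a': 2, 'b': 8, 'd': 6}, {'c': 5, 'b': 4, 'a': 3}]
print(tput_sketch(nodes, k=2))  # [('b', 19), ('a', 12)] -- lower-bound scores
```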

    A Global Optimisation Toolbox for Massively Parallel Engineering Optimisation

    A software platform for global optimisation, called PaGMO, has been developed within the Advanced Concepts Team (ACT) at the European Space Agency, and was recently released as an open-source project. PaGMO is built to tackle high-dimensional global optimisation problems, and it has been successfully used to find solutions to real-life engineering problems, among which the preliminary design of interplanetary spacecraft trajectories - both chemical (including multiple flybys and deep-space maneuvers) and low-thrust (limited, at the moment, to single-phase trajectories), the inverse design of nano-structured radiators, and the design of non-reactive controllers for planetary rovers. Featuring an arsenal of global and local optimisation algorithms (including genetic algorithms, differential evolution, simulated annealing, particle swarm optimisation, compass search, improved harmony search, and various interfaces to libraries for local optimisation such as SNOPT, IPOPT, GSL and NLopt), PaGMO is at its core a C++ library which employs an object-oriented architecture providing a clean and easily extensible optimisation framework. Adoption of multi-threaded programming ensures the efficient exploitation of modern multi-core architectures and allows for a straightforward implementation of the island model paradigm, in which multiple populations of candidate solutions asynchronously exchange information in order to speed up and improve the optimisation process. In addition to the C++ interface, PaGMO's capabilities are exposed to the high-level language Python, so that it is possible to easily use PaGMO in an interactive session and take advantage of the numerous scientific Python libraries available. (To be presented at ICATT 2010: International Conference on Astrodynamics Tools and Techniques.)
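
    The island model paradigm itself is simple to sketch: several populations evolve in isolation and periodically exchange their best candidates. The stand-alone Python sketch below uses a crude mutation-based evolver and synchronous ring migration purely to show the structure; PaGMO's actual implementation is multi-threaded and asynchronous, with far richer algorithms.

```python
import random

random.seed(42)

def sphere(x):
    """Toy objective: sum of squares, minimized at the origin."""
    return sum(v * v for v in x)

def evolve(pop, steps, sigma=0.3):
    """Crude hill climbing on one island: mutate the champion, replace the worst."""
    for _ in range(steps):
        parent = min(pop, key=sphere)
        child = [v + random.gauss(0.0, sigma) for v in parent]
        worst = max(range(len(pop)), key=lambda i: sphere(pop[i]))
        if sphere(child) < sphere(pop[worst]):
            pop[worst] = child
    return pop

# island model: isolated evolution punctuated by ring migration of champions
islands = [[[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
           for _ in range(4)]
for epoch in range(20):
    for pop in islands:
        evolve(pop, steps=50)
    champs = [min(pop, key=sphere) for pop in islands]
    for i, pop in enumerate(islands):  # each island receives its neighbour's best
        worst = max(range(len(pop)), key=lambda j: sphere(pop[j]))
        pop[worst] = list(champs[(i - 1) % len(islands)])

print(min(sphere(min(pop, key=sphere)) for pop in islands))
```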

    Algorithms for weighted multidimensional search and perfect phylogeny

    This dissertation is a collection of papers from two independent areas: convex optimization problems in R^d and the construction of evolutionary trees. The paper on convex optimization problems in R^d gives improved algorithms for solving the Lagrangian duals of problems that have both of the following properties. First, in the absence of the bad constraints, the problems can be solved in strongly polynomial time by combinatorial algorithms. Second, the number of bad constraints is fixed. As part of our solution to these problems, we extend Cole's circuit simulation approach and develop a weighted version of Megiddo's multidimensional search technique. The papers on evolutionary tree construction deal with the perfect phylogeny problem, where species are specified by a set of characters and each character can occur in a species in one of a fixed number of states. This problem is known to be NP-complete. The dissertation contains the following results on the perfect phylogeny problem: (1) a linear time algorithm when all the characters have two states, (2) a polynomial time algorithm when the number of character states is fixed, and (3) a polynomial time algorithm when the number of characters is fixed.
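
    For the two-state case, the classical splits-equivalence (four-gamete) characterization makes existence easy to test: a binary character matrix admits a perfect phylogeny if and only if no pair of characters exhibits all four patterns 00, 01, 10, 11 across the species. A simple quadratic-time sketch of that test follows; the dissertation's algorithm achieves linear time.

```python
from itertools import combinations

def admits_binary_perfect_phylogeny(matrix):
    """Four-gamete test for binary (two-state) character matrices.

    A perfect phylogeny exists iff every pair of columns shows at most
    three of the four gametes 00, 01, 10, 11 over the species (rows).
    This pairwise check is O(n * m^2) for n species and m characters.
    """
    n_chars = len(matrix[0])
    for i, j in combinations(range(n_chars), 2):
        gametes = {(row[i], row[j]) for row in matrix}
        if len(gametes) == 4:
            return False  # columns i, j are incompatible
    return True

species = [
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
]
print(admits_binary_perfect_phylogeny(species))  # True
```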

    Higher-order cover cuts from zero–one knapsack constraints augmented by two-sided bounding inequalities

    Extending our work on second-order cover cuts [F. Glover, H.D. Sherali, Second-order cover cuts, Mathematical Programming (2007), doi:10.1007/s10107-007-0098-4], we introduce a new class of higher-order cover cuts that are derived from the implications of a knapsack constraint in concert with supplementary two-sided inequalities that bound the sums of sets of variables. The new cuts can be appreciably stronger than the second-order cuts, which in turn dominate the classical knapsack cover inequalities. The process of generating these cuts makes it possible to sequentially utilize the second-order cuts by embedding them in systems that define the inequalities from which the higher-order cover cuts are derived. We characterize properties of these cuts, design specialized procedures to generate them, and establish associated dominance relationships. These results are used to devise an algorithm that generates all non-dominated higher-order cover cuts, and, in particular, to formulate and solve suitable separation problems for deriving a higher-order cut that deletes a given fractional solution to an underlying continuous relaxation. We also discuss a lifting procedure for further tightening any generated cut, and establish its polynomial-time operation for unit-coefficient cuts. A numerical example is presented that illustrates these procedures and the relative strength of the generated non-redundant, non-dominated higher-order cuts, all of which turn out to be facet-defining for this example. Some preliminary computational results are also presented to demonstrate the efficacy of these cuts in comparison with lifted minimal cover inequalities for the underlying knapsack polytope.
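
    For orientation, the classical knapsack cover inequalities that these higher-order cuts dominate can be separated greedily: for a knapsack constraint sum_j a_j x_j <= b, any cover C with sum_{j in C} a_j > b yields the valid cut sum_{j in C} x_j <= |C| - 1. A sketch of that baseline separation on toy data (not the paper's higher-order procedure):

```python
def separate_cover_cut(a, b, x_star):
    """Greedy separation of a classical knapsack cover inequality.

    To cut off a fractional point x_star, we greedily build a cover C
    minimizing sum_{j in C} (1 - x_star[j]); the resulting inequality
    sum_{j in C} x[j] <= |C| - 1 is violated iff that sum is below 1.
    Returns (cover, violation) or None if no violated cut is found.
    """
    order = sorted(range(len(a)), key=lambda j: (1.0 - x_star[j]) / a[j])
    cover, weight = [], 0.0
    for j in order:
        cover.append(j)
        weight += a[j]
        if weight > b:
            break
    if weight <= b:
        return None  # no cover exists at all
    violation = sum(x_star[j] for j in cover) - (len(cover) - 1)
    return (cover, violation) if violation > 1e-9 else None

a, b = [5, 6, 4, 3], 10
x_star = [0.9, 0.8, 0.6, 0.0]
print(separate_cover_cut(a, b, x_star))  # ([0, 1], 0.7): cut x0 + x1 <= 1
```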

    Parallel Genetic Algorithms with GPU Computing

    Genetic algorithms (GAs) are powerful solutions to optimization problems arising in manufacturing and logistics. They help to find better solutions for complex and difficult cases that are hard to solve with exact optimization methods. Accelerating parallel GAs with GPU computing has received significant attention from both practitioners and researchers ever since the emergence of GPU-CPU heterogeneous architectures. Designing a parallel algorithm on a GPU is fundamentally different from designing one on a CPU: on CPU architectures, data or tasks are typically distributed across tens of threads or processes, while on GPU architectures hundreds of thousands of threads run. To fully utilize the computing power of GPUs, the design approaches and implementation strategies of parallel GAs need to be re-examined. In this chapter, a concise overview of parallel GAs on GPUs is given from the perspective of GPU architecture. The concept of parallelism granularity is redefined, the effect of data layout on kernel performance is discussed, and the hierarchy of threads is examined with respect to how threads are organized in grids and blocks to expose sufficient parallelism to the GPU. Some future research directions are discussed. A hybrid parallel model, based on the features of GPU architecture, is suggested for building efficient parallel GAs for hyper-scale problems.
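
    The "one thread per individual" layout is easiest to see in vectorized form: a whole generation is scored and bred as array operations with no per-individual loop, with individuals stored as rows of one contiguous array. The NumPy sketch below is a CPU stand-in for those GPU kernels (illustrative only; a real CUDA implementation would organize the same work into thread blocks as the chapter describes).

```python
import numpy as np

rng = np.random.default_rng(1)

def evaluate_population(pop):
    """Score every individual in one vectorized call (sphere fitness, minimized).

    On a GPU this is the natural one-thread-per-individual kernel.
    """
    return np.sum(pop ** 2, axis=1)

def step(pop, fitness, mutation=0.1):
    """One generation: vectorized binary tournaments plus Gaussian mutation."""
    n = len(pop)
    i, j = rng.integers(0, n, size=n), rng.integers(0, n, size=n)
    winners = np.where((fitness[i] < fitness[j])[:, None], pop[i], pop[j])
    return winners + rng.normal(0.0, mutation, size=pop.shape)

pop = rng.uniform(-5, 5, size=(1024, 16))  # 1024 individuals, 16 genes each
for _ in range(100):
    pop = step(pop, evaluate_population(pop))
print(evaluate_population(pop).min())
```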

    Parallelization of Ant Colony Optimization via Area of Expertise Learning

    Ant colony optimization algorithms have long been touted as providing an effective and efficient means of generating high quality solutions to NP-hard optimization problems. Unfortunately, while the structure of the algorithm is easy to parallelize, the nature and amount of communication required for parallel execution has meant that the parallel implementations developed to date suffer from decreased solution quality, slower runtime performance, or both. This thesis explores a new strategy for ant colony parallelization that involves Area of Expertise (AOE) learning. The AOE concept is based on the idea that individual agents tend to gain knowledge of different areas of the search space when left to their own devices. After developing a sense of their own expertness on a portion of the problem domain, agents share information and incorporate knowledge from other agents without having to experience it first-hand. This thesis shows that when incorporated within parallel ACO and applied to multi-objective environments such as a gridworld, the use of AOE learning can be an effective and efficient means of coordinating the efforts of multiple ant colony agents working in tandem, resulting in increased performance. Based on the success of the AOE/ACO combination in gridworld, a similar configuration is applied to the single-objective traveling salesman problem. Yet while it was hoped that AOE learning would allow for a fast and beneficial sharing of knowledge between colonies, this goal was not achieved, despite the efforts detailed within. The reason for this lack of performance is the nature of the TSP, whose single-objective landscape discourages colonies from learning unique portions of the search space. Without this specialization, AOE was found to make parallel ACO faster than the use of a single large colony but less efficient than multiple independent colonies.
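
    One way to picture AOE-style sharing is a hypothetical sketch like the following, which is not the thesis's exact mechanism: each colony keeps a per-region expertness score alongside its pheromone matrix, and the merged pheromones lean toward whichever colony claims the most expertise for each entry, so knowledge is absorbed without being re-experienced first-hand.

```python
import numpy as np

def share_pheromones(taus, expertness):
    """Expertness-weighted pheromone merge across colonies (a sketch).

    `taus` is a list of per-colony pheromone matrices over the same graph,
    and `expertness` a list of nonnegative per-entry confidence scores of
    the same shape.  Each entry of the merged matrix is a convex
    combination of the colonies' values, weighted by relative expertness.
    """
    taus = np.stack(taus)          # shape: (colonies, n, n)
    w = np.stack(expertness)
    w = w / w.sum(axis=0, keepdims=True)
    return (w * taus).sum(axis=0)  # one blended matrix shared by all colonies

tau_a = np.array([[0.0, 2.0], [2.0, 0.0]])
tau_b = np.array([[0.0, 0.5], [0.5, 0.0]])
exp_a = np.array([[1.0, 3.0], [3.0, 1.0]])  # colony A claims expertise off-diagonal
exp_b = np.ones((2, 2))
print(share_pheromones([tau_a, tau_b], [exp_a, exp_b]))  # off-diagonal -> 1.625
```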

    A New General-Purpose Algorithm for Mixed-Integer Bilevel Linear Programs

    Bilevel optimization problems are very challenging optimization models arising in many important practical contexts, including pricing mechanisms in the energy sector, the airline and telecommunication industries, transportation networks, critical infrastructure defense, and machine learning. In this paper, we consider bilevel programs with continuous and discrete variables at both levels, with linear objectives and constraints (continuous upper-level variables, if any, must not appear in the lower-level problem). We propose a general-purpose branch-and-cut exact solution method based on several new classes of valid inequalities, which also exploits a very effective bilevel-specific preprocessing procedure. An extensive computational study is presented to evaluate the performance of various solution methods on a common testbed of more than 800 instances from the literature and 60 randomly generated instances. Our new algorithm consistently outperforms (often by a large margin) alternative state-of-the-art methods from the literature, including methods exploiting problem-specific information for special instance classes. In particular, it solves to optimality more than 300 previously unsolved instances from the literature. To foster research on this challenging topic, our solver is made publicly available online.
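
    The structural difficulty these methods attack is visible even in a brute-force baseline: the leader may only use pairs (x, y) in which y is optimal for the follower's own problem, not merely feasible. A tiny enumerative sketch on a toy instance (the paper's branch-and-cut replaces this enumeration with valid inequalities):

```python
def solve_bilevel_brute_force(leader_choices, follower_choices,
                              follower_obj, follower_feasible, leader_obj):
    """Enumerative baseline for a tiny discrete bilevel program.

    For each leader decision x, the follower reacts with its own optimal
    y; the leader is then scored on the pair (x, y).  Minimizes the
    leader objective over follower-optimal responses only.
    """
    best = None
    for x in leader_choices:
        ys = [y for y in follower_choices if follower_feasible(x, y)]
        if not ys:
            continue
        y = min(ys, key=lambda y: follower_obj(x, y))  # follower's best response
        val = leader_obj(x, y)
        if best is None or val < best[0]:
            best = (val, x, y)
    return best

# toy: leader minimizes x + y, follower maximizes y subject to x + y <= 3;
# the follower forces y = 3 - x, so the leader can never do better than 3
print(solve_bilevel_brute_force(
    leader_choices=range(4), follower_choices=range(4),
    follower_obj=lambda x, y: -y,
    follower_feasible=lambda x, y: x + y <= 3,
    leader_obj=lambda x, y: x + y))  # (3, 0, 3)
```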