2,987 research outputs found

    Making Robust Decisions in Discrete Optimization Problems as a Game against Nature

    In this paper, a discrete optimization problem under uncertainty is discussed. Solving such a problem can be seen as a game against nature. In order to choose a solution, the minmax and minmax regret criteria can be applied. This paper proposes an extension of the known minmax (regret) approach and shows how different types of uncertainty can be taken into account simultaneously. Some exact and approximation algorithms for choosing a best solution are constructed.
    Keywords: discrete optimization, minmax, minmax regret, game against nature
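
    The two classical criteria can be sketched in a few lines. Below is a minimal illustration, assuming a finite scenario-based model where cost[x][s] is the cost of solution x under nature's scenario s; all names and numbers are illustrative, not taken from the paper.

        def minmax(cost):
            # Pick the solution whose worst-case cost over all scenarios is smallest.
            return min(range(len(cost)), key=lambda x: max(cost[x]))

        def minmax_regret(cost):
            # Regret of x under scenario s: cost[x][s] minus the best cost
            # achievable in s. Pick the solution minimizing the worst regret.
            scenarios = range(len(cost[0]))
            best = [min(row[s] for row in cost) for s in scenarios]
            return min(range(len(cost)),
                       key=lambda x: max(cost[x][s] - best[s] for s in scenarios))

        cost = [[4, 10], [6, 7], [9, 5]]   # 3 candidate solutions, 2 scenarios
        print(minmax(cost))                # -> 1 (worst-case cost 7)
        print(minmax_regret(cost))         # -> 1 (worst regret 2)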

    On Correcting Inputs: Inverse Optimization for Online Structured Prediction

    Algorithm designers typically assume that the input data is correct, and then proceed to find "optimal" or "sub-optimal" solutions using this input data. However, this assumption of correct data does not always hold in practice, especially in the context of online learning systems whose objective is to learn appropriate feature weights from training samples. Such scenarios motivate the study of inverse optimization problems, where one is given an input instance as well as a desired output, and the task is to adjust the input data so that the given output is indeed optimal. Motivated by learning structured prediction models, in this paper we consider inverse optimization with a margin, i.e., we require the given output to be better than all other feasible outputs by a desired margin. We consider such inverse optimization problems for maximum weight matroid basis, matroid intersection, perfect matchings, minimum cost maximum flows, and shortest paths, and derive the first known results for such problems with a non-zero margin. The effectiveness of these algorithmic approaches to online learning for structured prediction is also discussed.
    Comment: Conference version to appear in FSTTCS, 201
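
    To make the learning motivation concrete, the margin requirement can be sketched as a generic online update: if the desired output does not beat the current argmax by the margin, correct the weights. This is a hedged illustration of the setting (a margin-perceptron-style step), not the paper's inverse-optimization algorithms; features, argmax_oracle, and eta are assumed placeholders.

        def margin_update(w, features, y_star, argmax_oracle, margin=1.0, eta=0.1):
            # w: feature weights; features(y): feature vector of output y;
            # y_star: desired (correct) output; argmax_oracle(w): best-scoring
            # feasible output under the current weights.
            score = lambda y: sum(wi * fi for wi, fi in zip(w, features(y)))
            y_hat = argmax_oracle(w)
            if y_hat != y_star and score(y_star) < score(y_hat) + margin:
                # Margin violated: move weights toward the desired output.
                w = [wi + eta * (fs - fh)
                     for wi, fs, fh in zip(w, features(y_star), features(y_hat))]
            return w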

    DPP-PMRF: Rethinking Optimization for a Probabilistic Graphical Model Using Data-Parallel Primitives

    We present a new parallel algorithm for probabilistic graphical model optimization. The algorithm relies on data-parallel primitives (DPPs), which provide performance portability across hardware architectures. We evaluate results on CPUs and GPUs for an image segmentation problem. Compared to a serial baseline, we observe runtime speedups of up to 13X (CPU) and 44X (GPU). We also compare our performance to a reference OpenMP-based algorithm and find speedups of up to 7X (CPU).
    Comment: LDAV 2018, October 2018
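
    As a rough illustration of phrasing MRF optimization in data-parallel terms, the sketch below performs one ICM-style sweep of a Potts segmentation model, with NumPy array operations standing in for portable DPPs (maps, gathers, and a per-pixel reduce). This is an assumed simplification for exposition, not the authors' DPP-PMRF algorithm.

        import numpy as np

        def icm_sweep(labels, unary, beta=1.0):
            # labels: (H, W) integer label image; unary: (H, W, K) per-pixel,
            # per-label data costs. Boundary handling is toroidal (np.roll)
            # purely for brevity.
            K = unary.shape[2]
            cost = unary.astype(float)
            for k in range(K):
                for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    # Neighborhood "gather" + elementwise "map": Potts penalty
                    # for each neighbor that disagrees with label k.
                    nb = np.roll(labels, shift, axis=(0, 1))
                    cost[:, :, k] += beta * (nb != k)
            return cost.argmin(axis=2)   # independent per-pixel "reduce"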

    A note on the data-driven capacity of P2P networks

    We consider two capacity problems in P2P networks. In the first one, the nodes have an infinite amount of data to send and the goal is to allocate their uplink bandwidths optimally so that the demands of every peer in terms of receiving data rate are met. We solve this problem through a mapping from a node-weighted graph featuring two labels per node to a max flow problem on an edge-weighted bipartite graph. In the second problem under consideration, the resource allocation is driven by the availability of the data resource that the peers are interested in sharing. That is, a node cannot allocate its uplink resources unless it first has data to transmit. The problem of uplink bandwidth allocation is then equivalent to constructing a set of directed trees in the overlay such that the number of nodes receiving the data is maximized while the uplink capacities of the peers are not exceeded. We show that the problem is NP-complete, and provide a linear programming decomposition that decouples it into a master problem and multiple slave subproblems solvable in polynomial time. We also design a heuristic algorithm that computes a suboptimal solution in reasonable time. This algorithm requires only local knowledge at each node, so it lends itself to distributed implementations. We analyze both problems through a series of simulation experiments featuring different network sizes and densities. On large networks, we compare our heuristic and its variants with a genetic algorithm and show that our heuristic computes the better resource allocation. On smaller networks, we contrast these results with those of the exact algorithm and show that resource allocations fulfilling a large part of the peers' demands can be found, even for hard configurations where no resources are in excess.
    Comment: 10 pages, technical report assisting a submission
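
    The first reduction can be sketched compactly. Below is a hedged illustration using networkx: each peer is split into an upload copy (capacitated by its uplink bandwidth) and a download copy (capacitated by its demanded rate), and the demands are satisfiable iff the max flow saturates every sink edge. The paper's two-labels-per-node mapping is simplified here, and all names and numbers are illustrative.

        import networkx as nx

        def demands_met(uplink, demand, can_serve):
            # uplink[i]: upload capacity of peer i; demand[j]: required
            # download rate of peer j; can_serve: pairs (i, j) where i may
            # send data to j.
            G = nx.DiGraph()
            for i, cap in uplink.items():
                G.add_edge("s", ("u", i), capacity=cap)
            for j, rate in demand.items():
                G.add_edge(("d", j), "t", capacity=rate)
            for i, j in can_serve:
                G.add_edge(("u", i), ("d", j))  # no attribute = infinite capacity
            value, _ = nx.maximum_flow(G, "s", "t")
            return value >= sum(demand.values())

        print(demands_met({1: 3, 2: 2}, {1: 2, 2: 2}, [(1, 2), (2, 1)]))  # -> True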

    Network Interdiction Using Adversarial Traffic Flows

    Traditional network interdiction refers to the problem of an interdictor trying to reduce the throughput of network users by removing network edges. In this paper, we propose a new paradigm for network interdiction that models scenarios, such as stealth DoS attacks, where the interdiction is performed by injecting adversarial traffic flows. Under this paradigm, we first study the deterministic flow interdiction problem, where the interdictor has perfect knowledge of the operation of network users. We show that the problem is highly inapproximable on general networks and is NP-hard even when the network is acyclic. We then propose an algorithm that achieves a logarithmic approximation ratio and quasi-polynomial time complexity for acyclic networks by harnessing the submodularity of the problem. Next, we investigate the robust flow interdiction problem, which adopts the robust optimization framework to capture the case where definitive knowledge of the operation of network users is not available. We design an approximation framework that integrates the aforementioned algorithm, yielding a quasi-polynomial time procedure with a poly-logarithmic approximation ratio for the more challenging robust flow interdiction. Finally, we evaluate the performance of the proposed algorithms through simulations, showing that they can be implemented efficiently and yield near-optimal solutions.
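
    Submodular structure of this kind is typically attacked with a greedy scheme; a generic, hedged sketch of cost-scaled greedy maximization under a budget is shown below. Here f would measure the throughput reduction achieved by a set of adversarial flow injections; the concrete f, candidates, and cost are placeholders, and this is not the paper's exact procedure.

        def greedy_interdict(candidates, f, cost, budget):
            # candidates: set of possible injections. Repeatedly add the
            # affordable candidate with the highest marginal gain per unit
            # cost; for monotone submodular f, this style of greedy underlies
            # logarithmic-factor guarantees.
            chosen, spent = set(), 0.0
            while True:
                best, best_ratio = None, 0.0
                for c in candidates - chosen:
                    if spent + cost(c) > budget:
                        continue
                    ratio = (f(chosen | {c}) - f(chosen)) / cost(c)
                    if ratio > best_ratio:
                        best, best_ratio = c, ratio
                if best is None:
                    return chosen
                chosen.add(best)
                spent += cost(best)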

    The Limitations of Optimization from Samples

    In this paper we consider the following question: can we optimize objective functions from the same training data we use to learn them? We formalize this question through a novel framework we call optimization from samples (OPS). In OPS, we are given sampled values of a function drawn from some distribution, and the objective is to optimize the function under some constraint. While there are interesting classes of functions that can be optimized from samples, our main result is an impossibility. We show that there are classes of functions which are statistically learnable and optimizable, but for which no reasonable approximation for optimization from samples is achievable. In particular, our main result shows that there is no constant-factor approximation for maximizing coverage functions under a cardinality constraint using polynomially many samples drawn from any distribution. We also show tight approximation guarantees for maximization under a cardinality constraint for several interesting classes of functions, including unit-demand, additive, and general monotone submodular functions, as well as a constant-factor approximation for monotone submodular functions with bounded curvature.
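
    The OPS setting itself is easy to sketch under stated assumptions: we observe only (set, value) samples of an unknown function, fit a surrogate, and pick the top-k elements. The naive additive surrogate below is an assumption chosen for illustration; this sketch shows the framework, not the impossibility construction.

        def ops_top_k(samples, ground_set, k):
            # samples: iterable of (set S, observed f(S)) pairs drawn from
            # some distribution; return a k-element set chosen using only
            # the samples.
            total = {e: 0.0 for e in ground_set}
            count = {e: 0 for e in ground_set}
            for S, value in samples:
                for e in S:
                    total[e] += value
                    count[e] += 1
            # Naive additive surrogate: average observed value of the sampled
            # sets containing each element.
            score = {e: total[e] / count[e] if count[e] else 0.0 for e in ground_set}
            return set(sorted(ground_set, key=lambda e: -score[e])[:k])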