    Approximation Algorithms for Union and Intersection Covering Problems

    In a classical covering problem, we are given a set of requests that we need to satisfy (fully or partially) by buying a subset of items at minimum cost. For example, in the k-MST problem we want to find the cheapest tree spanning at least k nodes of an edge-weighted graph; here nodes and edges represent requests and items, respectively. In this paper, we initiate the study of a new family of multi-layer covering problems. Each such problem consists of a collection of h distinct instances of a standard covering problem (layers), with the constraint that all layers share the same set of requests. We identify two main subfamilies of these problems: in a union multi-layer problem, a request is satisfied if it is satisfied in at least one layer; in an intersection multi-layer problem, a request is satisfied if it is satisfied in all layers. To see some natural applications, consider the two corresponding generalizations of k-MST. Union k-MST can model a problem where we are asked to connect a set of users to at least one of two communication networks, e.g., a wireless and a wired network. Intersection k-MST, on the other hand, can formalize the problem of connecting a subset of users to both electricity and water. We present a number of hardness and approximation results for union and intersection versions of several standard optimization problems: MST, Steiner tree, set cover, facility location, TSP, and their partial covering variants.
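
    For concreteness, the two satisfaction rules can be written schematically as follows (the predicate S_i(r), "request r is satisfied in layer i", is our notation rather than the paper's):

        % h layers over a common request set R; S_i(r) holds iff layer i satisfies r
        \text{union:}        \quad r \text{ satisfied} \iff \exists\, i \in \{1, \dots, h\} : S_i(r)
        \text{intersection:} \quad r \text{ satisfied} \iff \forall\, i \in \{1, \dots, h\} : S_i(r)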

    Network Design via Core Detouring for Problems Without a Core

    Some of the currently best-known approximation algorithms for network design are based on random sampling. One of the key steps of such algorithms is connecting a set of source nodes to a random subset of them. In a recent work [Eisenbrand, Grandoni, Rothvoß, Schäfer, SODA'08], a new technique, core-detouring, is described to bound the mentioned connection cost. This is achieved by defining a suboptimal connection scheme in which paths are detoured through a proper connected subgraph (the core). The cost of the detoured paths is bounded against the cost of the core and the distances from the sources to the core. The analysis then boils down to proving the existence of a convenient core. For some problems, such as connected facility location and single-sink rent-or-buy, the choice of the core is obvious (namely, the Steiner tree in the optimum solution). Other, more complex network design problems do not exhibit any such core. In this paper we show that core-detouring can nonetheless be successfully applied. The basic idea is to construct a convenient core by manipulating the optimal solution in a proper (not necessarily trivial) way. We illustrate this by presenting improved approximation algorithms for two well-studied problems: virtual private network design and single-sink buy-at-bulk.
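
    Schematically, the core-detouring bound has the shape below; this is a hedged rendering of the statement in the abstract, with placeholder constants alpha and beta of ours (in the actual analysis they depend on the random sampling step):

        % S: set of sources; c(core): cost of the core subgraph;
        % d(s, core): distance from source s to the core
        \mathrm{cost}(\text{detoured paths}) \;\le\; \alpha \cdot c(\mathrm{core}) \;+\; \beta \sum_{s \in S} d(s, \mathrm{core})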

    Approximating Connected Facility Location Problems via Random Facility Sampling and Core Detouring

    We present a simple randomized algorithmic framework for connected facility location problems. The basic idea is as follows: we run a black-box approximation algorithm for the unconnected facility location problem, randomly sample the clients, and open the facilities serving sampled clients in the approximate solution. Via a novel analytical tool, which we term core detouring, we show that this approach significantly improves on the previously best known approximation ratios for several NP-hard network design problems. For example, we reduce the approximation ratio for the connected facility location problem from 8.55 to 4.00, and for the single-sink rent-or-buy problem from 3.55 to 2.92. We show that our connected facility location algorithms can be derandomized at the expense of a slightly worse approximation ratio. The versatility of our framework is demonstrated by devising improved approximation algorithms for other related problems as well.
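
    The framework is simple enough to sketch in code. The following is a minimal sketch under our own naming: ufl_approx, steiner_approx, and dist are hypothetical black boxes standing in for an unconnected facility location approximation, a Steiner tree approximation, and the underlying metric, and the sampling probability p is a parameter that the paper's analysis tunes.

        import random

        def connected_facility_location(clients, facilities, p,
                                        ufl_approx, steiner_approx, dist):
            """Sketch of the random-facility-sampling framework (names are ours)."""
            # Step 1: solve the unconnected problem with a black-box algorithm,
            # obtaining open facilities and a client -> facility assignment.
            open_fac, assign = ufl_approx(clients, facilities)

            # Step 2: sample each client independently with probability p.
            sampled = [c for c in clients if random.random() < p]

            # Step 3: keep only the facilities serving sampled clients.
            kept = {assign[c] for c in sampled}
            # Corner case: no client sampled; keep one facility arbitrarily
            # (a stub -- the paper's algorithm handles this more carefully).
            if not kept:
                kept = {next(iter(open_fac))}

            # Step 4: connect the kept facilities by an approximate Steiner tree.
            tree = steiner_approx(kept)

            # Step 5: assign every client to its nearest kept facility.
            final_assign = {c: min(kept, key=lambda f: dist(c, f))
                            for c in clients}
            return kept, tree, final_assign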

    On critical service recovery after massive network failures

    This paper addresses the problem of efficiently restoring sufficient resources in a communications network to support the demand of mission-critical services after a large-scale disruption. We formulate the problem as a mixed integer linear program and show that it is NP-hard. We propose a polynomial-time heuristic, called iterative split and prune (ISP), that recursively decomposes the original problem into smaller ones until it determines the set of network components to be restored. ISP's decisions are guided by a new notion of demand-based centrality of nodes. We performed extensive simulations, varying the topologies, the demand intensity, the number of critical services, and the disruption model. Compared with several greedy approaches, ISP performs better in terms of total cost of repaired components and does not incur any demand loss. It performs very close to the optimum when demand is low relative to the supply network capacities, thanks to the algorithm's ability to maximize the sharing of repaired resources.
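
    The abstract does not define demand-based centrality precisely; one plausible instantiation is a demand-weighted measure in which a node's score accumulates the volume of every critical demand whose shortest route passes through it. A minimal sketch under that assumption (the unweighted-graph model and the helper bfs_path are our simplifications, not the paper's definition):

        from collections import deque

        def bfs_path(adj, s, t):
            """Return one shortest s-t path in an unweighted graph, or None."""
            parent = {s: None}
            q = deque([s])
            while q:
                u = q.popleft()
                if u == t:
                    path = [t]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                for v in adj[u]:
                    if v not in parent:
                        parent[v] = u
                        q.append(v)
            return None

        def demand_centrality(adj, demands):
            """demands: list of (source, sink, volume) for critical services."""
            score = {v: 0.0 for v in adj}
            for s, t, vol in demands:
                path = bfs_path(adj, s, t)
                if path:
                    for v in path:
                        score[v] += vol
            return score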

    Simple cost sharing schemes for multicommodity rent-or-buy and stochastic Steiner tree

    Designing Network Protocols for Good Equilibria

    Designing and deploying a network protocol determines the rules by which end users interact with each other and with the network. We consider the problem of designing a protocol to optimize the equilibrium behavior of a network with selfish users. We study network cost-sharing games, where the set of Nash equilibria depends fundamentally on the choice of an edge cost-sharing protocol. Previous research focused on the Shapley protocol, in which the cost of each edge is shared equally among its users. We systematically study the design of optimal cost-sharing protocols for undirected and directed graphs, single-sink and multicommodity networks, and different measures of the inefficiency of equilibria. Our primary technical tool is a precise characterization of the cost-sharing protocols that induce only network games with pure-strategy Nash equilibria. We use this characterization to prove, among other results, that the Shapley protocol is optimal in directed graphs and that simple priority protocols are essentially optimal in undirected graphs.
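
    In this setting the Shapley protocol has a simple concrete form: the cost of each edge is split equally among the players whose paths use it. A small self-contained illustration (the data at the bottom is invented for the example):

        def shapley_shares(edge_cost, player_paths):
            """Equal-split (Shapley) cost shares per player.

            edge_cost: dict edge -> cost; player_paths: dict player -> set of edges.
            """
            # Count how many players use each edge.
            users = {}
            for path in player_paths.values():
                for e in path:
                    users[e] = users.get(e, 0) + 1
            # Each user of edge e pays cost(e) / (#users of e).
            return {p: sum(edge_cost[e] / users[e] for e in path)
                    for p, path in player_paths.items()}

        # Two players share edge 'a'; player 1 alone also uses edge 'b'.
        cost = {'a': 6.0, 'b': 2.0}
        paths = {1: {'a', 'b'}, 2: {'a'}}
        print(shapley_shares(cost, paths))  # {1: 5.0, 2: 3.0}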

    Approximation Algorithms for Distributionally Robust Stochastic Optimization

    Two-stage stochastic optimization is a widely used framework for modeling uncertainty, where we have a probability distribution over possible realizations of the data, called scenarios, and decisions are taken in two stages: we take first-stage actions knowing only the underlying distribution, before a scenario is realized, and may take additional second-stage recourse actions after a scenario is realized. The goal is typically to minimize the total expected cost. A common criticism levied at this model is that the underlying probability distribution is itself often imprecise. To address this, an approach that is quite versatile and has gained popularity in the stochastic-optimization literature is the two-stage distributionally robust stochastic model: given a collection D of probability distributions, the goal now is to minimize the maximum expected total cost with respect to a distribution in D.

    However, there has been almost no prior work on developing approximation algorithms for distributionally robust problems where the underlying scenario collection is discrete, as is the case with discrete-optimization problems. We provide frameworks for designing approximation algorithms in such settings when the collection D is a ball around a central distribution, defined relative to two notions of distance between probability distributions: Wasserstein metrics (which include the L_1 metric) and the L_infinity metric. Our frameworks yield efficient algorithms even in settings with an exponential number of scenarios, where the central distribution may only be accessed via a sampling oracle.

    For distributionally robust optimization under a Wasserstein ball, we first show that one can utilize the sample average approximation (SAA) method (solve the distributionally robust problem with an empirical estimate of the central distribution) to reduce the problem to the case where the central distribution has a polynomial-size support and is represented explicitly. This follows because we argue that a distributionally robust problem can be reduced in a novel way to a standard two-stage stochastic problem with bounded inflation factor, which enables one to use the SAA machinery developed for two-stage stochastic problems. Complementing this, we show how to approximately solve a fractional relaxation of the SAA problem (i.e., the distributionally robust problem obtained by replacing the original central distribution with its empirical estimate). Unlike in two-stage stochastic and robust optimization with polynomially many scenarios, this turns out to be quite challenging. We utilize a variant of the ellipsoid method for convex optimization in conjunction with several new ideas to show that the SAA problem can be approximately solved, provided that we have an (approximation) algorithm for a certain max-min problem that is akin to, and generalizes, the k-max-min problem (find the worst-case scenario consisting of at most k elements) encountered in two-stage robust optimization. We obtain such an algorithm for various discrete-optimization problems; by complementing this with rounding algorithms that provide local (i.e., per-scenario) approximation guarantees, we obtain the first approximation algorithms for the distributionally robust versions of a variety of discrete-optimization problems, including set cover, vertex cover, edge cover, facility location, and Steiner tree, with guarantees that are, except for set cover, within O(1) factors of the guarantees known for the deterministic version of the problem.

    For distributionally robust optimization under an L_infinity ball, we consider a fractional relaxation of the problem and replace its objective function with a proxy function that is pointwise close to the true objective function (within a factor of 2). We then show that we can efficiently compute approximate subgradients of the proxy function, provided that we have an algorithm for computing the t worst scenarios under a given first-stage decision, for a given integer t. We can then approximately minimize the proxy function via a variant of the ellipsoid method, and thus obtain an approximate solution for the fractional relaxation of the distributionally robust problem. Complementing this with rounding algorithms with local guarantees, we obtain approximation algorithms for distributionally robust versions of various covering problems, including set cover, vertex cover, edge cover, and facility location, with guarantees that are within O(1) factors of the guarantees known for their deterministic versions.
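
    In symbols, the two-stage distributionally robust model described above can be written as follows (a schematic rendering in our notation: x is the first-stage decision with cost c(x), q(x, A) is the optimal second-stage recourse cost in scenario A, and D is the ambiguity set, e.g., a Wasserstein or L_infinity ball around the central distribution):

        \min_{x \in \mathcal{X}} \; \Big( c(x) \;+\; \max_{p \in \mathcal{D}} \; \mathbb{E}_{A \sim p}\big[\, q(x, A) \,\big] \Big)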