    Approximating Source Location and Star Survivable Network Problems

    Full text link
    In Source Location (SL) problems the goal is to select a minimum-cost source set S ⊆ V such that the connectivity (or flow) ψ(S, v) from S to any node v is at least the demand d_v of v. In many SL problems ψ(S, v) = d_v if v ∈ S, namely, the demand of nodes selected to S is completely satisfied. In a node-connectivity variant suggested recently by Fukunaga, every node v gets a "bonus" p_v ≤ d_v if it is selected to S. Fukunaga showed that for undirected graphs one can achieve ratio O(k ln k) for his variant, where k = max_{v∈V} d_v is the maximum demand. We improve this by achieving ratio min{p* ln k, k} · O(ln(k/q*)) for a more general version with node capacities, where p* = max_{v∈V} p_v is the maximum bonus and q* = min_{v∈V} q_v is the minimum capacity. In particular, for the most natural case p* = 1 considered by Fukunaga, we improve the ratio from O(k ln k) to O(ln^2 k). We also get ratio O(k) for the edge-connectivity version, for which no ratio that depends on k only was known before. To derive these results, we consider a particular case of the Survivable Network (SN) problem when all edges of positive cost form a star. We give ratio O(min{ln n, ln^2 k}) for this variant, improving over the best ratio known for the general case, O(k^3 ln n), of Chuzhoy and Khanna.
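    To make the flow requirement concrete, the following is a minimal feasibility check for the edge-connectivity (flow) version of SL, assuming the networkx library and nonnegative edge capacities; the function name check_source_set, the super-source construction, and the demands dictionary are illustrative choices, not notation from the paper.

        # Hedged sketch: is a candidate source set S feasible for the
        # edge-connectivity version of Source Location (flow >= demand)?
        import networkx as nx

        def check_source_set(G, S, demands):
            """Return True if every node v receives flow at least demands[v] from S."""
            # Each undirected edge becomes two arcs carrying the same 'capacity' attribute.
            H = nx.DiGraph(G)
            super_src = "_super_source"      # auxiliary node standing in for the whole set S
            for s in S:
                H.add_edge(super_src, s)     # no 'capacity' attribute => treated as unbounded
            for v, d_v in demands.items():
                if v in S:                   # demand of a selected node is fully satisfied
                    continue
                if nx.maximum_flow_value(H, super_src, v) < d_v:
                    return False
            return True

        # Toy usage: a 4-cycle with unit capacities; every node can be reached
        # from node 0 by two edge-disjoint paths, so demand 2 is met everywhere.
        G = nx.cycle_graph(4)
        nx.set_edge_attributes(G, 1, "capacity")
        print(check_source_set(G, {0}, {v: 2 for v in G}))   # True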

    Theoretically Efficient Parallel Graph Algorithms Can Be Fast and Scalable

    Full text link
    There has been significant recent interest in parallel graph processing due to the need to quickly analyze the large graphs available today. Many graph codes have been designed for distributed memory or external memory. However, today even the largest publicly available real-world graph (the Hyperlink Web graph with over 3.5 billion vertices and 128 billion edges) can fit in the memory of a single commodity multicore server. Nevertheless, most experimental work in the literature reports results on much smaller graphs, and the works that do handle the Hyperlink graph use distributed or external memory. Therefore, it is natural to ask whether we can efficiently solve a broad class of graph problems on this graph in memory. This paper shows that theoretically-efficient parallel graph algorithms can scale to the largest publicly available graphs using a single machine with a terabyte of RAM, processing them in minutes. We give implementations of theoretically-efficient parallel algorithms for 20 important graph problems. We also present the optimizations and techniques that we used in our implementations, which were crucial in enabling us to process these large graphs quickly. We show that our implementations outperform existing state-of-the-art implementations on the largest real-world graphs. For many of the problems that we consider, this is the first time they have been solved on graphs at this scale. We have made the implementations developed in this work publicly available as the Graph-Based Benchmark Suite (GBBS). Comment: This is the full version of the paper appearing in the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 2018.
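    A back-of-envelope calculation (mine, not the paper's) of why a terabyte of RAM is the relevant scale for this graph, assuming an uncompressed CSR-style layout with 8-byte vertex identifiers:

        # Rough memory estimate for the Hyperlink Web graph, using the vertex and
        # edge counts quoted in the abstract; the 8-byte-per-entry layout is an
        # assumption for illustration, not the representation used in the paper.
        vertices = 3.5e9
        edges = 128e9
        bytes_per_id = 8                                # plain 64-bit identifiers

        adjacency_bytes = edges * bytes_per_id          # one endpoint per edge slot
        offsets_bytes = vertices * bytes_per_id         # CSR-style offset array
        total_tb = (adjacency_bytes + offsets_bytes) / 1e12
        print(f"~{total_tb:.2f} TB uncompressed")       # ~1.05 TB, right at the 1 TB budget

    Four-byte identifiers or a compressed encoding would roughly halve this or better, consistent with the graph fitting on a single 1 TB machine.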

    Designing Networks with Good Equilibria under Uncertainty

    Get PDF
    We consider the problem of designing network cost-sharing protocols with good equilibria under uncertainty. The underlying game is a multicast game in a rooted undirected graph with nonnegative edge costs. A set of k terminal vertices or players needs to establish connectivity with the root. The social optimum is the Minimum Steiner Tree. We are interested in situations where the designer has incomplete information about the input. We propose two different models, the adversarial and the stochastic. In both models, the designer has prior knowledge of the underlying metric, but the requested subset of the players is not known and is activated either in an adversarial manner (adversarial model) or is drawn from a known probability distribution (stochastic model). In the adversarial model, the designer's goal is to choose a single, universal protocol that has low Price of Anarchy (PoA) for all possible requested subsets of players. The main question we address is: to what extent can prior knowledge of the underlying metric help in the design? We first demonstrate that there exist graphs (outerplanar) where knowledge of the underlying metric can dramatically improve the performance of good network design. Then, in our main technical result, we show that there exist graph metrics for which knowing the underlying metric does not help and any universal protocol has PoA of Ω(log k), which is tight. We attack this problem by developing new techniques that employ powerful tools from extremal combinatorics, and more specifically Ramsey Theory in high-dimensional hypercubes. Then we switch to the stochastic model, where each player is independently activated. We show that there exists a randomized ordered protocol that achieves constant PoA. By using standard derandomization techniques, we produce a deterministic ordered protocol with constant PoA. Comment: This version has additional results about stochastic input.
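    For readers less familiar with the objective, the following restates it in symbols; the notation (Ξ for a protocol, Eq(Ξ, A) for its pure Nash equilibria, opt(A) for the optimal Steiner tree cost) is mine, not the paper's.

        % Schematic restatement of the adversarial design objective.
        % For a cost-sharing protocol \Xi and an activated player set A \subseteq [k]:
        \[
          \mathrm{PoA}(\Xi, A) \;=\; \max_{\sigma \in \mathrm{Eq}(\Xi, A)}
              \frac{c(\sigma)}{\mathrm{opt}(A)},
        \]
        % where c(\sigma) is the total cost of the edges bought at equilibrium \sigma and
        % \mathrm{opt}(A) is the cost of a minimum Steiner tree connecting A to the root.
        % Knowing the metric but not A, the designer seeks a universal protocol achieving
        \[
          \min_{\Xi} \; \max_{A \subseteq [k]} \; \mathrm{PoA}(\Xi, A),
        \]
        % and the lower bound above says this value can be forced to \Omega(\log k) on some metrics.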

    Approximating Source Location and Star Survivable Network Problems

    Full text link
    In Source Location (SL) problems the goal is to select a minimum cost source set S ⊆ V such that the connectivity (or flow) ψ(S, v) from S to any node v is at least the demand d_v of v. In many SL problems ψ(S, v) = d_v if v ∈ S, namely, the demand of nodes selected to S is completely satisfied. In a node-connectivity variant suggested recently by Fukunaga [6], every node v gets a “bonus” p_v ≤ d_v if it is selected to S, namely, ψ(S, v) = p_v + κ(S \ {v}, v) if v ∈ S and ψ(S, v) = κ(S, v) otherwise, where κ(S, v) is the maximum number of internally disjoint (S, v)-paths. While the approximability of many SL problems was seemingly settled to Θ(ln d(V)) in [18], Fukunaga [6] showed that for undirected graphs one can achieve ratio O(k ln k) for his variant, where k = max_{v∈V} d_v is the maximum demand. We improve this by achieving ratio min{p* ln k, k} · O(ln(k/q*)) for a more general version with node capacities, where p* = max_{v∈V} p_v is the maximum bonus and q* = min_{v∈V} q_v is the minimum capacity. In particular, for the most natural case p* = 1 considered in [6] we improve the ratio from O(k ln k) to O(ln^2 k). Our result also implies ratio k for the edge-connectivity version. To derive these results, we consider a particular case of the Survivable Network (SN) problem when all edges of positive cost form a star. We give ratio O(min{ln n, ln^2 k}) for this variant, improving over the best ratio known for the general case, O(k^3 ln n), of Chuzhoy and Khanna [3]. In addition, we show that directed SL with unit costs is Ω(log n)-hard to approximate even for 0,1 demands, while SL with uniform demands can be solved in polynomial time. Finally, we consider a generalization of SL where we also have edge-costs {c_e : e ∈ E} and flow-cost bounds {b_v : v ∈ V}, and require that for every node v, the minimum cost of a flow of value d_v from S to v is at most b_v. We show that this problem admits approximation ratio O(ln d(V) + ln(n c(E) − b(V))).
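    The bonus formula above is easy to state in code for the node-connectivity variant. The following is a hedged sketch, assuming networkx; the helper name psi and the super-source construction are mine, not the paper's.

        # Sketch of psi(S, v) = p_v + kappa(S \ {v}, v) if v in S, else kappa(S, v),
        # where kappa(T, v) is the maximum number of internally disjoint (T, v)-paths.
        import networkx as nx
        from networkx.algorithms.connectivity import local_node_connectivity

        def psi(G, S, v, bonus):
            """Connectivity supplied to node v by the source set S, with bonuses p_v."""
            sources = set(S) - {v}        # v itself contributes only its bonus p_v
            if not sources:
                return bonus.get(v, 0) if v in S else 0
            H = G.copy()
            aux = "_super_source"         # glued to every source by an ordinary edge
            for s in sources:
                H.add_edge(aux, s)
            # aux is non-adjacent to v, so local node connectivity counts exactly the
            # (S, v)-paths that pairwise share only the node v.
            kappa = local_node_connectivity(H, aux, v)
            return kappa + (bonus.get(v, 0) if v in S else 0)

    A source set S is then feasible when psi(G, S, v, p) ≥ d_v holds for every node v.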

    Minimum Cost Topology Construction for Rural Wireless Mesh Networks

    Get PDF

    Greedy Algorithms for Online Survivable Network Design

    Get PDF
    In an instance of the network design problem, we are given a graph G=(V,E), an edge-cost function c:E -> R^{>= 0}, and a connectivity criterion. The goal is to find a minimum-cost subgraph H of G that meets the connectivity requirements. An important family of this class is the survivable network design problem (SNDP): given non-negative integers r_{uv} for each pair u,v in V, the solution subgraph H should contain r_{uv} edge-disjoint paths for each pair u and v. While this problem is known to admit good approximation algorithms in the offline case, the problem is much harder in the online setting. Gupta, Krishnaswamy, and Ravi [Gupta et al., 2012] (STOC'09) are the first to consider the online survivable network design problem. They demonstrate an algorithm with competitive ratio of O(k log^3 n), where k=max_{u,v} r_{uv}. Note that the competitive ratio of the algorithm by Gupta et al. grows linearly in k. Since then, an important open problem in the online community [Naor et al., 2011; Gupta et al., 2012] is whether the linear dependence on k can be reduced to a logarithmic dependency. Consider an online greedy algorithm that connects every demand by adding a minimum cost set of edges to H. Surprisingly, we show that this greedy algorithm significantly improves the competitive ratio when a congestion of 2 is allowed on the edges or when the model is stochastic. While our algorithm is fairly simple, our analysis requires a deep understanding of k-connected graphs. In particular, we prove that the greedy algorithm is O(log^2 n log k)-competitive if one satisfies every demand between u and v by r_{uv}/2 edge-disjoint paths. The spirit of our result is similar to the work of Chuzhoy and Li [Chuzhoy and Li, 2012] (FOCS'12), in which the authors give a polylogarithmic approximation algorithm for edge-disjoint paths with congestion 2. Moreover, we study the greedy algorithm in the online stochastic setting. We consider the i.i.d. model, where each online demand is drawn from a single probability distribution, the unknown i.i.d. model, where every demand is drawn from a single but unknown probability distribution, and the prophet model in which online demands are drawn from (possibly) different probability distributions. Through a different analysis, we prove that a similar greedy algorithm is constant competitive for the i.i.d. and the prophet models. Also, the greedy algorithm is O(log n)-competitive for the unknown i.i.d. model, which is almost tight due to the lower bound of [Garg et al., 2008] for single connectivity.
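    The greedy rule above is easy to illustrate for the simplest case r_{uv} = 1, where each arriving demand just needs one path: buy a cheapest path, treating already-bought edges as free. This is only a hedged sketch of that special case, assuming networkx; the class name and structure are mine, and the general r_{uv} > 1 augmentation step analyzed in the paper is more involved.

        # Online greedy for the r_uv = 1 special case: each demand buys a cheapest
        # path under prices where already-purchased edges cost nothing.
        import networkx as nx

        class OnlineGreedy:
            def __init__(self, G):
                self.G = G                       # edges carry a nonnegative 'cost' attribute
                self.bought = set()              # purchased edges, stored as frozensets

            def _price(self, u, v, data):
                return 0 if frozenset((u, v)) in self.bought else data["cost"]

            def serve(self, u, v):
                """Satisfy an arriving demand (u, v); return the incremental cost paid."""
                path = nx.shortest_path(self.G, u, v, weight=self._price)
                cost = 0
                for a, b in zip(path, path[1:]):
                    e = frozenset((a, b))
                    if e not in self.bought:
                        cost += self.G[a][b]["cost"]
                        self.bought.add(e)
                return cost

        # Toy usage on a triangle: the second demand rides edges bought for the first.
        G = nx.Graph()
        G.add_weighted_edges_from([("a", "b", 1), ("b", "c", 1), ("a", "c", 3)], weight="cost")
        algo = OnlineGreedy(G)
        print(algo.serve("a", "c"))   # 2: buys a-b and b-c
        print(algo.serve("a", "b"))   # 0: edge a-b is already bought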