
    On the Approximability and Hardness of the Minimum Connected Dominating Set with Routing Cost Constraint

    In the problem of minimum connected dominating set with routing cost constraint, we are given a graph $G=(V,E)$, and the goal is to find the smallest connected dominating set $D$ of $G$ such that, for any two non-adjacent vertices $u$ and $v$ in $G$, the number of internal nodes on the shortest path between $u$ and $v$ in the subgraph of $G$ induced by $D \cup \{u,v\}$ is at most $\alpha$ times that in $G$. For general graphs, the only known previous approximability result is an $O(\log n)$-approximation algorithm ($n=|V|$) for $\alpha = 1$ by Ding et al. For any constant $\alpha > 1$, we give an $O(n^{1-\frac{1}{\alpha}}(\log n)^{\frac{1}{\alpha}})$-approximation algorithm. When $\alpha \geq 5$, we give an $O(\sqrt{n}\log n)$-approximation algorithm. Finally, we prove that, when $\alpha = 2$, unless $NP \subseteq DTIME(n^{\mathrm{poly}\log n})$, for any constant $\epsilon > 0$, the problem admits no polynomial-time $2^{\log^{1-\epsilon} n}$-approximation algorithm, improving upon the $\Omega(\log n)$ bound by Du et al. (albeit under a stronger hardness assumption).
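
    To make the routing-cost condition concrete, the sketch below brute-force checks whether a candidate set $D$ satisfies it: $D$ must dominate $G$, induce a connected subgraph, and, for every non-adjacent pair $u, v$, routing through $G[D \cup \{u,v\}]$ may use at most $\alpha$ times as many internal nodes as a shortest path in $G$ does. This is only a feasibility checker written for this summary (plain-Python BFS over adjacency dictionaries; the function names are ours), not the approximation algorithm from the paper.

```python
# Hedged sketch: verify the alpha-routing-cost CDS property by brute force.
from collections import deque

def bfs_dist(adj, src):
    """Unweighted shortest-path distances from src over an adjacency dict."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def induced(adj, nodes):
    """Adjacency dict of the subgraph induced by the given vertex set."""
    nodes = set(nodes)
    return {u: [w for w in adj[u] if w in nodes] for u in nodes}

def is_alpha_cds(adj, D, alpha):
    D = set(D)
    # (1) D must dominate every vertex of G.
    if any(u not in D and not D.intersection(adj[u]) for u in adj):
        return False
    # (2) G[D] must be connected.
    if D and len(bfs_dist(induced(adj, D), next(iter(D)))) != len(D):
        return False
    # (3) Routing-cost condition: for every non-adjacent, connected pair u, v
    # the number of internal nodes (= distance - 1) when routing through
    # G[D | {u, v}] is at most alpha times that on a shortest u-v path in G.
    for u in adj:
        d_g = bfs_dist(adj, u)
        for v in adj:
            if v == u or v in adj[u] or v not in d_g:
                continue
            d_d = bfs_dist(induced(adj, D | {u, v}), u).get(v)
            if d_d is None or d_d - 1 > alpha * (d_g[v] - 1):
                return False
    return True
```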

    An FPT Algorithm for Minimum Additive Spanner Problem

    For a positive integer t and a graph G, an additive t-spanner of G is a spanning subgraph in which the distance between every pair of vertices is at most the original distance plus t. The Minimum Additive t-Spanner Problem is to find an additive t-spanner with the minimum number of edges in a given graph, which is known to be NP-hard. Since additive t-spanners must respect global distance properties of the graph, the Minimum Additive t-Spanner Problem is difficult to handle, and hence only a few results are known for it. In this paper, we study the Minimum Additive t-Spanner Problem from the viewpoint of parameterized complexity. We formulate a parameterized version of the problem in which the number of removed edges is regarded as a parameter, and give a fixed-parameter algorithm for it. We also extend our result to the case with both a multiplicative approximation factor α and an additive approximation parameter β, which we call (α,β)-spanners.
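
    As a concrete illustration of the definition (not of the paper's FPT algorithm), the following sketch checks whether a spanning subgraph H is an additive t-spanner of G by comparing all-pairs BFS distances; the adjacency-dict representation and function names are our own.

```python
# Minimal sketch: check the additive t-spanner property by all-pairs BFS.
from collections import deque

def all_dists(adj, src):
    """Unweighted shortest-path distances from src over an adjacency dict."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def is_additive_spanner(adj_g, adj_h, t):
    """adj_g, adj_h: adjacency dicts on the same vertex set (H spans G)."""
    for src in adj_g:
        dg = all_dists(adj_g, src)
        dh = all_dists(adj_h, src)
        for v, d in dg.items():
            # every distance may grow by at most the additive term t
            if dh.get(v, float("inf")) > d + t:
                return False
    return True

# Example: the 4-cycle 1-2-3-4-1 is an additive 1-spanner of the same
# cycle plus the chord {1, 3}.
G = {1: [2, 4, 3], 2: [1, 3], 3: [2, 4, 1], 4: [3, 1]}
H = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [3, 1]}
print(is_additive_spanner(G, H, 1))   # True
```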

    Fault-Tolerant Spanners: Better and Simpler

    A natural requirement of many distributed structures is fault-tolerance: after some failures, whatever remains from the structure should still be effective for whatever remains from the network. In this paper we examine spanners of general graphs that are tolerant to vertex failures, and significantly improve their dependence on the number of faults $r$, for all stretch bounds. For stretch $k \geq 3$ we design a simple transformation that converts every $k$-spanner construction with at most $f(n)$ edges into an $r$-fault-tolerant $k$-spanner construction with at most $O(r^3 \log n) \cdot f(2n/r)$ edges. Applying this to standard greedy spanner constructions gives $r$-fault-tolerant $k$-spanners with $\tilde O(r^{2} n^{1+\frac{2}{k+1}})$ edges. The previous construction by Chechik, Langberg, Peleg, and Roditty [STOC 2009] depends similarly on $n$ but exponentially on $r$ (approximately like $k^r$). For the case $k=2$ and unit-length edges, an $O(r \log n)$-approximation algorithm is known from recent work of Dinitz and Krauthgamer [arXiv 2010], where several spanner results are obtained using a common approach of rounding a natural flow-based linear programming relaxation. Here we use a different (stronger) LP relaxation and improve the approximation ratio to $O(\log n)$, which is, notably, independent of the number of faults $r$. We further strengthen this bound in terms of the maximum degree by using the Lovász Local Lemma. Finally, we show that most of our constructions are inherently local by designing equivalent distributed algorithms in the LOCAL model of distributed computation.
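
    For intuition about the object being constructed, the sketch below spells out the definition of an $r$-fault-tolerant $k$-spanner by brute force: for every fault set $F$ of at most $r$ vertices, $H \setminus F$ must remain a (multiplicative) $k$-spanner of $G \setminus F$. The enumeration is exponential in $r$ and is meant only to make the definition concrete on tiny graphs; it is unrelated to the transformation or the LP rounding used in the paper.

```python
# Hedged sketch of the definition of an r-fault-tolerant k-spanner,
# not of the paper's construction. Brute force over all fault sets.
from collections import deque
from itertools import combinations

def dists(adj, src, dead):
    """BFS distances from src, ignoring the faulted vertex set `dead`."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dead and w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def is_ft_spanner(adj_g, adj_h, k, r):
    """adj_g, adj_h: adjacency dicts on the same vertex set (H spans G)."""
    verts = list(adj_g)
    for size in range(r + 1):
        for faults in combinations(verts, size):
            dead = set(faults)
            for src in verts:
                if src in dead:
                    continue
                dg = dists(adj_g, src, dead)
                dh = dists(adj_h, src, dead)
                # stretch condition after faults: d_H(u, v) <= k * d_G(u, v)
                if any(dh.get(v, float("inf")) > k * d for v, d in dg.items()):
                    return False
    return True
```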

    Improved Approximation for the Directed Spanner Problem

    We prove that the size of the sparsest directed $k$-spanner of a graph can be approximated in polynomial time to within a factor of $\tilde{O}(\sqrt{n})$, for all $k \geq 3$. This improves the $\tilde{O}(n^{2/3})$-approximation recently shown by Dinitz and Krauthgamer.

    Fully Dynamic Algorithm for Top-$k$ Densest Subgraphs

    Given a large graph, the densest-subgraph problem asks to find a subgraph with maximum average degree. When considering the top-$k$ version of this problem, a naïve solution is to iteratively find the densest subgraph and remove it in each iteration. However, such a solution is impractical due to high processing cost. The problem is further complicated when dealing with dynamic graphs, since adding or removing an edge requires re-running the algorithm. In this paper, we study the top-$k$ densest-subgraph problem in the sliding-window model and propose an efficient fully-dynamic algorithm. The input of our algorithm consists of an edge stream, and the goal is to find the node-disjoint subgraphs that maximize the sum of their densities. In contrast to existing state-of-the-art solutions that require iterating over the entire graph upon any update, our algorithm profits from the observation that updates only affect a limited region of the graph. Therefore, the top-$k$ densest subgraphs are maintained by only applying local updates. We provide a theoretical analysis of the proposed algorithm and show empirically that the algorithm often generates denser subgraphs than state-of-the-art competitors. Experiments show an improvement in efficiency of up to five orders of magnitude compared to state-of-the-art solutions.
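
    For reference, the naïve static baseline mentioned above can be sketched as follows: repeatedly extract an (approximately) densest subgraph with greedy peeling (Charikar's 2-approximation for the density $|E(S)|/|S|$, i.e. half the average degree), delete its vertices, and repeat $k$ times. This from-scratch recomputation is exactly what the paper's fully dynamic algorithm avoids; the code is our illustration, not the authors' implementation.

```python
# Naive top-k densest baseline: peel, delete, repeat (sketch, not the paper).
def peel_densest(adj):
    """Greedy peeling; returns an (approximately) densest vertex set."""
    adj = {u: set(vs) for u, vs in adj.items()}
    best, best_density = set(adj), 0.0
    live = set(adj)
    while live:
        edges = sum(len(adj[u] & live) for u in live) // 2
        density = edges / len(live)
        if density >= best_density:
            best, best_density = set(live), density
        u = min(live, key=lambda x: len(adj[x] & live))  # min-degree vertex
        live.remove(u)
    return best

def naive_top_k(adj, k):
    """Return up to k node-disjoint (approximately) densest subgraphs."""
    adj = {u: set(vs) for u, vs in adj.items()}
    result = []
    for _ in range(k):
        if not adj:
            break
        s = peel_densest(adj)
        result.append(s)
        # delete the extracted vertices before the next round
        adj = {u: vs - s for u, vs in adj.items() if u not in s}
    return result
```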

    Distance-Preserving Graph Contractions

    Compression and sparsification algorithms are frequently applied in a preprocessing step before analyzing or optimizing large networks/graphs. In this paper we propose and study a new framework for contracting edges of a graph (merging vertices into super-vertices) with the goal of preserving pairwise distances as accurately as possible. Formally, given an edge-weighted graph, the contraction should guarantee that for any two vertices at distance $d$, the corresponding super-vertices remain at distance at least $\varphi(d)$ in the contracted graph, where $\varphi$ is a tolerance function bounding the permitted distance distortion. We present a comprehensive picture of the algorithmic complexity of the contraction problem for affine tolerance functions $\varphi(x) = x/\alpha - \beta$, where $\alpha \geq 1$ and $\beta \geq 0$ are arbitrary real-valued parameters. Specifically, we present polynomial-time algorithms for trees as well as hardness and inapproximability results for different graph classes, precisely separating easy and hard cases. Further, we analyze the asymptotic behavior of the size of contractions, and find efficient algorithms to compute (non-optimal) contractions despite our hardness results.
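
    The sketch below makes the tolerance constraint explicit under one natural reading of the setup (our assumption, not the paper's code): vertices are merged into super-vertices, surviving edges keep their weights, and for every pair $u, v$ at distance $d$ in $G$ the corresponding super-vertices must be at distance at least $\varphi(d) = d/\alpha - \beta$ in the contracted graph.

```python
# Hedged sketch: check the affine tolerance constraint for a given contraction.
import heapq

def dijkstra(adj, src):
    """Weighted shortest-path distances; adj[u] maps neighbor -> edge weight."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def contract(adj, part):
    """Contracted graph on super-vertices; part maps vertex -> super-vertex id."""
    cadj = {p: {} for p in set(part.values())}
    for u, nbrs in adj.items():
        for v, w in nbrs.items():
            a, b = part[u], part[v]
            if a != b and w < cadj[a].get(b, float("inf")):
                cadj[a][b] = w  # keep the lightest surviving edge
    return cadj

def respects_tolerance(adj, part, alpha, beta):
    """Check dist_C(part[u], part[v]) >= phi(dist_G(u, v)) for all pairs."""
    phi = lambda x: x / alpha - beta
    cadj = contract(adj, part)
    for u in adj:
        d_g = dijkstra(adj, u)
        d_c = dijkstra(cadj, part[u])
        for v, d in d_g.items():
            if d_c.get(part[v], float("inf")) < phi(d):
                return False
    return True
```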

    Distance-generalized Core Decomposition

    The $k$-core of a graph is defined as the maximal subgraph in which every vertex is connected to at least $k$ other vertices within that subgraph. In this work we introduce a distance-based generalization of the notion of $k$-core, which we refer to as the $(k,h)$-core, i.e., the maximal subgraph in which every vertex has at least $k$ other vertices at distance at most $h$ within that subgraph. We study the properties of the $(k,h)$-core, showing that it preserves many of the nice features of the classic core decomposition (e.g., its connection with the notion of distance-generalized chromatic number) and that it remains useful for speeding up or approximating distance-generalized notions of dense structures, such as the $h$-club. Computing the distance-generalized core decomposition over large networks is intrinsically complex. However, by exploiting clever upper and lower bounds we can partition the computation into a set of totally independent subcomputations, opening the door to top-down exploration and to multithreading, and thus achieving an efficient algorithm.
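
    A minimal way to compute the $(k,h)$-core is the natural generalization of $k$-core peeling: repeatedly delete any vertex that has fewer than $k$ other vertices within distance $h$ in the current subgraph. The sketch below implements this simple baseline with plain BFS; it is not the bound-based, parallelizable algorithm the paper develops.

```python
# Minimal peeling sketch for the (k,h)-core (baseline, not the paper's method).
from collections import deque

def within_h(adj, src, h, alive):
    """Count vertices other than src reachable within h hops, restricted to alive."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if dist[u] == h:
            continue
        for w in adj[u]:
            if w in alive and w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return len(dist) - 1

def kh_core(adj, k, h):
    alive = set(adj)
    changed = True
    while changed:
        changed = False
        for u in list(alive):
            if within_h(adj, u, h, alive) < k:
                alive.remove(u)   # u cannot belong to the (k,h)-core
                changed = True
    return alive

# Example: a triangle with a pendant vertex. For k=2, h=1 the (2,1)-core is
# the triangle; for h=2 the pendant vertex also survives.
G = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
print(kh_core(G, 2, 1))   # {1, 2, 3}
print(kh_core(G, 2, 2))   # {1, 2, 3, 4}
```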