
    Improved Distance Oracles and Spanners for Vertex-Labeled Graphs

    Consider an undirected weighted graph $G=(V,E)$ with $|V|=n$ and $|E|=m$, where each vertex $v$ is assigned a label from a set $L$ of $\ell$ labels. We show how to construct a compact distance oracle that can answer queries of the form: "what is the distance from $v$ to the closest $\lambda$-labeled node?" for a given node $v \in V$ and label $\lambda \in L$. This problem was introduced by Hermelin, Levy, Weimann and Yuster [ICALP 2011], who present several results for it. In their first result, they show how to construct a vertex-label distance oracle of expected size $O(kn^{1+1/k})$ with stretch $(4k-5)$ and query time $O(k)$. In their second result, they show how to reduce the size of the data structure to $O(kn\ell^{1/k})$ at the expense of a much larger stretch, which grows exponentially in $k$ as $(2^k-1)$. In their third result they present a dynamic vertex-label distance oracle that can handle label changes in sub-linear time; its stretch is also exponential in $k$, namely $(2\cdot 3^{k-1}+1)$. We significantly improve the stretch of their constructions, reducing the dependence on $k$ from exponential to polynomial, $(4k-5)$, without any tradeoff in the other parameters. In addition, we introduce the notion of vertex-label spanners: subgraphs that preserve distances between every node $v$ and label $\lambda$. We present an efficient construction of vertex-label spanners with a stretch-size tradeoff close to optimal.
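    To make the query semantics concrete, here is a minimal exact baseline (this is not the oracle construction from the paper, and the adjacency-dict/label-dict representation is an assumption for illustration): a single Dijkstra run from $v$ that stops at the first $\lambda$-labeled vertex it settles.

    import heapq

    def nearest_labeled_distance(adj, labels, v, lam):
        """Exact distance from v to the closest vertex carrying label lam.

        adj:    dict mapping vertex -> list of (neighbor, weight) pairs
        labels: dict mapping vertex -> label
        A vertex-label distance oracle answers the same query approximately
        from a compact precomputed structure instead of searching the graph.
        """
        dist = {v: 0}
        heap = [(0, v)]
        settled = set()
        while heap:
            d, u = heapq.heappop(heap)
            if u in settled:
                continue
            settled.add(u)
            if labels.get(u) == lam:
                return d  # the first settled lam-labeled vertex is the closest one
            for w, wt in adj.get(u, ()):
                nd = d + wt
                if nd < dist.get(w, float("inf")):
                    dist[w] = nd
                    heapq.heappush(heap, (nd, w))
        return float("inf")  # no lam-labeled vertex is reachable from v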

    Efficient Dynamic Approximate Distance Oracles for Vertex-Labeled Planar Graphs

    Let $G$ be a graph where each vertex is associated with a label. A Vertex-Labeled Approximate Distance Oracle is a data structure that, given a vertex $v$ and a label $\lambda$, returns a $(1+\varepsilon)$-approximation of the distance from $v$ to the closest vertex with label $\lambda$ in $G$. Such an oracle is dynamic if it also supports label changes. In this paper we present three different dynamic approximate vertex-labeled distance oracles for planar graphs, all with polylogarithmic query and update times and nearly linear space requirements.
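    As a hedged illustration of the interface such a dynamic oracle exposes (distance query and label change), here is a naive stand-in that answers queries exactly by searching the graph; the oracles in the paper instead return $(1+\varepsilon)$-approximations from compact planar-graph structures with polylogarithmic update and query times. The class and method names are assumptions for illustration.

    import heapq

    class NaiveDynamicVertexLabelOracle:
        """Naive stand-in illustrating the dynamic oracle's two operations."""

        def __init__(self, adj, labels):
            self.adj = adj               # vertex -> list of (neighbor, weight)
            self.labels = dict(labels)   # vertex -> current label

        def change_label(self, v, new_label):
            self.labels[v] = new_label   # label update, O(1) here

        def query(self, v, lam):
            # Exact Dijkstra from v, stopping at the first lam-labeled vertex;
            # the real oracles answer this approximately and much faster.
            dist, heap, settled = {v: 0}, [(0, v)], set()
            while heap:
                d, u = heapq.heappop(heap)
                if u in settled:
                    continue
                settled.add(u)
                if self.labels.get(u) == lam:
                    return d
                for w, wt in self.adj.get(u, ()):
                    nd = d + wt
                    if nd < dist.get(w, float("inf")):
                        dist[w] = nd
                        heapq.heappush(heap, (nd, w))
            return float("inf")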

    Connectivity Oracles for Graphs Subject to Vertex Failures

    We introduce new data structures for answering connectivity queries in graphs subject to batched vertex failures. A deterministic structure processes a batch of $d \le d_\star$ failed vertices in $\tilde{O}(d^3)$ time and thereafter answers connectivity queries in $O(d)$ time. It occupies space $O(d_\star m \log n)$. We develop a randomized Monte Carlo version of our data structure with update time $\tilde{O}(d^2)$, query time $O(d)$, and space $\tilde{O}(m)$ for any failure bound $d \le n$. This is the first connectivity oracle for general graphs that can efficiently deal with an unbounded number of vertex failures. We also develop a more efficient Monte Carlo edge-failure connectivity oracle. Using space $O(n\log^2 n)$, $d$ edge failures are processed in $O(d\log d\log\log n)$ time and thereafter connectivity queries are answered in $O(\log\log n)$ time; the answers are correct w.h.p. Our data structures are based on a new decomposition theorem for an undirected graph $G=(V,E)$, which is of independent interest. It states that for any terminal set $U \subseteq V$ we can remove a set $B$ of $|U|/(s-2)$ vertices such that the remaining graph contains a Steiner forest for $U - B$ with maximum degree $s$.
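    For intuition about the supported operations, here is a naive baseline with the same interface (process a batch of failed vertices, then answer connectivity queries); it rebuilds a union-find over the surviving graph in $O(m)$ time per batch, whereas the oracles above handle a batch in time depending only on $d$ after preprocessing. The edge-list representation is an assumption for illustration.

    class NaiveVertexFailureConnectivity:
        """Naive baseline: rebuild a union-find after each batch of failures."""

        def __init__(self, n, edges):
            self.n = n
            self.edges = edges            # list of (u, v) pairs, vertices 0..n-1
            self.failed = set()
            self.parent = list(range(n))

        def _find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path halving
                x = self.parent[x]
            return x

        def process_failures(self, failed_vertices):
            # O(m) per batch; the oracles above take ~O(d^3) or ~O(d^2) instead.
            self.failed = set(failed_vertices)
            self.parent = list(range(self.n))
            for u, v in self.edges:
                if u in self.failed or v in self.failed:
                    continue              # drop edges incident to failed vertices
                ru, rv = self._find(u), self._find(v)
                if ru != rv:
                    self.parent[ru] = rv

        def connected(self, u, v):
            if u in self.failed or v in self.failed:
                return False
            return self._find(u) == self._find(v)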

    Prioritized Metric Structures and Embedding

    Metric data structures (distance oracles, distance labeling schemes, routing schemes) and low-distortion embeddings provide a powerful algorithmic methodology, which has been successfully applied to approximation algorithms \cite{llr}, online algorithms \cite{BBMN11}, distributed algorithms \cite{KKMPT12} and computing sparsifiers \cite{ST04}. However, this methodology appears to have a limitation: the worst-case performance inherently depends on the cardinality of the metric, and one cannot specify in advance which vertices/points should enjoy better service (i.e., stretch/distortion, label size/dimension) than the worst-case guarantee. In this paper we alleviate this limitation by devising a suite of {\em prioritized} metric data structures and embeddings. We show that given a priority ranking $(x_1,x_2,\ldots,x_n)$ of the graph vertices (respectively, metric points), one can devise a metric data structure (respectively, embedding) in which the stretch (resp., distortion) incurred by any pair containing a vertex $x_j$ depends on the rank $j$ of that vertex. We also show that other important parameters, such as the label size and (in some sense) the dimension, may depend only on $j$. In some of our metric data structures (resp., embeddings) we achieve both prioritized stretch (resp., distortion) and label size (resp., dimension) {\em simultaneously}. The worst-case performance of our metric data structures and embeddings is typically asymptotically no worse than that of their non-prioritized counterparts. Comment: To appear at STOC 201
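    As a small sketch of what a prioritized stretch guarantee asserts (under assumptions: the hypothetical function alpha maps a rank $j$ to the allowed stretch, and the tighter guarantee of the two endpoints applies, since a pair contains both of them):

    def satisfies_prioritized_stretch(estimate, true_distance, rank_u, rank_v, alpha):
        """Check one vertex pair against a prioritized stretch guarantee.

        alpha: hypothetical non-decreasing map from priority rank j to the
        allowed stretch; a pair containing x_j must be approximated within
        alpha(j), so the better-ranked endpoint gives the binding bound.
        """
        j = min(rank_u, rank_v)
        return estimate <= alpha(j) * true_distance

    # Hypothetical example: stretch that grows slowly with the rank.
    # satisfies_prioritized_stretch(10.5, 10, rank_u=3, rank_v=500,
    #                               alpha=lambda j: 1 + 0.2 * j.bit_length())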

    Hardness of Exact Distance Queries in Sparse Graphs Through Hub Labeling

    A distance labeling scheme is an assignment of bit-labels to the vertices of an undirected, unweighted graph such that the distance between any pair of vertices can be decoded solely from their labels. An important class of distance labeling schemes is that of hub labelings, where a node $v \in G$ stores its distances to the so-called hubs $S_v \subseteq V$, chosen so that for any $u,v \in V$ there is $w \in S_u \cap S_v$ belonging to some shortest $uv$ path. Notice that for most existing graph classes, the best known distance labeling constructions use a hub labeling scheme at some point, at least as a key building block. Our interest lies in hub labelings of sparse graphs, i.e., those with $|E(G)| = O(n)$, for which we show a lower bound of $\frac{n}{2^{O(\sqrt{\log n})}}$ on the average size of the hubsets. Additionally, we show a hub labeling construction for sparse graphs of average size $O(\frac{n}{RS(n)^{c}})$ for some $0 < c < 1$, where $RS(n)$ is the so-called Ruzsa-Szemerédi function, linked to the structure of induced matchings in dense graphs. This implies that further improving the lower bound on hub label size to $\frac{n}{2^{(\log n)^{o(1)}}}$ would require a breakthrough in the study of lower bounds on $RS(n)$, which have resisted substantial improvement in the last 70 years. For general distance labelings of sparse graphs, we show a lower bound of $\frac{1}{2^{O(\sqrt{\log n})}} \cdot SumIndex(n)$, where $SumIndex(n)$ is the communication complexity of the Sum-Index problem over $Z_n$. Our results suggest that the best achievable hub label size and distance label size in sparse graphs may be $\Theta(\frac{n}{2^{(\log n)^c}})$ for some $0 < c < 1$.
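    The decoding step that hub labels support is simple enough to state as code; this sketch assumes each label is stored as a dict from hub vertex to its distance, which is one natural encoding but not necessarily the paper's.

    def hub_distance(hubs_u, hubs_v):
        """Decode d(u, v) from the hub labels of u and v.

        hubs_u, hubs_v: dicts mapping hub vertex -> distance from u (resp. v).
        The hub property guarantees some common hub lies on a shortest u-v
        path, so the minimum over common hubs equals the exact distance.
        """
        best = float("inf")
        for w, d_uw in hubs_u.items():
            d_vw = hubs_v.get(w)
            if d_vw is not None:
                best = min(best, d_uw + d_vw)
        return best

    The label sizes discussed in the bounds above refer to the number of hubs $|S_v|$ stored per vertex, i.e., the size of each such dict.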

    Conditional Lower Bounds for Space/Time Tradeoffs

    In recent years much effort has been devoted to achieving polynomial time lower bounds for various well-known problems. A useful technique for showing such lower bounds is to prove them conditionally, based on well-studied hardness assumptions such as 3SUM, APSP, SETH, etc. This line of research helps to obtain a better understanding of the complexity inside P. A related question asks to prove conditional space lower bounds on data structures that are constructed to solve certain algorithmic tasks after an initial preprocessing stage. This question has received little attention in previous research even though it has potentially strong impact. In this paper we address this question and show that, surprisingly, many of the well-studied hard problems that are known to have conditional polynomial time lower bounds are also hard with respect to space. This hardness is shown as a tradeoff between the space consumed by the data structure and the time needed to answer queries. The tradeoff may be either smooth or admit one or more singularity points. We reveal interesting connections between different space hardness conjectures and present matching upper bounds. We also apply these hardness conjectures to both static and dynamic problems and prove their conditional space hardness. We believe that this novel framework of polynomial space conjectures can play an important role in expressing polynomial space lower bounds for many important algorithmic problems. Moreover, it seems that it can also help in achieving a better understanding of the hardness of the corresponding problems in terms of time.