
    Fast Partial Distance Estimation and Applications

    We study approximate distributed solutions to the weighted {\it all-pairs-shortest-paths} (APSP) problem in the CONGEST model. We obtain the following results. 1. A deterministic $(1+o(1))$-approximation to APSP in $\tilde{O}(n)$ rounds. This improves over the best previously known algorithm, both by derandomizing it and by reducing the running time by a $\Theta(\log n)$ factor. In many cases, routing schemes involve relabeling, i.e., assigning new names to nodes, and require that these names be used in distance and routing queries. It is known that relabeling is necessary to achieve running times of $o(n/\log n)$. In the relabeling model, we obtain the following results. 2. A randomized $O(k)$-approximation to APSP, for any integer $k>1$, running in $\tilde{O}(n^{1/2+1/k}+D)$ rounds, where $D$ is the hop diameter of the network. This algorithm simplifies the best previously known result and reduces its approximation ratio from $O(k\log k)$ to $O(k)$. Moreover, the new algorithm uses labels of asymptotically optimal size, namely $O(\log n)$ bits. 3. A randomized $O(k)$-approximation to APSP, for any integer $k>1$, running in time $\tilde{O}((nD)^{1/2}\cdot n^{1/k}+D)$ and producing {\it compact routing tables} of size $\tilde{O}(n^{1/k})$. The node labels consist of $O(k\log n)$ bits. This improves on the approximation ratio of $\Theta(k^2)$ for tables of that size achieved by the best previously known algorithm, which terminates faster, in $\tilde{O}(n^{1/2+1/k}+D)$ rounds.
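
    As a point of reference for the round-based CONGEST model mentioned above, the sketch below simulates the classic synchronous Bellman-Ford computation on a single machine: in each round every node sends its current distance estimate to its neighbors. This is only an illustration of the synchronous model, not the paper's $(1+o(1))$-approximate APSP algorithm; the graph, node names, and weights are made up for the example.

        # Toy single-machine simulation of synchronous (CONGEST-style) Bellman-Ford.
        # Illustration of the round-based model only; not the paper's algorithm.
        import math

        def simulate_bellman_ford(adj, source):
            """adj: {node: [(neighbor, weight), ...]}; returns exact distances from source.

            Each outer iteration models one synchronous round in which every node
            sends its current distance estimate to all of its neighbors.
            """
            dist = {v: math.inf for v in adj}
            dist[source] = 0.0
            for _ in range(len(adj) - 1):          # n-1 rounds always suffice
                # messages sent this round: (receiver, sender's estimate + edge weight)
                msgs = [(u, dist[v] + w) for v in adj if dist[v] < math.inf
                        for (u, w) in adj[v]]
                updated = False
                for u, d in msgs:
                    if d < dist[u]:
                        dist[u], updated = d, True
                if not updated:                    # no estimate changed: converged
                    break
            return dist

        # hypothetical weighted graph, given as symmetric adjacency lists
        adj = {
            "a": [("b", 1.0), ("c", 4.0)],
            "b": [("a", 1.0), ("c", 1.5)],
            "c": [("a", 4.0), ("b", 1.5)],
        }
        print(simulate_bellman_ford(adj, "a"))     # {'a': 0.0, 'b': 1.0, 'c': 2.5}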

    On Efficient Distributed Construction of Near Optimal Routing Schemes

    Given a distributed network represented by a weighted undirected graph $G=(V,E)$ on $n$ vertices, and a parameter $k$, we devise a distributed algorithm that computes a routing scheme in $(n^{1/2+1/k}+D)\cdot n^{o(1)}$ rounds, where $D$ is the hop-diameter of the network. The running time matches the lower bound of $\tilde{\Omega}(n^{1/2}+D)$ rounds (which holds for any scheme with polynomial stretch), up to lower-order terms. The routing tables are of size $\tilde{O}(n^{1/k})$, the labels are of size $O(k\log^2 n)$, and every packet is routed on a path suffering stretch at most $4k-5+o(1)$. Our construction nearly matches the state-of-the-art for routing schemes built in a centralized sequential manner. The previous best algorithms for building routing tables in a distributed small-messages model were by \cite[STOC 2013]{LP13} and \cite[PODC 2015]{LP15}. The former has similar properties but suffers from substantially larger routing tables of size $O(n^{1/2+1/k})$, while the latter has sub-optimal running time of $\tilde{O}(\min\{(nD)^{1/2}\cdot n^{1/k},\, n^{2/3+2/(3k)}+D\})$.
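
    As a rough illustration of the trade-off stated above, the snippet below tabulates the per-node table size $n^{1/k}$, the stretch bound $4k-5$, and the $n^{1/2+1/k}$ term of the construction time for a hypothetical network of $n = 10^6$ nodes. Polylogarithmic and $n^{o(1)}$ factors, the $o(1)$ term in the stretch, and the hop-diameter term $D$ are ignored, so the numbers are back-of-the-envelope only.

        # Back-of-the-envelope view of the size/stretch/time trade-off from the abstract,
        # ignoring polylog and n^{o(1)} factors and the hop-diameter term D.
        n = 10**6                                  # hypothetical network size
        for k in range(2, 6):
            table_size = n ** (1.0 / k)            # ~ n^{1/k} routing-table entries per node
            stretch = 4 * k - 5                    # worst-case multiplicative stretch
            rounds = n ** (0.5 + 1.0 / k)          # ~ n^{1/2 + 1/k} construction rounds
            print(f"k={k}: table ~ {table_size:,.0f} entries, "
                  f"stretch <= {stretch}, rounds ~ {rounds:,.0f}")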

    Parallel Metric Tree Embedding based on an Algebraic View on Moore-Bellman-Ford

    A \emph{metric tree embedding} of expected \emph{stretch} $\alpha \geq 1$ maps a weighted $n$-node graph $G = (V, E, \omega)$ to a weighted tree $T = (V_T, E_T, \omega_T)$ with $V \subseteq V_T$ such that, for all $v, w \in V$, $\operatorname{dist}(v, w, G) \leq \operatorname{dist}(v, w, T)$ and $\operatorname{E}[\operatorname{dist}(v, w, T)] \leq \alpha \operatorname{dist}(v, w, G)$. Such embeddings are highly useful for designing fast approximation algorithms, as many hard problems are easy to solve on tree instances. However, to date the best parallel $(\operatorname{polylog} n)$-depth algorithm that achieves an asymptotically optimal expected stretch of $\alpha \in O(\log n)$ requires $\Omega(n^2)$ work and a metric as input. In this paper, we show how to achieve the same guarantees using $\operatorname{polylog} n$ depth and $\tilde{O}(m^{1+\epsilon})$ work, where $m = |E|$ and $\epsilon > 0$ is an arbitrarily small constant. Moreover, one may further reduce the work to $\tilde{O}(m + n^{1+\epsilon})$ at the expense of increasing the expected stretch to $O(\epsilon^{-1} \log n)$. Our main tool in deriving these parallel algorithms is an algebraic characterization of a generalization of the classic Moore-Bellman-Ford algorithm. We consider this framework, which subsumes a variety of previous "Moore-Bellman-Ford-like" algorithms, to be of independent interest and discuss it in depth. In our tree embedding algorithm, we leverage it for providing efficient query access to an approximate metric that allows sampling the tree using $\operatorname{polylog} n$ depth and $\tilde{O}(m)$ work. We illustrate the generality and versatility of our techniques by various examples and a number of additional results.
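
    The classic Moore-Bellman-Ford iteration can be read as repeated matrix-vector multiplication over the $(\min,+)$ semiring, which is the view that the paper's algebraic framework generalizes. The sketch below shows only that classic special case, with a made-up weight matrix; it is not the generalized framework or the tree-embedding algorithm itself.

        # Minimal sketch: Moore-Bellman-Ford as iterated (min, +) matrix-vector products.
        # Classic special case only; the paper generalizes this algebraic view.
        import math

        def min_plus_matvec(W, x):
            """(W (x) x)[v] = min over u of W[v][u] + x[u], in the (min, +) semiring."""
            n = len(W)
            return [min(W[v][u] + x[u] for u in range(n)) for v in range(n)]

        def mbf_distances(W, source, hops):
            """Distances from `source` using at most `hops` edges."""
            n = len(W)
            x = [math.inf] * n
            x[source] = 0.0                        # the semiring "unit" at the source
            for _ in range(hops):
                x = min_plus_matvec(W, x)          # one Bellman-Ford relaxation round
            return x

        # hypothetical symmetric weight matrix; 0 on the diagonal (staying put is free)
        W = [[0.0, 1.0, 4.0],
             [1.0, 0.0, 1.5],
             [4.0, 1.5, 0.0]]
        print(mbf_distances(W, source=0, hops=2))  # [0.0, 1.0, 2.5]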