
    Improved Distributed Algorithms for Exact Shortest Paths

    Computing shortest paths is one of the central problems in the theory of distributed computing. For the last few years, substantial progress has been made on the approximate single-source shortest paths problem, culminating in an algorithm of Becker et al. [DISC'17] which deterministically computes $(1+o(1))$-approximate shortest paths in $\tilde O(D+\sqrt n)$ time, where $D$ is the hop-diameter of the graph. Up to logarithmic factors, this time complexity is optimal, matching the lower bound of Elkin [STOC'04]. The question of exact shortest paths, however, saw no algorithmic progress for decades, until the recent breakthrough of Elkin [STOC'17], which established a sublinear-time algorithm for exact single-source shortest paths on undirected graphs. Shortly after, Huang et al. [FOCS'17] provided improved algorithms for the exact all-pairs shortest paths problem on directed graphs. In this paper, we present a new single-source shortest path algorithm with complexity $\tilde O(n^{3/4}D^{1/4})$. For polylogarithmic $D$, this improves on Elkin's $\tilde{O}(n^{5/6})$ bound and gets closer to the $\tilde{\Omega}(n^{1/2})$ lower bound of Elkin [STOC'04]. For larger values of $D$, we present an improved variant of our algorithm which achieves complexity $\tilde{O}\left(n^{3/4+o(1)} + \min\{n^{3/4}D^{1/6}, n^{6/7}\} + D\right)$, and thus compares favorably with Elkin's bound of $\tilde{O}(n^{5/6} + n^{2/3}D^{1/3} + D)$ in essentially the entire range of parameters. This algorithm also provides a qualitative improvement, because it works for the more challenging case of directed graphs (i.e., graphs where the two directions of an edge can have different weights), constituting the first sublinear-time algorithm for directed graphs. Our algorithm also extends to the case of exact $\kappa$-source shortest paths...
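
    As a quick sanity check (our own arithmetic, not taken from the abstract), one can compare the two bounds at a concrete diameter, say $D = n^{1/4}$, where the paper's main bound already beats Elkin's:

```latex
% Worked comparison at D = n^{1/4} (illustrative only):
\tilde O\!\left(n^{3/4} D^{1/4}\right)
  = \tilde O\!\left(n^{3/4 + 1/16}\right)
  = \tilde O\!\left(n^{13/16}\right)
\qquad\text{versus}\qquad
\tilde O\!\left(n^{5/6} + n^{2/3} D^{1/3} + D\right)
  = \tilde O\!\left(n^{5/6}\right),
% since n^{2/3} D^{1/3} = n^{3/4} < n^{5/6}, and 13/16 = 0.8125 < 5/6 \approx 0.833.
```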

    Distributed algorithms for edge dominating sets

    An edge dominating set for a graph G is a set D of edges such that each edge of G is in D or adjacent to at least one edge in D. This work studies deterministic distributed approximation algorithms for finding minimum-size edge dominating sets. The focus is on anonymous port-numbered networks: there are no unique identifiers, but a node of degree d can refer to its neighbours by integers 1, 2, ..., d. The present work shows that in the port-numbering model, edge dominating sets can be approximated as follows: in d-regular graphs, to within 4 − 6/(d + 1) for an odd d and to within 4 − 2/d for an even d; and in graphs with maximum degree Δ, to within 4 − 2/(Δ − 1) for an odd Δ and to within 4 − 2/Δ for an even Δ. These approximation ratios are tight for all values of d and Δ: there are matching lower bounds.
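
    For intuition on why constant-factor ratios are the natural target here, a minimal sequential baseline (our illustration, not the distributed, anonymous algorithm of this work) is sketched below: any maximal matching is an edge dominating set of size at most twice the optimum, and it is the restrictions of the port-numbering model that push the achievable ratio up towards 4.

```python
# Greedy maximal matching as a centralized 2-approximation for minimum
# edge dominating set: every edge of an optimal solution can "block" at
# most two of the matched edges, hence the factor 2.

def greedy_edge_dominating_set(edges):
    """Return a maximal matching, which is also an edge dominating set."""
    matched = set()   # vertices already touched by a chosen edge
    solution = []
    for u, v in edges:
        if u not in matched and v not in matched:
            solution.append((u, v))
            matched.add(u)
            matched.add(v)
    return solution

# Example: on the path a-b-c-d, the greedy pass picks (a, b) and (c, d).
print(greedy_edge_dominating_set([("a", "b"), ("b", "c"), ("c", "d")]))
```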

    Nearly optimal robust secret sharing

    We prove that a known approach to improving Shamir's celebrated secret sharing scheme, i.e., adding an information-theoretic authentication tag to the secret, can make it robust for n parties against any collusion of size δn, for any constant δ ∈ (0, 1/2). This result holds in the so-called "non-rushing" model, in which the n shares are submitted simultaneously for reconstruction. We thus finally obtain a simple, fully explicit, and robust secret sharing scheme in this model that is essentially optimal in all parameters, including the share size, which is k(1+o(1)) + O(κ), where k is the secret length and κ is the security parameter. Like Shamir's scheme, in this modified scheme any set of more than δn honest parties can efficiently recover the secret. Using algebraic geometry codes instead of Reed-Solomon codes, the share length can be decreased to a constant (depending only on δ) while the number of shares n can grow independently. In this case, when n is large enough, the scheme satisfies the "threshold" requirement in an approximate sense, i.e., any set of δn(1 + ρ) honest parties, for arbitrarily small ρ > 0, can efficiently reconstruct the secret.
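
    To make the "authentication tag" idea concrete, here is a toy sketch (our own illustration with arbitrary parameters, not the paper's exact construction): the dealer Shamir-shares the secret together with a one-time MAC key and tag, and reconstruction rejects if the recovered tag does not verify.

```python
import random

P = 2**127 - 1  # prime modulus; all arithmetic is over GF(P)

def shamir_share(value, t, n):
    """Split `value` into n shares, any t of which reconstruct it."""
    coeffs = [value] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def shamir_reconstruct(points):
    """Lagrange interpolation at x = 0."""
    secret = 0
    for j, (xj, yj) in enumerate(points):
        num = den = 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

# Dealer: share the secret s plus a one-time MAC key (a, b) and tag a*s + b.
# (The real robust scheme distributes keys/tags so cheaters cannot forge
# a consistent view; this toy version only shows the verification step.)
s, t, n = 42, 3, 7
a, b = random.randrange(1, P), random.randrange(P)
pieces = [shamir_share(v, t, n) for v in (s, a, b, (a * s + b) % P)]

# Reconstruction from t shares: recover all four values, then check the tag.
s_, a_, b_, tag_ = (shamir_reconstruct(piece[:t]) for piece in pieces)
assert (a_ * s_ + b_) % P == tag_, "reconstruction detected tampering"
print("reconstructed secret:", s_)
```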

    FALKON: An Optimal Large Scale Kernel Method

    Kernel methods provide a principled way to perform nonlinear, nonparametric learning. They rely on solid functional-analytic foundations and enjoy optimal statistical properties. However, at least in their basic form, they have limited applicability in large-scale scenarios because of stringent computational requirements in terms of time and especially memory. In this paper, we take a substantial step in scaling up kernel methods, proposing FALKON, a novel algorithm that can efficiently process millions of points. FALKON is derived by combining several algorithmic principles, namely stochastic subsampling, iterative solvers, and preconditioning. Our theoretical analysis shows that optimal statistical accuracy is achieved while requiring essentially $O(n)$ memory and $O(n\sqrt{n})$ time. An extensive experimental analysis on large-scale datasets shows that, even on a single machine, FALKON outperforms previous state-of-the-art solutions, which exploit parallel/distributed architectures.
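
    A minimal sketch of two of the ingredients named above, stochastic (Nystrom) subsampling plus an iterative solver, is given below; it deliberately omits FALKON's preconditioner and all tuning, so it illustrates the structure rather than reproducing the algorithm (function and parameter names are ours).

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def gaussian_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def nystrom_krr(X, y, m=100, lam=1e-3, sigma=1.0):
    """Kernel ridge regression restricted to m random Nystrom centers."""
    rng = np.random.default_rng(0)
    centers = X[rng.choice(len(X), size=m, replace=False)]
    Knm = gaussian_kernel(X, centers, sigma)          # n x m
    Kmm = gaussian_kernel(centers, centers, sigma)    # m x m
    # Normal equations: (Knm^T Knm + lam * n * Kmm) alpha = Knm^T y,
    # solved iteratively with conjugate gradient instead of a direct solve.
    def matvec(v):
        return Knm.T @ (Knm @ v) + lam * len(X) * (Kmm @ v)
    A = LinearOperator((m, m), matvec=matvec)
    alpha, _ = cg(A, Knm.T @ y, maxiter=200)
    return centers, alpha, sigma

# Usage: fit on toy data, predict with f(x) = sum_j alpha_j k(x, c_j).
X = np.random.default_rng(1).normal(size=(500, 3))
y = np.sin(X[:, 0])
centers, alpha, sigma = nystrom_krr(X, y)
pred = gaussian_kernel(X, centers, sigma) @ alpha
print("train MSE:", float(((pred - y) ** 2).mean()))
```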

    Parameterized Distributed Algorithms

    In this work, we initiate a thorough study of graph optimization problems parameterized by the output size in the distributed setting. In such a problem, an algorithm decides whether a solution of size bounded by $k$ exists and, if so, finds one. We study fundamental problems, including Minimum Vertex Cover (MVC), Maximum Independent Set (MaxIS), Maximum Matching (MaxM), and many others, in both the LOCAL and CONGEST distributed computation models. We present lower bounds for the round complexity of solving parameterized problems in both models, together with optimal and near-optimal upper bounds. Our results extend beyond the scope of parameterized problems. We show that any LOCAL $(1+\epsilon)$-approximation algorithm for the above problems must take $\Omega(\epsilon^{-1})$ rounds. Joined with the $(\epsilon^{-1}\log n)^{O(1)}$-round algorithm of [Ghaffari et al., 2017] and the $\Omega(\sqrt{\log n/\log\log n})$ lower bound of [Kuhn et al., 2016], the lower bounds match the upper bound up to polynomial factors in both parameters. We also show that our parameterized approach reduces the runtime of exact and approximate CONGEST algorithms for MVC and MaxM if the optimal solution is small, without knowing its size beforehand. Finally, we propose the first $o(n^2)$-round CONGEST algorithms that approximate MVC to within a factor strictly smaller than 2.
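
    As a point of reference for what "parameterized by the output size" buys, here is the classical sequential bounded-search-tree routine for MVC (our illustration; the paper's contribution is achieving this kind of parameterized behavior in LOCAL/CONGEST): branching on the two endpoints of an uncovered edge decides size-$k$ vertex cover in $O(2^k m)$ time.

```python
def vertex_cover_at_most_k(edges, k):
    """Return a vertex cover of size <= k for a simple graph, else None."""
    if not edges:
        return set()
    if k == 0:
        return None
    u, v = edges[0]          # any cover must contain u or v: branch on both
    for pick in (u, v):
        rest = [e for e in edges if pick not in e]
        sub = vertex_cover_at_most_k(rest, k - 1)
        if sub is not None:
            return sub | {pick}
    return None

# Example: a 4-cycle has a vertex cover of size 2 but not of size 1.
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(vertex_cover_at_most_k(C4, 2))  # e.g. {0, 2}
print(vertex_cover_at_most_k(C4, 1))  # None
```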

    Efficient Distributed Online Prediction and Stochastic Optimization with Approximate Distributed Averaging

    We study distributed methods for online prediction and stochastic optimization. Our approach is iterative: in each round, nodes first perform local computations and then communicate in order to aggregate information and synchronize their decision variables. Synchronization is accomplished through the use of a distributed averaging protocol. When an exact distributed averaging protocol is used, it is known that the optimal regret bound of $\mathcal{O}(\sqrt{m})$ can be achieved using the distributed mini-batch algorithm of Dekel et al. (2012), where $m$ is the total number of samples processed across the network. We focus on methods using approximate distributed averaging protocols and show that the optimal regret bound can also be achieved in this setting. In particular, we propose a gossip-based optimization method which achieves the optimal regret bound. The amount of communication required depends on the network topology through the second largest eigenvalue of the transition matrix of a random walk on the network. In the setting of stochastic optimization, the proposed gossip-based approach achieves nearly-linear scaling: the optimization error is guaranteed to be no more than $\epsilon$ after $\mathcal{O}(\frac{1}{n\epsilon^2})$ rounds, each of which involves $\mathcal{O}(\log n)$ gossip iterations, when nodes communicate over a well-connected graph. This scaling law is also observed in numerical experiments on a cluster.
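
    The averaging primitive is easy to simulate; the sketch below (our own toy setup, not the paper's full method) runs synchronous gossip $x \leftarrow Wx$ on a ring with Metropolis weights, where the deviation from the true average contracts geometrically at a rate set by the second largest eigenvalue of $W$.

```python
import numpy as np

def metropolis_weights(adj):
    """Symmetric, doubly stochastic gossip matrix from an adjacency matrix."""
    n = len(adj)
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()   # self-weight keeps rows summing to 1
    return W

# Ring of 8 nodes.
n = 8
adj = np.zeros((n, n), dtype=int)
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1

W = metropolis_weights(adj)
x = np.random.default_rng(0).normal(size=n)
target = x.mean()                    # exact average the nodes approximate
for _ in range(50):
    x = W @ x                        # one synchronous gossip iteration
print("max deviation from the true average:", np.abs(x - target).max())
```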

    Minimax PAC bounds on the sample complexity of reinforcement learning with a generative model

    We consider the problem of learning the optimal action-value function in discounted-reward Markov decision processes (MDPs). We prove new PAC bounds on the sample complexity of two well-known model-based reinforcement learning (RL) algorithms in the presence of a generative model of the MDP: value iteration and policy iteration. The first result indicates that for an MDP with $N$ state-action pairs and discount factor $\gamma \in [0, 1)$, only $O(N \log(N/\delta) / [(1-\gamma)^3 \epsilon^2])$ state-transition samples are required to find an $\epsilon$-optimal estimate of the action-value function with probability (w.p.) $1-\delta$. Further, we prove that, for small values of $\epsilon$, an order of $O(N \log(N/\delta) / [(1-\gamma)^3 \epsilon^2])$ samples is required to find an $\epsilon$-optimal policy w.p. $1-\delta$. We also prove a matching lower bound of $\Omega(N \log(N/\delta) / [(1-\gamma)^3 \epsilon^2])$ on the sample complexity of estimating the optimal action-value function. To the best of our knowledge, this is the first minimax result on the sample complexity of RL: the upper bound matches the lower bound in terms of $N$, $\epsilon$, $\delta$, and $1/(1-\gamma)$ up to a constant factor. Also, both our lower bound and upper bound improve on the state of the art in terms of their dependence on $1/(1-\gamma)$.
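
    The model-based scheme the bound refers to is simple to state in code; the sketch below (our illustration, with arbitrary toy parameters) draws $m$ next-state samples per state-action pair from a generative model, builds the empirical MDP, and runs Q-value iteration on it.

```python
import numpy as np

def estimate_q(sample_next_state, reward, S, A, gamma=0.9, m=200, iters=500):
    """Q-value iteration on an empirical model built from m samples per (s, a)."""
    rng = np.random.default_rng(0)
    # Empirical transition probabilities P_hat[s, a, s'].
    P_hat = np.zeros((S, A, S))
    for s in range(S):
        for a in range(A):
            for _ in range(m):
                P_hat[s, a, sample_next_state(s, a, rng)] += 1.0 / m
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = Q.max(axis=1)                  # greedy state values
        Q = reward + gamma * P_hat @ V     # Bellman optimality backup
    return Q

# Toy 2-state, 2-action MDP standing in for the generative model.
def sample_next_state(s, a, rng):
    return rng.choice(2, p=[0.8, 0.2] if a == 0 else [0.3, 0.7])

reward = np.array([[1.0, 0.0], [0.0, 1.0]])  # reward[s, a]
print(estimate_q(sample_next_state, reward, S=2, A=2))
```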