
    A Simple Parallel and Distributed Sampling Technique: Local Glauber Dynamics

    Sampling constitutes an important tool in a variety of areas: from machine learning and combinatorial optimization to computational physics and biology. A central class of sampling algorithms is the Markov Chain Monte Carlo method, based on the construction of a Markov chain with the desired sampling distribution as its stationary distribution. Many of the traditional Markov chains, such as the Glauber dynamics, do not scale well with increasing dimension. To address this shortcoming, we propose a simple local update rule based on the Glauber dynamics that leads to efficient parallel and distributed algorithms for sampling from Gibbs distributions. Concretely, we present a Markov chain that mixes in O(log n) rounds when Dobrushin's condition for the Gibbs distribution is satisfied. This improves over the LubyGlauber algorithm by Feng, Sun, and Yin [PODC'17], which needs O(Delta log n) rounds, and their LocalMetropolis algorithm, which converges in O(log n) rounds but requires a considerably stronger mixing condition. Here, n denotes the number of nodes in the graphical model inducing the Gibbs distribution, and Delta its maximum degree. In particular, our method can sample a uniform proper coloring with alpha * Delta colors in O(log n) rounds for any alpha > 2, which almost matches the threshold of the sequential Glauber dynamics and improves on the alpha > 2 + sqrt(2) threshold of Feng et al.
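    To make the baseline concrete, here is a minimal sketch of the classic sequential Glauber dynamics for proper colorings that the paper improves on: repeatedly pick a uniformly random vertex and resample its color uniformly from the colors unused by its neighbors. This is the sequential chain, not the paper's parallel local update rule; the function name and interface are illustrative.

    ```python
    import random

    def glauber_coloring(adj, q, steps, seed=0):
        """Sequential Glauber dynamics for sampling a proper q-coloring.

        adj: dict mapping each vertex to a list of its neighbours.
        Requires q >= max degree + 1 so the greedy start and every
        resampling step always have a free colour available.
        """
        rng = random.Random(seed)
        colors = {}
        for v in adj:                      # greedy initial proper coloring
            used = {colors[u] for u in adj[v] if u in colors}
            colors[v] = next(c for c in range(q) if c not in used)
        verts = list(adj)
        for _ in range(steps):
            v = rng.choice(verts)          # pick a uniformly random vertex
            used = {colors[u] for u in adj[v]}
            free = [c for c in range(q) if c not in used]
            colors[v] = rng.choice(free)   # resample from unused colours
        return colors
    ```

    The parallel variants in the abstract differ in which vertices update simultaneously and how conflicts between concurrent updates are resolved; the per-vertex update itself is the same local resampling step shown above.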

    PROPAGATE: a seed propagation framework to compute Distance-based metrics on Very Large Graphs

    We propose PROPAGATE, a fast approximation framework to estimate distance-based metrics on very large graphs such as the (effective) diameter, the (effective) radius, or the average distance within a small error. The framework assigns seeds to nodes and propagates them in a BFS-like fashion, computing the neighborhood sets until we obtain either the whole vertex set (the diameter) or a given percentage (the effective diameter). At each iteration, we derive compressed Boolean representations of the neighborhood sets discovered so far. The PROPAGATE framework yields two algorithms: PROPAGATE-P, which propagates all the s seeds in parallel, and PROPAGATE-S, which propagates the seeds sequentially. For each node, the compressed representation of the PROPAGATE-P algorithm requires s bits, while that of PROPAGATE-S requires only 1 bit. Both algorithms compute the average distance, the effective diameter, the diameter, and the connectivity rate within a small error with high probability: for any ε > 0 and using s = Θ(log n / ε²) sample nodes, the error for the average distance is bounded by ξ = εΔ/α, the errors for the effective diameter and the diameter are bounded by ξ = ε/α, and the error for the connectivity rate is bounded by ε, where Δ is the diameter and α is a measure of the connectivity of the graph. The time complexity is O(mΔ log n / ε²), where m is the number of edges of the graph. The experimental results show that the PROPAGATE framework improves the current state of the art both in accuracy and speed. Moreover, we experimentally show that PROPAGATE-S is also very efficient for solving the All Pairs Shortest Path problem in very large graphs.
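    The parallel variant can be sketched as follows: sample s seed nodes, give each a bit in a per-node mask, and propagate the masks with bitwise OR in BFS rounds; the round at which seed i's bit first reaches node v equals dist(v, seed_i), from which the average distance and diameter are estimated. This is a simplified illustration assuming a connected undirected graph; the function name and interface are not from the paper.

    ```python
    import random

    def propagate_p(adj, s, seed=0):
        """Sketch of PROPAGATE-P style seed propagation.

        adj: dict mapping each vertex to a list of its neighbours.
        Each node keeps an s-bit mask; round t sets seed i's bit in
        node v exactly when dist(v, seed_i) == t.  Counting the newly
        set bits per round yields distance estimates.
        """
        rng = random.Random(seed)
        seeds = rng.sample(list(adj), s)
        mask = {v: 0 for v in adj}
        for i, u in enumerate(seeds):
            mask[u] |= 1 << i                     # seed i starts at node u
        total = len(adj) * s                      # (node, seed) pairs to cover
        pairs_done = sum(bin(m).count("1") for m in mask.values())
        dist_sum, t = 0, 0
        while pairs_done < total:
            t += 1
            new = {v: mask[v] for v in adj}
            for v in adj:                         # bitwise-OR BFS step
                for u in adj[v]:
                    new[v] |= mask[u]
            newly = sum(bin(new[v] & ~mask[v]).count("1") for v in adj)
            dist_sum += t * newly                 # these pairs are at distance t
            pairs_done += newly
            mask = new
        avg_distance = dist_sum / total           # estimate of the average distance
        diameter_est = t                          # last round that covered a new pair
        return avg_distance, diameter_est
    ```

    The paper's compressed Boolean representations serve exactly the role of the integer bit masks here: one OR per edge per round, so each round costs O(m) word operations on s-bit words.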

    On the Hardware Implementation of Triangle Traversal Algorithms for Graphics Processing

    Current GPU architectures provide impressive processing rates in graphical applications because of their specialized graphics pipeline. However, little attention has been paid to the analysis and study of different hardware architectures to implement specific pipeline stages. In this work we have identified one of the key stages in the graphics pipeline, the triangle traversal procedure, and we have implemented three different algorithms in hardware: bounding-box, zig-zag and Hilbert curve-based. The experimental results show that important area-performance trade-offs can be met when implementing key image processing algorithms in hardware.
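    Of the three traversal orders, bounding-box is the simplest to sketch in software: scan every pixel in the triangle's axis-aligned bounding box and keep those whose centre passes all three edge-function tests. A minimal sketch, assuming counter-clockwise winding; a hardware rasteriser would evaluate the edge functions incrementally along the scan rather than from scratch per pixel:

    ```python
    def edge(ax, ay, bx, by, px, py):
        """Signed edge function: >= 0 when p lies on or left of edge a->b."""
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

    def traverse_bbox(v0, v1, v2):
        """Bounding-box triangle traversal: test every pixel centre in the
        axis-aligned bounding box against the three edge functions."""
        xs = [v0[0], v1[0], v2[0]]
        ys = [v0[1], v1[1], v2[1]]
        covered = []
        for y in range(int(min(ys)), int(max(ys)) + 1):
            for x in range(int(min(xs)), int(max(xs)) + 1):
                px, py = x + 0.5, y + 0.5          # sample at pixel centre
                if (edge(*v0, *v1, px, py) >= 0 and
                        edge(*v1, *v2, px, py) >= 0 and
                        edge(*v2, *v0, px, py) >= 0):
                    covered.append((x, y))
        return covered
    ```

    The zig-zag and Hilbert-curve variants visit the same pixels in a different order to improve memory locality; the area-performance trade-off the paper studies comes from the extra control logic those orders require in hardware.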

    Distributed Computing with Channel Noise

    A group of n users want to run a distributed protocol π over a network where communication occurs via private point-to-point channels. Unfortunately, an adversary, who knows π, is able to maliciously flip bits on the channels. Can we efficiently simulate π in the presence of such an adversary? We show that this is possible, even when L, the number of bits sent in π, and T, the number of bits flipped by the adversary, are not known in advance. In particular, we show how to create a robust version of π that 1) fails with probability at most δ, for any δ > 0; and 2) sends Õ(L+T) bits, where the Õ notation hides a log(nL/δ) term multiplying L. Additionally, we show how to improve this result when the average message size α is not constant. In particular, we give an algorithm that sends O(L(1 + (1/α) log(nL/δ)) + T) bits. This algorithm is adaptive in that it does not require a priori knowledge of α. We note that if α is Ω(log(nL/δ)), then this improved algorithm sends only O(L+T) bits, and is therefore within a constant factor of optimal.
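    The basic mechanism behind such results is detect-and-retransmit: attach a short fingerprint to each transmission so the receiver can tell when the adversary has tampered, and pay extra communication only in the rounds the adversary actually corrupts. The toy simulation below illustrates only this idea, not the paper's protocol; all names and the fingerprint length are illustrative assumptions.

    ```python
    import hashlib
    import random

    def digest(bits):
        """Short fingerprint of a bit string (stand-in for the hashing
        used to detect corruption; 4 bytes chosen arbitrarily here)."""
        return hashlib.sha256(bytes(bits)).digest()[:4]

    def simulate(message, corrupt_first_k, seed=0):
        """Toy detect-and-retransmit loop over an adversarial channel.

        Each round the sender transmits the message plus a fingerprint;
        the receiver accepts only when the fingerprint matches.  The
        adversary flips one bit in each of the first `corrupt_first_k`
        rounds, so the total bits sent grow with the corruption budget,
        loosely mirroring an L + T style cost.
        """
        rng = random.Random(seed)
        bits_sent = 0
        round_no = 0
        while True:
            payload = list(message)
            tag = digest(payload)
            round_no += 1
            bits_sent += 8 * (len(payload) + len(tag))
            if round_no <= corrupt_first_k:          # adversary flips a bit
                payload[rng.randrange(len(payload))] ^= 1
            if digest(payload) == tag:               # receiver verifies
                return payload, bits_sent
    ```

    The paper's contribution is much stronger than this sketch: it handles interactive protocols, keeps the overhead per bit of L down to a log(nL/δ) factor (or constant for large α), and works without knowing L or T in advance.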