
    On Pseudocodewords and Improved Union Bound of Linear Programming Decoding of HDPC Codes

    In this paper, we present an improved union bound on the Linear Programming (LP) decoding performance of binary linear codes transmitted over an additive white Gaussian noise channel. The bounding technique is based on a second-order Bonferroni-type inequality in probability theory, and it is minimized by Prim's minimum spanning tree algorithm. The bound calculation needs the fundamental cone generators of a given parity-check matrix rather than only their weight spectrum, but involves relatively low computational complexity. It is targeted at high-density parity-check codes, where the number of generators is extremely large and the generators are spread densely in Euclidean space. We explore the generator density and compare different parity-check matrix representations; this density determines how much the proposed bound improves over the conventional LP union bound. The paper also presents a complete pseudo-weight distribution of the fundamental cone generators for the BCH[31,21,5] code.
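    As a minimal sketch of the kind of computation involved (not the paper's implementation), the second-order Bonferroni-type bound of Hunter states that $P(A_1 \cup \dots \cup A_n) \leq \sum_i P(A_i) - \sum_{(i,j)\in T} P(A_i \cap A_j)$ for any spanning tree $T$ over the events, and the tightest such bound is obtained by a maximum-weight spanning tree on the pairwise joint probabilities, which Prim's algorithm can find. In the paper's setting the events would correspond to error regions of the fundamental cone generators; in the sketch below the probabilities are simply supplied as arrays.

```python
# Hedged sketch: second-order Bonferroni/Hunter-type union bound,
# tightened by a maximum-weight spanning tree found with Prim's algorithm.
import heapq

def hunter_bound(p, q):
    """p[i] = P(A_i); q[i][j] = P(A_i & A_j), symmetric. Returns the bound."""
    n = len(p)
    if n == 0:
        return 0.0
    in_tree = [False] * n
    in_tree[0] = True
    # Max-heap via negated weights: repeatedly attach the event joined to the
    # current tree by the largest pairwise probability (Prim's algorithm).
    heap = [(-q[0][j], j) for j in range(1, n)]
    heapq.heapify(heap)
    tree_weight = 0.0
    added = 1
    while heap and added < n:
        w, j = heapq.heappop(heap)
        if in_tree[j]:
            continue
        in_tree[j] = True
        tree_weight += -w
        added += 1
        for k in range(n):
            if not in_tree[k]:
                heapq.heappush(heap, (-q[j][k], k))
    return sum(p) - tree_weight
```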

    Round Compression for Parallel Matching Algorithms

    For over a decade now we have been witnessing the success of {\em massive parallel computation} (MPC) frameworks, such as MapReduce, Hadoop, Dryad, or Spark. One of the reasons for their success is that these frameworks accurately capture the nature of large-scale computation. In particular, compared to the classic distributed algorithms or PRAM models, these frameworks allow for much more local computation. The fundamental question that arises in this context is: can we leverage this additional power to obtain even faster parallel algorithms? A prominent example here is the {\em maximum matching} problem, one of the most classic graph problems. It is well known that in the PRAM model one can compute a 2-approximate maximum matching in $O(\log n)$ rounds. However, the exact complexity of this problem in the MPC framework is still far from understood. Lattanzi et al. showed that if each machine has $n^{1+\Omega(1)}$ memory, this problem can also be solved 2-approximately in a constant number of rounds. These techniques, as well as the approaches developed in the follow-up work, seem to get stuck in a fundamental way at roughly $O(\log n)$ rounds once we enter the near-linear memory regime. It is thus entirely possible that in this regime, which captures in particular the case of sparse graph computations, the best MPC round complexity matches what one can already get in the PRAM model, without the need to take advantage of the extra local computation power. In this paper, we finally refute that perplexing possibility. That is, we break the above $O(\log n)$ round complexity bound even in the case of {\em slightly sublinear} memory per machine. In fact, our improvement here is {\em almost exponential}: we are able to deliver a $(2+\epsilon)$-approximation to maximum matching, for any fixed constant $\epsilon>0$, in $O((\log \log n)^2)$ rounds.
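    The 2-approximation baseline the abstract refers to rests on a classical fact: any maximal matching is a 2-approximate maximum matching, since every edge of an optimum matching shares an endpoint with some edge of the maximal one. The sketch below is a plain sequential illustration of that fact, not the paper's MPC round-compression algorithm.

```python
# Hedged illustration (sequential, not the paper's MPC algorithm):
# a greedy maximal matching is a 2-approximation of a maximum matching.
def greedy_maximal_matching(edges):
    """edges: iterable of (u, v) pairs. Returns a maximal matching."""
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.add(u)
            matched.add(v)
    return matching

# Example: on the path 1-2-3-4 this order makes greedy pick only (2, 3),
# a 2-approximation of the maximum matching {(1, 2), (3, 4)}.
print(greedy_maximal_matching([(2, 3), (1, 2), (3, 4)]))
```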

    On the Distributed Complexity of Large-Scale Graph Computations

    Motivated by the increasing need to understand the distributed algorithmic foundations of large-scale graph computations, we study some fundamental graph problems in a message-passing model for distributed computing where $k \geq 2$ machines jointly perform computations on graphs with $n$ nodes (typically, $n \gg k$). The input graph is assumed to be initially randomly partitioned among the $k$ machines, a common implementation in many real-world systems. Communication is point-to-point, and the goal is to minimize the number of communication {\em rounds} of the computation. Our main contribution is the {\em General Lower Bound Theorem}, a theorem that can be used to show non-trivial lower bounds on the round complexity of distributed large-scale data computations. The General Lower Bound Theorem is established via an information-theoretic approach that relates the round complexity to the minimal amount of information required by machines to solve the problem. Our approach is generic and this theorem can be used in a "cookbook" fashion to show distributed lower bounds in the context of several problems, including non-graph problems. We present two applications by showing (almost) tight lower bounds for the round complexity of two fundamental graph problems, namely {\em PageRank computation} and {\em triangle enumeration}. Our approach, as demonstrated in the case of PageRank, can yield tight lower bounds for problems (including, and especially, under a stochastic partition of the input) where communication complexity techniques are not obvious. Our approach, as demonstrated in the case of triangle enumeration, can yield stronger round lower bounds as well as message-round tradeoffs compared to approaches that use communication complexity techniques.
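    For concreteness, the input distribution assumed above (each node assigned to a uniformly random machine, with its incident edges stored locally) can be set up as in the following minimal sketch; the function and variable names are illustrative only, not taken from the paper.

```python
# Minimal sketch of the random vertex partition assumed by the k-machine
# model: each node goes to a uniformly random machine, and each machine
# stores the edges incident to its own nodes.
import random
from collections import defaultdict

def random_partition(nodes, edges, k, seed=0):
    rng = random.Random(seed)
    home = {v: rng.randrange(k) for v in nodes}   # node -> machine id
    local_edges = defaultdict(list)               # machine id -> local edges
    for u, v in edges:
        local_edges[home[u]].append((u, v))
        if home[v] != home[u]:
            local_edges[home[v]].append((u, v))
    return home, local_edges

home, local = random_partition(range(6), [(0, 1), (1, 2), (2, 3), (4, 5)], k=2)
```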

    Distributed $(\Delta+1)$-Coloring in Sublogarithmic Rounds

    We give a new randomized distributed algorithm for $(\Delta+1)$-coloring in the LOCAL model, running in $O(\sqrt{\log \Delta}) + 2^{O(\sqrt{\log \log n})}$ rounds in a graph of maximum degree $\Delta$. This implies that the $(\Delta+1)$-coloring problem is easier than the maximal independent set problem and the maximal matching problem, due to their lower bounds of $\Omega\left(\min\left(\sqrt{\frac{\log n}{\log \log n}}, \frac{\log \Delta}{\log \log \Delta}\right)\right)$ by Kuhn, Moscibroda, and Wattenhofer [PODC'04]. Our algorithm also extends to list-coloring, where the palette of each node contains $\Delta+1$ colors. We extend the set of distributed symmetry-breaking techniques by performing a decomposition of graphs into dense and sparse parts.
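    As background for why $\Delta+1$ is the natural palette size, a sequential greedy argument already shows that $\Delta+1$ colors always suffice: when a vertex is colored, its at most $\Delta$ neighbors can block at most $\Delta$ palette entries. The sketch below illustrates this sequential fact only; it is not the paper's distributed LOCAL algorithm.

```python
# Hedged sketch (sequential greedy, not the distributed algorithm): with a
# palette of size Delta+1, every vertex always has a free color, since its
# at most Delta neighbors exclude at most Delta palette entries.
def greedy_coloring(adj):
    """adj: dict vertex -> set of neighbors. Returns a proper coloring."""
    delta = max((len(nbrs) for nbrs in adj.values()), default=0)
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(delta + 1) if c not in used)
    return color
```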

    The Complexity of Distributed Edge Coloring with Small Palettes

    The complexity of distributed edge coloring depends heavily on the palette size as a function of the maximum degree $\Delta$. In this paper we explore the complexity of edge coloring in the LOCAL model in different palette size regimes.
    1. We simplify the \emph{round elimination} technique of Brandt et al. and prove that $(2\Delta-2)$-edge coloring requires $\Omega(\log_\Delta \log n)$ time w.h.p. and $\Omega(\log_\Delta n)$ time deterministically, even on trees. The simplified technique is based on two ideas: the notion of an irregular running time and some general observations that transform weak lower bounds into stronger ones.
    2. We give a randomized edge coloring algorithm that can use palette sizes as small as $\Delta + \tilde{O}(\sqrt{\Delta})$, which is a natural barrier for randomized approaches. The running time of the algorithm is at most $O(\log\Delta \cdot T_{LLL})$, where $T_{LLL}$ is the complexity of a permissive version of the constructive Lovász local lemma.
    3. We develop a new distributed Lovász local lemma algorithm for tree-structured dependency graphs, which leads to a $(1+\epsilon)\Delta$-edge coloring algorithm for trees running in $O(\log\log n)$ time. This algorithm arises from two new results: a deterministic $O(\log n)$-time LLL algorithm for tree-structured instances, and a randomized $O(\log\log n)$-time graph shattering method for breaking the dependency graph into independent $O(\log n)$-size LLL instances.
    4. A natural approach to computing $(\Delta+1)$-edge colorings (Vizing's theorem) is to extend partial colorings by iteratively re-coloring parts of the graph. We prove that this approach may be viable, but in the worst case requires recoloring subgraphs of diameter $\Omega(\Delta\log n)$. This stands in contrast to distributed algorithms for Brooks' theorem, which exploit the existence of $O(\log_\Delta n)$-length augmenting paths.
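    The easy end of the palette-size spectrum is the greedy baseline: an edge $\{u, v\}$ conflicts with at most $(\deg(u)-1) + (\deg(v)-1) \leq 2\Delta - 2$ other edges, so a palette of $2\Delta - 1$ colors always suffices for greedy edge coloring. The sketch below is a sequential illustration of that baseline under this assumption, not any of the distributed algorithms from the paper.

```python
# Hedged sketch: sequential greedy edge coloring with a palette of 2*Delta-1
# colors always finds a free color, since an edge is adjacent to at most
# 2*Delta-2 already-colored edges.
from collections import defaultdict

def greedy_edge_coloring(edges):
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    delta = max(deg.values(), default=0)
    incident = defaultdict(set)   # vertex -> colors already used at that vertex
    coloring = {}
    for u, v in edges:
        used = incident[u] | incident[v]
        c = next(c for c in range(2 * delta - 1) if c not in used)
        coloring[(u, v)] = c
        incident[u].add(c)
        incident[v].add(c)
    return coloring
```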