
    Integer priority queues with decrease key in constant time and the single source shortest paths problem

    We consider Fibonacci-heap-style integer priority queues supporting find-min, insert, and decrease-key operations in constant time. We present a deterministic linear-space solution that, with $n$ integer keys, supports delete in $O(\log\log n)$ time. If the integers are in the range $[0, N)$, we can also support delete in $O(\log\log N)$ time. Even for the special case of monotone priority queues, where the minimum has to be non-decreasing, the best previous bounds on delete were $O((\log n)^{1/(3-\varepsilon)})$ and $O((\log N)^{1/(4-\varepsilon)})$. These previous bounds used both randomization and amortization. Our new bounds are deterministic, worst-case, with no restriction to monotonicity, and exponentially faster. As a classical application, for a directed graph with $n$ nodes and $m$ edges with non-negative integer weights, we get single-source shortest paths in $O(m + n\log\log n)$ time, or $O(m + n\log\log C)$ if $C$ is the maximal edge weight. The latter solves an open problem of Ahuja, Mehlhorn, Orlin, and Tarjan from 1990.
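    To make the operation counts in the shortest-paths application concrete, here is a minimal Dijkstra sketch. It uses Python's binary heap with lazy deletion, giving $O((m+n)\log n)$; the paper's contribution is an integer priority queue that replaces each of the $n$ delete-mins with an $O(\log\log n)$ operation while keeping insert and decrease-key constant. This is an illustration of the classical algorithm, not the paper's data structure.

    ```python
    import heapq

    def dijkstra(adj, source):
        """Single-source shortest paths with non-negative edge weights.

        adj: dict mapping node -> list of (neighbor, weight) pairs.
        A binary heap with lazy deletion is used here; swapping in a
        priority queue with O(1) decrease-key and O(log log n) delete
        yields the paper's O(m + n log log n) bound.
        """
        dist = {source: 0}
        heap = [(0, source)]  # (tentative distance, node)
        while heap:
            d, u = heapq.heappop(heap)          # one of n delete-min steps
            if d > dist.get(u, float("inf")):
                continue                        # stale entry from lazy decrease-key
            for v, w in adj[u]:
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd                # decrease-key, once per edge
                    heapq.heappush(heap, (nd, v))
        return dist

    # dijkstra({'a': [('b', 2), ('c', 5)], 'b': [('c', 1)], 'c': []}, 'a')
    # -> {'a': 0, 'b': 2, 'c': 3}
    ```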

    Optimal lower bounds for universal relation, and for samplers and finding duplicates in streams

    In the communication problem $\mathbf{UR}$ (universal relation) [KRW95], Alice and Bob respectively receive $x, y \in \{0,1\}^n$ with the promise that $x \neq y$. The last player to receive a message must output an index $i$ such that $x_i \neq y_i$. We prove that the randomized one-way communication complexity of this problem in the public coin model is exactly $\Theta(\min\{n, \log(1/\delta)\log^2(\frac{n}{\log(1/\delta)})\})$ for failure probability $\delta$. Our lower bound holds even if promised $\mathrm{support}(y) \subset \mathrm{support}(x)$. As a corollary, we obtain optimal lower bounds for $\ell_p$-sampling in strict turnstile streams for $0 \le p < 2$, as well as for the problem of finding duplicates in a stream. Our lower bounds do not need to use large weights, and hold even if promised $x \in \{0,1\}^n$ at all points in the stream. We give two different proofs of our main result. The first proof demonstrates that any algorithm $\mathcal{A}$ solving sampling problems in turnstile streams in low memory can be used to encode subsets of $[n]$ of certain sizes into a number of bits below the information-theoretic minimum. Our encoder makes adaptive queries to $\mathcal{A}$ throughout its execution, but does so carefully so as to not violate correctness. This is accomplished by injecting random noise into the encoder's interactions with $\mathcal{A}$, which is loosely motivated by techniques in differential privacy. Our second proof is via a novel randomized reduction from Augmented Indexing [MNSW98] which needs to interact with $\mathcal{A}$ adaptively. To handle the adaptivity we identify certain likely interaction patterns and union bound over them to guarantee correct interaction on all of them. To guarantee correctness, it is important that the interaction hides some of its randomness from $\mathcal{A}$ in the reduction. Comment: merge of arXiv:1703.08139 and of work of Kapralov, Woodruff, and Yahyazadeh.
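    For orientation, the contradiction the first proof drives at rests on a standard counting fact (stated here for reference, not taken from the paper): any scheme that injectively encodes every size-$k$ subset of $[n]$ must, on some input, use at least the information-theoretic minimum number of bits:

    ```latex
    \[
      \#\text{bits} \;\ge\; \Big\lceil \log_2 \binom{n}{k} \Big\rceil
      \;\ge\; k \log_2 \frac{n}{k},
    \]
    ```

    so an encoder built from a low-memory sampler that beats this bound for the relevant subset sizes cannot exist.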

    On the similarities between generalized rank and Hamming weights and their applications to network coding

    Rank weights and generalized rank weights have been proven to characterize error and erasure correction, and information leakage, in linear network coding, in the same way as Hamming weights and generalized Hamming weights describe classical error and erasure correction, and information leakage, in wire-tap channels of type II and code-based secret sharing. Although many similarities between both cases have been established and proven in the literature, many other known results in the Hamming case, such as bounds or characterizations of weight-preserving maps, have not been translated to the rank case yet, or in some cases have been proven after developing different machinery. The aim of this paper is to further relate both weights and generalized weights, show that the results and proofs in both cases are usually essentially the same, and see the significance of these similarities in network coding. Some of the new results in the rank case also have new consequences in the Hamming case.
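    As a concrete illustration of the two metrics being related (a toy sketch, not code from the paper): the Hamming weight of a word counts its nonzero coordinates, while its rank weight is the rank, over the base field, of the matrix whose columns are the expansions of the coordinates. The rank weight never exceeds the Hamming weight.

    ```python
    import numpy as np

    def hamming_weight(columns):
        """Hamming weight: number of nonzero coordinates."""
        return sum(1 for c in columns if any(c))

    def rank_weight(columns):
        """Rank weight: GF(2) rank of the matrix whose columns are the
        base-field expansions of the word's coordinates."""
        m = np.array(columns, dtype=np.uint8).T % 2
        rank = 0
        for col in range(m.shape[1]):
            pivot = next((r for r in range(rank, m.shape[0]) if m[r, col]), None)
            if pivot is None:
                continue
            m[[rank, pivot]] = m[[pivot, rank]]  # move pivot row up
            for r in range(m.shape[0]):
                if r != rank and m[r, col]:
                    m[r] ^= m[rank]              # eliminate over GF(2)
            rank += 1
        return rank

    # A length-3 word over GF(2^2), coordinates as GF(2) column vectors:
    word = [(1, 0), (1, 0), (0, 0)]
    print(hamming_weight(word))  # 2 nonzero coordinates
    print(rank_weight(word))     # 1, since the two nonzero columns coincide
    ```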

    B-LOG: A branch and bound methodology for the parallel execution of logic programs

    We propose a computational methodology, "B-LOG", which offers the potential for an effective implementation of Logic Programming on a parallel computer. We also propose a weighting scheme to guide the search process through the graph, and we apply the concepts of parallel "branch and bound" algorithms in order to perform a "best-first" search using an information-theoretic bound. The concept of "session" is used to speed up the search process in a succession of similar queries. Within a session, we strongly modify the bounds in a local database, while bounds kept in a global database are weakly modified to provide a better initial condition for other sessions. We also propose an implementation scheme based on a database machine using "semantic paging", and the "B-LOG processor" based on a scoreboard-driven controller.
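    The search strategy described, best-first expansion ordered by a bound with pruning against the best solution found so far, can be sketched generically. The skeleton below uses hypothetical names and is not the B-LOG processor, its scoreboard controller, or its session mechanism; it only illustrates how a bound function steers and prunes the search.

    ```python
    import heapq

    def best_first_branch_and_bound(root, expand, bound, is_goal):
        """Generic best-first branch and bound (minimization).

        root:    initial search node
        expand:  node -> iterable of child nodes
        bound:   node -> optimistic (lower-bound) cost estimate,
                 assumed exact on goal nodes
        is_goal: node -> True when the node is a complete solution
        """
        best, best_cost = None, float("inf")
        counter = 0                              # tie-breaker for the heap
        frontier = [(bound(root), counter, root)]
        while frontier:
            b, _, node = heapq.heappop(frontier)
            if b >= best_cost:
                continue                         # prune: cannot improve on best
            if is_goal(node):
                best, best_cost = node, b        # new incumbent solution
                continue
            for child in expand(node):
                cb = bound(child)
                if cb < best_cost:               # only enqueue promising branches
                    counter += 1
                    heapq.heappush(frontier, (cb, counter, child))
        return best, best_cost
    ```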

    Codes with Locality for Two Erasures

    In this paper, we study codes with locality that can recover from two erasures via a sequence of two local parity-check computations. By a local parity-check computation, we mean recovery via a single parity-check equation of small Hamming weight. Earlier approaches considered recovery in parallel; the sequential approach allows us to potentially construct codes with improved minimum distance. These codes, which we refer to as locally 2-reconstructible codes, are a natural generalization, along one direction, of the codes with all-symbol locality introduced by Gopalan et al., in which recovery from a single erasure is considered. By studying the Generalized Hamming Weights of the dual code, we derive upper bounds on the minimum distance of locally 2-reconstructible codes and provide constructions for a family of codes based on Turán graphs that are optimal with respect to this bound. The minimum distance bound derived here is universal in the sense that no code which permits all-symbol local recovery from 2 erasures can have larger minimum distance, regardless of the approach adopted. Our approach also leads to a new bound on the minimum distance of codes with all-symbol locality for the single-erasure case. Comment: 14 pages, 3 figures. Updated for improved readability.
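    A toy instance of sequential recovery (our own mini-example, not one of the paper's Turán-graph constructions): consider the binary code with symbols $(x_1, x_2, x_3, p_1, p_2)$ and local checks $p_1 = x_1 + x_2$, $p_2 = x_2 + x_3$. If $x_2$ and $x_3$ are erased, only $x_2$ is recoverable in a single parallel round, but a sequence of two local computations recovers both:

    ```python
    # Toy sequential recovery over GF(2): codeword (x1, x2, x3, p1, p2)
    # with local parity checks p1 = x1 ^ x2 and p2 = x2 ^ x3.
    # (A hypothetical mini-example, not a construction from the paper.)

    def encode(x1, x2, x3):
        return [x1, x2, x3, x1 ^ x2, x2 ^ x3]

    def recover_two_erasures(word):
        """word: list with None at the erased positions 1 and 2 (x2, x3)."""
        x1, x2, x3, p1, p2 = word
        # Step 1: check x1 + x2 + p1 = 0 has one unknown -> solve for x2.
        x2 = x1 ^ p1
        # Step 2: check x2 + x3 + p2 = 0 now has one unknown -> solve for x3.
        x3 = x2 ^ p2
        return [x1, x2, x3, p1, p2]

    c = encode(1, 0, 1)        # [1, 0, 1, 1, 1]
    erased = [c[0], None, None, c[3], c[4]]
    assert recover_two_erasures(erased) == c
    ```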

    Tight Bounds for Gomory-Hu-like Cut Counting

    By a classical result of Gomory and Hu (1961), in every edge-weighted graph $G=(V,E,w)$, the minimum $st$-cut values, when ranging over all $s,t\in V$, take at most $|V|-1$ distinct values. That is, these $\binom{|V|}{2}$ instances exhibit redundancy factor $\Omega(|V|)$. They further showed how to construct from $G$ a tree $(V,E',w')$ that stores all minimum $st$-cut values. Motivated by this result, we obtain tight bounds for the redundancy factor of several generalizations of the minimum $st$-cut problem.
    1. Group-Cut: Consider the minimum $(A,B)$-cut, ranging over all subsets $A,B\subseteq V$ of given sizes $|A|=\alpha$ and $|B|=\beta$. The redundancy factor is $\Omega_{\alpha,\beta}(|V|)$.
    2. Multiway-Cut: Consider the minimum cut separating every two vertices of $S\subseteq V$, ranging over all subsets of a given size $|S|=k$. The redundancy factor is $\Omega_{k}(|V|)$.
    3. Multicut: Consider the minimum cut separating every demand pair in $D\subseteq V\times V$, ranging over collections of $|D|=k$ demand pairs. The redundancy factor is $\Omega_{k}(|V|^k)$. This result is a bit surprising, as the redundancy factor is much larger than in the first two problems.
    A natural application of these bounds is to construct small data structures that store all relevant cut values, like the Gomory-Hu tree. We initiate this direction by giving some upper and lower bounds. Comment: This version contains additional references to previous work (which have some overlap with our results), see Bibliographic Update 1.
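    The classical Gomory-Hu redundancy fact is easy to verify empirically. A minimal sketch using networkx (its gomory_hu_tree helper has been available since networkx 2.0): on any weighted graph, the pairwise min-cut values, read off as the lightest tree edge on each path, take at most $|V|-1$ distinct values.

    ```python
    import itertools
    import networkx as nx

    # A small weighted graph; edge capacities play the role of the weights w.
    G = nx.Graph()
    G.add_weighted_edges_from(
        [(0, 1, 3), (1, 2, 2), (2, 3, 4), (3, 0, 1), (0, 2, 5)],
        weight="capacity",
    )

    # Gomory-Hu tree: |V|-1 edges that encode all minimum s-t cut values.
    T = nx.gomory_hu_tree(G, capacity="capacity")

    def min_st_cut(T, s, t):
        # The s-t min-cut value equals the lightest edge on the tree path.
        path = nx.shortest_path(T, s, t)
        return min(T[u][v]["weight"] for u, v in zip(path, path[1:]))

    values = {min_st_cut(T, s, t) for s, t in itertools.combinations(G, 2)}
    print(len(values), "<=", len(G) - 1)   # at most |V| - 1 distinct values
    ```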

    Relaxation Bounds on the Minimum Pseudo-Weight of Linear Block Codes

    Just as the Hamming weight spectrum of a linear block code sheds light on the performance of a maximum-likelihood decoder, the pseudo-weight spectrum provides insight into the performance of a linear programming decoder. Using properties of polyhedral cones, we find the pseudo-weight spectrum of some short codes. We also present two general lower bounds on the minimum pseudo-weight. The first bound is based on the column weight of the parity-check matrix. The second bound is computed by solving an optimization problem. In some cases, this bound is more tractable to compute than previously known bounds and thus can be applied to longer codes. Comment: To appear in the proceedings of the 2005 IEEE International Symposium on Information Theory, Adelaide, Australia, September 4-9, 2005.
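    For reference, the standard AWGN-channel pseudo-weight of a nonzero nonnegative pseudo-codeword $\omega$ is $\|\omega\|_1^2 / \|\omega\|_2^2$, which reduces to the Hamming weight on $0/1$ vectors; fractional LP vertices can therefore have pseudo-weight below the minimum Hamming distance. A small sketch of the general definition (not this paper's specific bounds):

    ```python
    def awgn_pseudo_weight(omega):
        """AWGN-channel pseudo-weight of a nonnegative vector omega:
        w_p(omega) = (sum omega_i)^2 / (sum omega_i^2).
        On a 0/1 vector this equals the Hamming weight."""
        s1 = sum(omega)
        s2 = sum(x * x for x in omega)
        return s1 * s1 / s2

    print(awgn_pseudo_weight([1, 1, 0, 1]))      # 3.0 == Hamming weight
    print(awgn_pseudo_weight([1, 0.5, 0.5, 0]))  # ~2.67, below the support size 3
    ```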