8 research outputs found

    Information Cost Tradeoffs for Augmented Index and Streaming Language Recognition

    This paper makes three main contributions to the theory of communication complexity and stream computation. First, we present new bounds on the information complexity of AUGMENTED-INDEX. In contrast to analogous results for INDEX by Jain, Radhakrishnan and Sen [J. ACM, 2009], we have to overcome the significant technical challenge that protocols for AUGMENTED-INDEX may violate the "rectangle property" due to the inherent input sharing. Second, we use these bounds to resolve an open problem of Magniez, Mathieu and Nayak [STOC, 2010] that asked about the multi-pass complexity of recognizing Dyck languages. This results in a natural separation between the standard multi-pass model and the multi-pass model that permits reverse passes. Third, we present the first passive memory checkers that verify the interaction transcripts of priority queues, stacks, and double-ended queues. We obtain tight upper and lower bounds for these problems, thereby addressing an important sub-class of the memory checking framework of Blum et al. [Algorithmica, 1994].
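
    As a point of reference only (this is not the paper's streaming algorithm), the sketch below checks membership in a Dyck language over two bracket types using the classic offline stack method; the bracket alphabet, function name, and test strings are illustrative assumptions. The question studied in the paper is how well such recognition can be done in the streaming setting, with sublinear memory and a bounded number of passes.

```python
# Matching of opening to closing brackets for a two-type Dyck language; purely illustrative.
PAIRS = {"(": ")", "[": "]"}
CLOSERS = set(PAIRS.values())

def is_dyck(word: str) -> bool:
    """Offline, linear-space check that `word` is a balanced bracket string."""
    stack = []
    for c in word:
        if c in PAIRS:               # opener: remember the closer we now expect
            stack.append(PAIRS[c])
        elif c in CLOSERS:           # closer: must match the most recent expectation
            if not stack or stack.pop() != c:
                return False
        else:
            return False             # symbol outside the bracket alphabet
    return not stack                 # every opener must have been closed

assert is_dyck("([])[]")
assert not is_dyck("([)]")
```

    The offline check above uses a stack of size up to |word|; the streaming results concern what happens when that much memory is not available and the input can only be scanned a few times, possibly in reverse.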

    Information Complexity versus Corruption and Applications to Orthogonality and Gap-Hamming

    Three decades of research in communication complexity have led to the invention of a number of techniques to lower bound randomized communication complexity. The majority of these techniques involve properties of large submatrices (rectangles) of the truth-table matrix defining a communication problem. The only technique that does not quite fit is information complexity, which has been investigated over the last decade. Here, we connect information complexity to one of the most powerful "rectangular" techniques: the recently-introduced smooth corruption (or "smooth rectangle") bound. We show that the former subsumes the latter under rectangular input distributions. We conjecture that this subsumption holds more generally, under arbitrary distributions, which would resolve the long-standing direct sum question for randomized communication. As an application, we obtain an optimal Ω(n) lower bound on the information complexity---under the uniform distribution---of the so-called orthogonality problem (ORT), which is in turn closely related to the much-studied Gap-Hamming-Distance problem (GHD). The proof of this bound is along the lines of recent communication lower bounds for GHD, but we encounter a surprising amount of additional technical detail.
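
    For readers unfamiliar with GHD, the sketch below spells out one common formulation of Gap-Hamming-Distance as a two-party promise problem; the gap threshold of sqrt(n), the function names, and the small demo are illustrative assumptions, not taken from the paper.

```python
import random

def hamming_distance(x, y):
    """Number of coordinates in which the two bit-strings differ."""
    return sum(a != b for a, b in zip(x, y))

def ghd_label(x, y, n):
    """Gap-Hamming-Distance (one common formulation): output 1 if the distance
    is at least n/2 + sqrt(n), output 0 if it is at most n/2 - sqrt(n); inputs
    falling inside the gap carry no promise."""
    d = hamming_distance(x, y)
    gap = n ** 0.5
    if d >= n / 2 + gap:
        return 1
    if d <= n / 2 - gap:
        return 0
    return None  # outside the promise

# Alice holds x, Bob holds y; the communication question is how many bits they
# must exchange to compute ghd_label(x, y, n) with high probability.
n = 100
x = [random.randint(0, 1) for _ in range(n)]
y = [random.randint(0, 1) for _ in range(n)]
print(ghd_label(x, y, n))
```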

    Certifying Equality With Limited Interaction


    Tight Time-Space Tradeoff for Mutual Exclusion

    Mutual exclusion is a fundamental problem in distributed computing. Proving upper and lower bounds on the RMR complexity of this problem and its variants has been a topic of intense research in the last two decades. We add a novel dimension to this research by proving matching lower and upper bounds on how RMR complexity trades off with space. Two exciting implications of our results are that constant RMR complexity is impossible with subpolynomial space, and subpolynomial RMR complexity is impossible with constant space (for cache-coherent multiprocessors, regardless of how strong the hardware synchronization operations are). We believe that our technical contributions are equally exciting. A highlight is that, even though mutual exclusion is a "messy" problem to analyze because of system details such as asynchrony and cache coherence, we show that a simple and purely combinatorial bin-pebble game that we design exactly captures the complexity of the mutual exclusion problem. Lower bound proofs in distributed computing are typically based on covering, bivalency, or other indistinguishability arguments. In contrast, our lower bounds are based on the potential method, and we believe this is the first use of this method in lower bounds for distributed computing.
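
    To make the cost model concrete, here is a minimal test-and-set spinlock sketch; it is not the paper's algorithm or its bin-pebble game, and the class name and the use of a Python lock to emulate an atomic instruction are illustrative assumptions. Each failed spin re-reads the shared flag, and on a cache-coherent multiprocessor such repeated accesses can generate remote memory references (RMRs), which is exactly the kind of cost the RMR/space tradeoff constrains.

```python
import threading

class TestAndSetLock:
    """Illustrative spinlock: unbounded spinning on a shared flag, hence poor RMR behavior."""

    def __init__(self):
        self._flag = False
        self._atomic = threading.Lock()  # emulates a hardware atomic instruction

    def _test_and_set(self):
        with self._atomic:
            old = self._flag
            self._flag = True
            return old

    def acquire(self):
        # Busy-wait until test-and-set returns False; every failed attempt is
        # another access to the shared flag.
        while self._test_and_set():
            pass

    def release(self):
        with self._atomic:
            self._flag = False

# Tiny usage demo: two threads take turns in the critical section.
lock, counter = TestAndSetLock(), [0]

def worker():
    for _ in range(1000):
        lock.acquire()
        counter[0] += 1          # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter[0])                # expected: 2000
```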