    Lifting with Sunflowers

    Query-to-communication lifting theorems translate lower bounds on query complexity into lower bounds for the corresponding communication model. In this paper, we give a simplified proof of deterministic lifting (in both the tree-like and dag-like settings). Our proof uses elementary counting together with a novel connection to the sunflower lemma. Beyond simplification, our approach opens up a new avenue of attack towards proving lifting theorems with improved gadget size - one of the main challenges in the area. Focusing on one of the most widely used gadgets - the index gadget - existing lifting techniques are known to require at least a quadratic gadget size. Our new approach, combined with robust sunflower lemmas, allows us to reduce the gadget size to near-linear. We conjecture that it can be further improved to polylogarithmic, similar to the known bounds for the corresponding robust sunflower lemmas.
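    For orientation, a minimal sketch of the two standard statements the abstract connects - the Erdős-Rado sunflower lemma and the shape of a deterministic lifting theorem for the index gadget IND_m(x, y) = y_x - given here in their textbook forms, not the paper's exact parameters:

        % A sunflower with p petals: sets whose pairwise intersections all equal the common core.
        \[
          S_1, \dots, S_p \text{ form a sunflower} \iff S_i \cap S_j = \bigcap_{k=1}^{p} S_k \quad \text{for all } i \neq j.
        \]
        % Erdős-Rado: any sufficiently large family F of w-element sets contains one.
        \[
          |\mathcal{F}| > w!\,(p-1)^w \implies \mathcal{F} \text{ contains a sunflower with } p \text{ petals.}
        \]
        % Deterministic lifting for the index gadget, valid once the gadget size m is large enough;
        % how small m can be taken is exactly the question the abstract addresses.
        \[
          \mathrm{CC}^{\mathrm{det}}\!\bigl(f \circ \mathrm{IND}_m^{\,n}\bigr) = \Theta\bigl(\mathrm{DT}(f) \cdot \log m\bigr).
        \]

    Here DT(f) denotes the deterministic decision-tree (query) complexity of f.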

    Lifting Theorems Meet Information Complexity: Known and New Lower Bounds of Set-disjointness

    Set-disjointness is one of the most fundamental problems in communication complexity and has been studied extensively over the past decades. Given its importance, many lower bound techniques have been introduced to prove communication lower bounds for set-disjointness. Combining ideas from information complexity and query-to-communication lifting theorems, we introduce a density increment argument to prove communication lower bounds for set-disjointness: We give a simple proof showing that a large rectangle cannot be 0-monochromatic for multi-party unique-disjointness. We interpret the direct-sum argument as a density increment process and give an alternative proof of randomized communication lower bounds for multi-party unique-disjointness. Avoiding full simulations in lifting theorems, we simplify and improve communication lower bounds for sparse unique-disjointness. We also discuss potential applications that our density increment argument may unify and improve.
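    For reference, a sketch of the standard definitions (output conventions vary across papers, and multi-party versions replace the two inputs with k inputs):

        % Set-disjointness: decide whether the sets encoded by x and y intersect.
        \[
          \mathrm{DISJ}_n(x, y) = \bigvee_{i=1}^{n} x_i \wedge y_i, \qquad x, y \in \{0,1\}^n,
        \]
        % Unique-disjointness adds the promise that the sets share at most one element.
        \[
          \mathrm{UDISJ}_n = \mathrm{DISJ}_n \ \text{restricted to inputs with} \ \bigl|\{\, i : x_i = y_i = 1 \,\}\bigr| \le 1.
        \]

    A rectangle A x B is monochromatic when the function takes a single value on all of A x B; the abstract's first contribution bounds how large such a rectangle can be.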

    Constant-Depth Circuits vs. Monotone Circuits


    Classical and quantum sublinear algorithms

    This thesis investigates the capabilities of classical and quantum sublinear algorithms through the lens of complexity theory. The formal classification of problems between “tractable” (by constructing efficient algorithms that solve them) and “intractable” (by proving no efficient algorithm can) is among the most fruitful lines of work in theoretical computer science, which includes, amongst an abundance of fundamental results and open problems, the notorious P vs. NP question. This particular incarnation of the decision-versus-verification question stems from a choice of computational model: polynomial-time Turing machines. It is far from the only model worthy of investigation, however; indeed, measuring time up to polynomial factors is often too “coarse” for practical applications. We focus on quantum computation, a more complete model of physically realisable computation where quantum mechanical phenomena (such as interference and entanglement) may be used as computational resources; and sublinear algorithms, a formalisation of ultra-fast computation where merely reading or storing the entire input is impractical, e.g., when processing massive datasets such as social networks or large databases. We begin our investigation by studying structural properties of local algorithms, a large class of sublinear algorithms that includes property testers and is characterised by the inability to even see most of the input. We prove that, in this setting, queries – the main complexity measure – can be replaced with random samples. Applying this transformation yields, among other results, the state-of-the-art query lower bound for relaxed local decoders. Focusing our attention on property testers, we begin to chart the complexity-theoretic landscape arising from the classical vs. quantum and decision vs. verification questions in testing. We show that quantum hardware and communication with a powerful but untrusted prover are “orthogonal” resources, so that one cannot be substituted for the other. This implies all of the possible separations among the analogues of QMA, MA and BQP in the property-testing setting. We conclude with a study of zero-knowledge for (classical) streaming algorithms, which receive one-pass access to the entirety of their input but only have sublinear space. Inspired by cryptographic tools, we construct commitment protocols that are unconditionally secure in the streaming model and can be leveraged to obtain zero-knowledge streaming interactive proofs – and, in particular, show that zero-knowledge is achievable in this model.
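    As a concrete illustration of what a property tester looks like (a classical textbook example - the Blum-Luby-Rubinfeld linearity test - chosen for familiarity, not an algorithm from this thesis), the following Python sketch decides whether a Boolean function is linear over GF(2) using a number of queries independent of the 2^n-size domain:

        import random

        def blr_linearity_test(f, n, trials=100):
            """BLR property tester: always accepts a linear f over GF(2);
            rejects functions far from linear with high probability.
            Uses 3 queries per trial - sublinear in the 2**n-entry truth table."""
            for _ in range(trials):
                x = [random.randint(0, 1) for _ in range(n)]
                y = [random.randint(0, 1) for _ in range(n)]
                x_xor_y = [a ^ b for a, b in zip(x, y)]
                if f(x) ^ f(y) != f(x_xor_y):
                    return False  # witnessed a violation of f(x) + f(y) = f(x + y)
            return True

        # f(x) = x_0 XOR x_2 is linear, so the tester always accepts.
        print(blr_linearity_test(lambda x: x[0] ^ x[2], n=8))  # True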

    LIPIcs, Volume 251, ITCS 2023, Complete Volume


    35th Symposium on Theoretical Aspects of Computer Science: STACS 2018, February 28-March 3, 2018, Caen, France


    Term Association Modelling in Information Retrieval

    Many traditional Information Retrieval (IR) models assume that query terms are independent of each other. For those models, a document is normally represented as a bag of words/terms and their frequencies. Although traditional retrieval models can achieve reasonably good performance in many applications, the corresponding independence assumption has limitations. Some recent studies investigate how to model term associations/dependencies via proximity measures, but the theoretical modeling of term associations under the probabilistic retrieval framework remains largely unexplored. In this thesis, I propose a new concept, named the Cross Term, to model term proximity, with the aim of boosting retrieval performance. With Cross Terms, the association of multiple query terms can be modeled in the same way as a simple unigram term. In particular, an occurrence of a query term is assumed to have an impact on its neighboring text, and the degree of this impact gradually weakens with increasing distance from the place of occurrence. Shape functions are used to characterize such impacts. Based on this assumption, I first propose a bigram CRoss TErm Retrieval (CRTER2) model for probabilistic IR, along with a language-model-based variant, CRTER2LM. Specifically, a bigram Cross Term occurs when the corresponding query terms appear close to each other, and its impact can be modeled by the intersection of the respective shape functions of the query terms. Second, I propose a generalized n-gram CRoss TErm Retrieval (CRTERn) model, defined recursively for n query terms with n > 2. For n-gram Cross Terms, I develop several distance metrics with different properties and employ them in the proposed models for ranking. Third, an enhanced context-sensitive proximity model is proposed to boost the CRTER models, where the contextual relevance of term proximity is studied. The models are validated on several large standard data sets and show improved performance over other state-of-the-art approaches. I also discuss the practical impact of the proposed models. The approaches in this thesis can also benefit term association modeling in other domains.
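    A minimal Python sketch of the bigram Cross Term idea, assuming triangular shape functions and taking the peak of their pointwise minimum as the "intersection" (the thesis's exact shape functions and combination rule are not specified in this abstract):

        def triangle_shape(center, half_width):
            """Impact of a term occurrence at `center`, decaying linearly
            to zero within `half_width` positions (one possible shape function)."""
            return lambda pos: max(0.0, 1.0 - abs(pos - center) / half_width)

        def bigram_cross_term_weight(pos_a, pos_b, half_width=5):
            """Weight of the Cross Term for occurrences at pos_a and pos_b:
            the peak of the pointwise minimum of the two shapes, high when
            the occurrences are close and zero when they are far apart."""
            shape_a = triangle_shape(pos_a, half_width)
            shape_b = triangle_shape(pos_b, half_width)
            lo, hi = min(pos_a, pos_b), max(pos_a, pos_b)
            return max(min(shape_a(p), shape_b(p)) for p in range(lo, hi + 1))

        print(bigram_cross_term_weight(10, 12))  # nearby terms -> 0.8
        print(bigram_cross_term_weight(10, 40))  # distant terms -> 0.0

    Such a weight can then be accumulated per document and scored like an ordinary unigram frequency, which is how the abstract describes Cross Terms plugging into existing probabilistic and language-model rankers.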

    TME Volume 3, Number 2


    LIPIcs, Volume 244, ESA 2022, Complete Volume


    29th International Symposium on Algorithms and Computation: ISAAC 2018, December 16-19, 2018, Jiaoxi, Yilan, Taiwan
