
    Separating decision tree complexity from subcube partition complexity

    The subcube partition model of computation is at least as powerful as decision trees, but no separation between these models was known. We show that there exists a function whose deterministic subcube partition complexity is asymptotically smaller than its randomized decision tree complexity, resolving an open problem of Friedgut, Kahn, and Wigderson (2002). Our lower bound is based on the information-theoretic techniques first introduced to lower bound the randomized decision tree complexity of the recursive majority function. We also show that the public-coin partition bound, the best known lower bound method for randomized decision tree complexity, which subsumes other general techniques such as block sensitivity, approximate degree, randomized certificate complexity, and the classical adversary bound, also lower bounds randomized subcube partition complexity. This shows that none of these lower bound techniques can prove optimal lower bounds for randomized decision tree complexity, which answers an open question of Jain and Klauck (2010) and Jain, Lee, and Vishnoi (2014). Comment: 16 pages, 1 figure.
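
    As a point of reference for the recursive majority function mentioned above (the source of the information-theoretic lower bound techniques), here is a minimal Python sketch of the function itself on 3^h bits, assuming the standard ternary-tree definition; it illustrates only the function, not the paper's argument.

        def recursive_majority(bits, height):
            """Evaluate 3-MAJ_h: a depth-`height` ternary tree of 3-bit majority gates.

            `bits` is a sequence of 3**height values in {0, 1} (the leaves);
            the return value is the bit computed at the root.
            """
            if height == 0:
                return bits[0]
            third = len(bits) // 3
            children = [recursive_majority(bits[i * third:(i + 1) * third], height - 1)
                        for i in range(3)]
            return 1 if sum(children) >= 2 else 0

        # Example: 3-MAJ_1 on three leaves is just the majority of those bits.
        assert recursive_majority([1, 0, 1], 1) == 1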

    Nearly Optimal Separations Between Communication (or Query) Complexity and Partitions


    Low-Sensitivity Functions from Unambiguous Certificates

    We provide new query complexity separations against sensitivity for total Boolean functions: a power $3$ separation between deterministic (and even randomized or quantum) query complexity and sensitivity, and a power $2.22$ separation between certificate complexity and sensitivity. We get these separations by using a new connection between sensitivity and a seemingly unrelated measure called one-sided unambiguous certificate complexity ($UC_{min}$). We also show that $UC_{min}$ is lower-bounded by fractional block sensitivity, which means we cannot use these techniques to get a super-quadratic separation between $bs(f)$ and $s(f)$. We also provide a quadratic separation between the tree-sensitivity and decision tree complexity of Boolean functions, disproving a conjecture of Gopalan, Servedio, Tal, and Wigderson (CCC 2016). Along the way, we give a power $1.22$ separation between certificate complexity and one-sided unambiguous certificate complexity, improving the power $1.128$ separation due to Göös (FOCS 2015). As a consequence, we obtain an improved $\Omega(\log^{1.22} n)$ lower bound on the co-nondeterministic communication complexity of the Clique vs. Independent Set problem. Comment: 25 pages. This version expands the results and adds Pooya Hatami and Avishay Tal as authors.
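
    To make the measure being separated concrete, here is a brute-force Python sketch that computes the sensitivity s(f) of a total Boolean function given as a black box on n bits; it is exponential in n by design and only illustrates the definition, not any algorithm from the paper.

        from itertools import product

        def sensitivity(f, n):
            """s(f): the maximum, over inputs x, of the number of coordinates i
            such that flipping x_i changes the value of f."""
            best = 0
            for x in product((0, 1), repeat=n):
                fx = f(x)
                flips = sum(1 for i in range(n)
                            if f(x[:i] + (1 - x[i],) + x[i + 1:]) != fx)
                best = max(best, flips)
            return best

        # Example: the n-bit OR has full sensitivity n, witnessed by the all-zero input.
        assert sensitivity(lambda x: int(any(x)), 4) == 4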

    What Circuit Classes Can Be Learned with Non-Trivial Savings?

    Despite decades of intensive research, efficient - or even sub-exponential time - distribution-free PAC learning algorithms are not known for many important Boolean function classes. In this work we suggest a new perspective on these learning problems, inspired by a surge of recent research in complexity theory, in which the goal is to determine whether and how much of a savings over a naive 2^n runtime can be achieved. We establish a range of exploratory results towards this end. In more detail, (1) We first observe that a simple approach building on known uniform-distribution learning results gives non-trivial distribution-free learning algorithms for several well-studied classes including AC0, arbitrary functions of a few linear threshold functions (LTFs), and AC0 augmented with mod_p gates. (2) Next we present an approach, based on the method of random restrictions from circuit complexity, which can be used to obtain several distribution-free learning algorithms that do not appear to be achievable by approach (1) above. The results achieved in this way include learning algorithms with non-trivial savings for LTF-of-AC0 circuits and improved savings for learning parity-of-AC0 circuits. (3) Finally, our third contribution is a generic technique for converting lower bounds proved using Neciporuk's method to learning algorithms with non-trivial savings. This technique, which is the most involved of our three approaches, yields distribution-free learning algorithms for a range of classes where previously even non-trivial uniform-distribution learning algorithms were not known; these classes include full-basis formulas, branching programs, span programs, etc. up to some fixed polynomial size.
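
    For context on the naive 2^n baseline that "savings" is measured against, here is a hedged sketch of the trivial distribution-free PAC learner: draw roughly (2^n ln 2 + ln(1/delta)) / eps labelled examples and output any hypothesis consistent with them, e.g. a memorized table. The oracle interface and names are assumptions made for illustration; this is not one of the paper's algorithms.

        import math
        import random

        def naive_pac_learn(sample_oracle, n, eps, delta):
            """Trivial distribution-free PAC learner for arbitrary Boolean
            functions on n bits; both sample complexity and runtime are
            roughly 2^n / eps.

            `sample_oracle()` returns one labelled example (x, f(x)) drawn
            from the unknown distribution, with x a tuple of n bits."""
            m = math.ceil(((2 ** n) * math.log(2) + math.log(1 / delta)) / eps)
            table = {}
            for _ in range(m):
                x, y = sample_oracle()
                table[x] = y
            # Any hypothesis consistent with the sample suffices (finite-class
            # Occam bound with |H| = 2^(2^n)); memorize the sample and answer 0
            # on unseen inputs.
            return lambda x: table.get(x, 0)

        # Example: learn 3-bit parity under the uniform distribution.
        def oracle():
            x = tuple(random.randint(0, 1) for _ in range(3))
            return x, sum(x) % 2

        h = naive_pac_learn(oracle, n=3, eps=0.1, delta=0.1)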

    Separations in Query Complexity Based on Pointer Functions

    In 1986, Saks and Wigderson conjectured that the largest separation between deterministic and zero-error randomized query complexity for a total Boolean function is given by the function $f$ on $n=2^k$ bits defined by a complete binary tree of NAND gates of depth $k$, which achieves $R_0(f) = O(D(f)^{0.7537\ldots})$. We show this is false by giving an example of a total Boolean function $f$ on $n$ bits whose deterministic query complexity is $\Omega(n/\log(n))$ while its zero-error randomized query complexity is $\tilde O(\sqrt{n})$. We further show that the quantum query complexity of the same function is $\tilde O(n^{1/4})$, giving the first example of a total function with a super-quadratic gap between its quantum and deterministic query complexities. We also construct a total Boolean function $g$ on $n$ variables that has zero-error randomized query complexity $\Omega(n/\log(n))$ and bounded-error randomized query complexity $R(g) = \tilde O(\sqrt{n})$. This is the first super-linear separation between these two complexity measures. The exact quantum query complexity of the same function is $Q_E(g) = \tilde O(\sqrt{n})$. These two functions show that the relations $D(f) = O(R_1(f)^2)$ and $R_0(f) = \tilde O(R(f)^2)$ are optimal, up to poly-logarithmic factors. Further variations of these functions give additional separations between other query complexity measures: a cubic separation between $Q$ and $R_0$, a $3/2$-power separation between $Q_E$ and $R$, and a 4th power separation between approximate degree and bounded-error randomized query complexity. All of these examples are variants of a function recently introduced by Göös, Pitassi, and Watson, which they used to separate the unambiguous 1-certificate complexity from deterministic query complexity and to resolve the famous Clique versus Independent Set problem in communication complexity. Comment: 25 pages, 6 figures. Version 3 improves the separation between $Q_E$ and $R_0$ and updates references.
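
    For reference, the conjectured-extremal Saks-Wigderson function is the complete binary NAND tree, and its randomized savings come from short-circuiting: evaluate a random child first and skip the sibling whenever the first child already forces the gate. The sketch below illustrates that classical zero-error strategy only; it is not one of the pointer-function constructions of this paper.

        import random

        def eval_nand_tree(leaves):
            """Zero-error randomized evaluation of a complete binary NAND tree.

            `leaves` is a sequence of 2^k bits. At each gate a random child is
            evaluated first; if it is 0 the gate equals 1 regardless of the
            sibling, which is the source of the expected savings over the
            deterministic strategy."""
            if len(leaves) == 1:
                return leaves[0]                  # reading a leaf costs one query
            mid = len(leaves) // 2
            halves = [leaves[:mid], leaves[mid:]]
            random.shuffle(halves)                # pick which child to try first
            first = eval_nand_tree(halves[0])
            if first == 0:                        # NAND(0, *) = 1: skip the sibling
                return 1
            return 1 - eval_nand_tree(halves[1])  # NAND(1, y) = NOT y

        # Example: a depth-2 tree on four leaves.
        print(eval_nand_tree([1, 0, 1, 1]))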

    A Composition Theorem for Randomized Query Complexity via Max-Conflict Complexity

    For any relation $f \subseteq \{0,1\}^n \times S$ and any partial Boolean function $g: \{0,1\}^m \to \{0,1,*\}$, we show that $R_{1/3}(f \circ g^n) \in \Omega(R_{4/9}(f) \cdot \sqrt{R_{1/3}(g)})$, where $R_\epsilon(\cdot)$ stands for the bounded-error randomized query complexity with error at most $\epsilon$, and $f \circ g^n \subseteq (\{0,1\}^m)^n \times S$ denotes the composition of $f$ with $n$ instances of $g$. The new composition theorem is optimal, at least, for the general case of relational problems: a relation $f_0$ and a partial Boolean function $g_0$ are constructed such that $R_{4/9}(f_0) \in \Theta(\sqrt{n})$, $R_{1/3}(g_0) \in \Theta(n)$, and $R_{1/3}(f_0 \circ g_0^n) \in \Theta(n)$. The theorem is proved by introducing a new complexity measure, max-conflict complexity, denoted by $\bar{\chi}(\cdot)$. Its investigation shows that $\bar{\chi}(g) \in \Omega(\sqrt{R_{1/3}(g)})$ for any partial Boolean function $g$ and $R_{1/3}(f \circ g^n) \in \Omega(R_{4/9}(f) \cdot \bar{\chi}(g))$ for any relation $f$, which readily implies the composition statement. It is further shown that $\bar{\chi}(g)$ is always at least as large as the sabotage complexity of $g$.
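
    As a concrete reading of the composed problem $f \circ g^n$: the input splits into n blocks of m bits, g is applied to each block, and f is applied to the resulting n-bit string of inner outputs. The sketch below illustrates the composition itself under that reading, with illustrative names; it does not implement the max-conflict-complexity argument.

        def compose(outer_f, inner_g, m):
            """Return the composed function (outer_f o inner_g^n): split the
            input into consecutive blocks of m bits, apply inner_g to each
            block, then apply outer_f to the vector of inner outputs."""
            def composed(bits):
                blocks = [tuple(bits[i:i + m]) for i in range(0, len(bits), m)]
                return outer_f(tuple(inner_g(b) for b in blocks))
            return composed

        # Example: OR composed with 2-bit AND on n = 3 blocks (6 input bits).
        inner_and = lambda b: int(all(b))
        outer_or = lambda xs: int(any(xs))
        or_of_ands = compose(outer_or, inner_and, 2)
        assert or_of_ands((0, 1, 1, 1, 0, 0)) == 1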