    Depth-Independent Lower bounds on the Communication Complexity of Read-Once Boolean Formulas

    We show lower bounds of $\Omega(\sqrt{n})$ and $\Omega(n^{1/4})$ on the randomized and quantum communication complexity, respectively, of all $n$-variable read-once Boolean formulas. Our results complement the recent lower bounds of $\Omega(n/8^d)$ by Leonardos and Saks and of $\Omega(n/2^{\Omega(d\log d)})$ by Jayram, Kopparty and Raghavendra for the randomized communication complexity of read-once Boolean formulas of depth $d$. We obtain our result by "embedding" either the Disjointness problem or its complement in any given read-once Boolean formula. Comment: 5 pages

    Data Structures in Classical and Quantum Computing

    This survey summarizes several results about quantum computing related to (mostly static) data structures. First, we describe classical data structures for the set membership and the predecessor search problems: perfect hash tables for set membership by Fredman, Komlós and Szemerédi, and a data structure by Beame and Fich for predecessor search. We also prove results about their space complexity (how many bits are required) and time complexity (how many bits have to be read to answer a query). After that, we turn our attention to classical data structures with quantum access. In the quantum access model, data is stored in classical bits, but these bits can be accessed in a quantum way: we may read several bits in superposition for unit cost. We give proofs for lower bounds in this setting showing that the classical data structures from the first section are, in some sense, asymptotically optimal, even in the quantum model. In fact, these proofs are simpler and give stronger results than previous proofs for the classical model of computation. The lower bound for set membership was proved by Radhakrishnan, Sen and Venkatesh, and the result for the predecessor problem by Sen and Venkatesh. Finally, we examine fully quantum data structures. Instead of encoding the data in classical bits, we now encode it in qubits, and we allow any unitary operation or measurement in order to answer queries. We describe one data structure by de Wolf for the set membership problem and also a general framework using fully quantum data structures in quantum walks by Jeffery, Kothari and Magniez.
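    To make the first of these data structures concrete, here is a minimal Python sketch of a two-level perfect hash table for static set membership, in the spirit of the Fredman-Komlós-Szemerédi construction described in the survey. The class name, the hash family, and the retry threshold are illustrative choices for small inputs, not details taken from the survey.

```python
import random

class FKSTable:
    """Static two-level (FKS-style) perfect hash table: O(1) probes per query,
    O(n) words of space. A sketch assuming integer keys drawn from [0, U)."""

    def __init__(self, keys, universe_size):
        self.n = len(keys)
        self.p = self._prime_above(max(universe_size, 2))
        # First level: hash the n keys into n buckets, retrying until the sum
        # of squared bucket sizes is O(n) (expected constant number of retries).
        while True:
            self.a = random.randrange(1, self.p)
            buckets = [[] for _ in range(self.n)]
            for k in keys:
                buckets[(self.a * k) % self.p % self.n].append(k)
            if sum(len(b) ** 2 for b in buckets) <= 4 * self.n:
                break
        # Second level: a bucket of size s gets its own table of size s^2 with
        # a collision-free hash function, found by retrying.
        self.tables = [self._build_bucket(b) for b in buckets]

    def _build_bucket(self, bucket):
        if not bucket:
            return (1, [])
        m = len(bucket) ** 2
        while True:
            a = random.randrange(1, self.p)
            slots = [None] * m
            if all(slots.__setitem__((a * k) % self.p % m, k) is None
                   and slots.count(None) == m - i - 1
                   for i, k in enumerate(bucket)):
                return (a, slots)

    def _prime_above(self, n):
        # Smallest prime >= n; slow but fine for a sketch.
        def is_prime(x):
            return x > 1 and all(x % d for d in range(2, int(x ** 0.5) + 1))
        while not is_prime(n):
            n += 1
        return n

    def contains(self, key):
        a2, slots = self.tables[(self.a * key) % self.p % self.n]
        if not slots:
            return False
        return slots[(a2 * key) % self.p % len(slots)] == key


# Usage: build once, then answer membership queries with a constant number of reads.
t = FKSTable([3, 17, 42, 99], universe_size=128)
print(t.contains(42), t.contains(5))  # True False
```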

    Fourier Growth of Parity Decision Trees

    We prove that for every parity decision tree of depth $d$ on $n$ variables, the sum of absolute values of Fourier coefficients at level $\ell$ is at most $d^{\ell/2} \cdot O(\ell \cdot \log(n))^{\ell}$. Our result is nearly tight for small values of $\ell$ and extends a previous Fourier bound for standard decision trees by Sherstov, Storozhenko, and Wu (STOC, 2021). As an application of our Fourier bounds, using the results of Bansal and Sinha (STOC, 2021), we show that the $k$-fold Forrelation problem has (randomized) parity decision tree complexity $\tilde{\Omega}(n^{1-1/k})$, while having quantum query complexity $\lceil k/2 \rceil$. Our proof follows a random-walk approach, analyzing the contribution of a random path in the decision tree to the level-$\ell$ Fourier expression. To carry out the argument, we apply a careful cleanup procedure to the parity decision tree, ensuring that the value of the random walk is bounded with high probability. We observe that step sizes for the level-$\ell$ walks can be computed from the intermediate values of the level-$(\ell-1)$ walks, which calls for an inductive argument. Our approach differs from previous proofs of Tal (FOCS, 2020) and Sherstov, Storozhenko, and Wu (STOC, 2021) that relied on decompositions of the tree. In particular, for the special case of standard decision trees we view our proof as slightly simpler and more intuitive. In addition, we prove a similar bound for noisy decision trees of cost at most $d$, a model that was recently introduced by Ben-David and Blais (FOCS, 2020).
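    For a concrete view of the quantity being bounded, the following brute-force Python sketch computes the level-$\ell$ Fourier mass, i.e. the sum of $|\hat f(S)|$ over all sets $S$ of size $\ell$, for a small Boolean function. The function names and the toy example are illustrative and not part of the paper.

```python
from itertools import combinations, product

def fourier_level_mass(f, n, level):
    """Sum of |f_hat(S)| over all S with |S| = level, for f: {0,1}^n -> {-1,+1}.
    Brute force over all 2^n inputs; only meant to make the bounded quantity
    concrete for small n."""
    total = 0.0
    for S in combinations(range(n), level):
        # f_hat(S) = E_x[ f(x) * chi_S(x) ], where chi_S(x) = (-1)^{sum_{i in S} x_i}
        coeff = 0.0
        for x in product((0, 1), repeat=n):
            chi = (-1) ** sum(x[i] for i in S)
            coeff += f(x) * chi
        total += abs(coeff / 2 ** n)
    return total

# Example: the parity of the first two bits; all Fourier mass sits on the
# single level-2 coefficient {x_0, x_1}, so the level-2 mass equals 1.
xor2 = lambda x: -1 if (x[0] ^ x[1]) else 1
print(fourier_level_mass(xor2, n=3, level=2))  # 1.0
```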

    A composition theorem for parity kill number

    In this work, we study the parity complexity measures $\mathsf{C}^{\oplus}_{\min}[f]$ and $\mathsf{DT^{\oplus}}[f]$. $\mathsf{C}^{\oplus}_{\min}[f]$ is the parity kill number of $f$: the minimum number of parities of the input variables one has to fix in order to "kill" $f$, i.e. to make it constant. $\mathsf{DT^{\oplus}}[f]$ is the depth of the shallowest parity decision tree which computes $f$. These complexity measures have in recent years become increasingly important in the fields of communication complexity [ZS09, MO09, ZS10, TWXZ13] and pseudorandomness [BK12, Sha11, CT13]. Our main result is a composition theorem for $\mathsf{C}^{\oplus}_{\min}$. The $k$-th power of $f$, denoted $f^{\circ k}$, is the function which results from composing $f$ with itself $k$ times. We prove that if $f$ is not a parity function, then $\mathsf{C}^{\oplus}_{\min}[f^{\circ k}] \geq \Omega(\mathsf{C}_{\min}[f]^{k})$. In other words, the parity kill number of $f$ is essentially supermultiplicative in the normal kill number of $f$ (also known as the minimum certificate complexity). As an application of our composition theorem, we show lower bounds on the parity complexity measures of $\mathsf{Sort}^{\circ k}$ and $\mathsf{HI}^{\circ k}$. Here $\mathsf{Sort}$ is the sort function due to Ambainis [Amb06], and $\mathsf{HI}$ is Kushilevitz's hemi-icosahedron function [NW95]. In doing so, we disprove a conjecture of Montanaro and Osborne [MO09] which had applications to communication complexity and computational learning theory. In addition, we give new lower bounds for conjectures of [MO09, ZS10] and [TWXZ13].
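    As a small illustration of the "normal" kill number on the right-hand side of the composition theorem, here is a brute-force Python sketch that computes $\mathsf{C}_{\min}[f]$, the fewest variables one must fix to make $f$ constant. Computing the parity kill number $\mathsf{C}^{\oplus}_{\min}$ would additionally require searching over restrictions to affine subspaces, which is omitted here; the function names and the example are illustrative, not from the paper.

```python
from itertools import combinations, product

def min_kill_number(f, n):
    """Brute-force C_min[f]: the fewest variables one must fix (to 0/1 values)
    to make f constant. Exponential time; only an illustration for small n."""
    for k in range(n + 1):
        for vars_to_fix in combinations(range(n), k):
            for assignment in product((0, 1), repeat=k):
                fixed = dict(zip(vars_to_fix, assignment))
                free = [i for i in range(n) if i not in fixed]
                values = set()
                for bits in product((0, 1), repeat=len(free)):
                    x = [0] * n
                    for i, b in {**fixed, **dict(zip(free, bits))}.items():
                        x[i] = b
                    values.add(f(tuple(x)))
                if len(values) == 1:      # f is constant on this subcube
                    return k
    return n

# Example: AND of 3 bits is killed by fixing any single variable to 0.
and3 = lambda x: int(all(x))
print(min_kill_number(and3, 3))  # prints 1
```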

    Separations in Query Complexity Based on Pointer Functions

    In 1986, Saks and Wigderson conjectured that the largest separation between deterministic and zero-error randomized query complexity for a total Boolean function is given by the function $f$ on $n=2^k$ bits defined by a complete binary tree of NAND gates of depth $k$, which achieves $R_0(f) = O(D(f)^{0.7537\ldots})$. We show this is false by giving an example of a total Boolean function $f$ on $n$ bits whose deterministic query complexity is $\Omega(n/\log(n))$ while its zero-error randomized query complexity is $\tilde O(\sqrt{n})$. We further show that the quantum query complexity of the same function is $\tilde O(n^{1/4})$, giving the first example of a total function with a super-quadratic gap between its quantum and deterministic query complexities. We also construct a total Boolean function $g$ on $n$ variables that has zero-error randomized query complexity $\Omega(n/\log(n))$ and bounded-error randomized query complexity $R(g) = \tilde O(\sqrt{n})$. This is the first super-linear separation between these two complexity measures. The exact quantum query complexity of the same function is $Q_E(g) = \tilde O(\sqrt{n})$. These two functions show that the relations $D(f) = O(R_1(f)^2)$ and $R_0(f) = \tilde O(R(f)^2)$ are optimal, up to poly-logarithmic factors. Further variations of these functions give additional separations between other query complexity measures: a cubic separation between $Q$ and $R_0$, a $3/2$-power separation between $Q_E$ and $R$, and a 4th-power separation between approximate degree and bounded-error randomized query complexity. All of these examples are variants of a function recently introduced by Göös, Pitassi, and Watson, which they used to separate the unambiguous 1-certificate complexity from deterministic query complexity and to resolve the famous Clique versus Independent Set problem in communication complexity. Comment: 25 pages, 6 figures. Version 3 improves the separation between $Q_E$ and $R_0$ and updates references

    Top-Down Induction of Decision Trees: Rigorous Guarantees and Inherent Limitations

    Consider the following heuristic for building a decision tree for a function $f : \{0,1\}^n \to \{\pm 1\}$. Place the most influential variable $x_i$ of $f$ at the root, and recurse on the subfunctions $f_{x_i=0}$ and $f_{x_i=1}$ on the left and right subtrees respectively; terminate once the tree is an $\varepsilon$-approximation of $f$. We analyze the quality of this heuristic, obtaining near-matching upper and lower bounds. Upper bound: for every $f$ with decision tree size $s$ and every $\varepsilon \in (0,\frac{1}{2})$, this heuristic builds a decision tree of size at most $s^{O(\log(s/\varepsilon)\log(1/\varepsilon))}$. Lower bound: for every $\varepsilon \in (0,\frac{1}{2})$ and $s \le 2^{\tilde{O}(\sqrt{n})}$, there is an $f$ with decision tree size $s$ such that this heuristic builds a decision tree of size $s^{\tilde{\Omega}(\log s)}$. We also obtain upper and lower bounds for monotone functions: $s^{O(\sqrt{\log s}/\varepsilon)}$ and $s^{\tilde{\Omega}(\sqrt[4]{\log s})}$ respectively. The lower bound disproves conjectures of Fiat and Pechyony (2004) and Lee (2009). Our upper bounds yield new algorithms for properly learning decision trees under the uniform distribution. We show that these algorithms, which are motivated by widely employed and empirically successful top-down decision tree learning heuristics such as ID3, C4.5, and CART, achieve provable guarantees that compare favorably with those of the current fastest algorithm (Ehrenfeucht and Haussler, 1989). Our lower bounds shed new light on the limitations of these heuristics. Finally, we revisit the classic work of Ehrenfeucht and Haussler. We extend it to give the first uniform-distribution proper learning algorithm that achieves polynomial sample and memory complexity, while matching its state-of-the-art quasipolynomial runtime.
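    The heuristic in the opening sentences is simple enough to state in code. Below is a brute-force Python sketch under one simplifying assumption: instead of the global stopping rule ("terminate once the tree is an $\varepsilon$-approximation of $f$"), each branch stops as soon as its subfunction is $\varepsilon$-close to a constant. The names, the truth-table representation, and the example are illustrative choices, not details from the paper.

```python
from itertools import product

def influence(f, inputs, i):
    """Fraction of the given inputs on which flipping bit i changes f."""
    flips = 0
    for x in inputs:
        y = list(x)
        y[i] ^= 1
        if f(x) != f(tuple(y)):
            flips += 1
    return flips / len(inputs)

def build_top_down(f, n, eps, fixed=None):
    """Top-down heuristic: split on the most influential free variable and
    recurse; a branch becomes a leaf once its subfunction is eps-close to a
    constant (a per-branch simplification of the global stopping rule).
    Brute force over truth tables, so only usable for small n."""
    fixed = fixed or {}
    free = [i for i in range(n) if i not in fixed]
    # Enumerate all inputs consistent with the variables fixed so far.
    inputs = []
    for bits in product((0, 1), repeat=len(free)):
        x = [0] * n
        for i, b in fixed.items():
            x[i] = b
        for i, b in zip(free, bits):
            x[i] = b
        inputs.append(tuple(x))
    values = [f(x) for x in inputs]
    majority = 1 if sum(v == 1 for v in values) >= len(values) / 2 else -1
    error = sum(v != majority for v in values) / len(values)
    if error <= eps or not free:
        return majority                      # leaf labeled with the majority value
    # Internal node: split on the most influential free variable.
    best = max(free, key=lambda i: influence(f, inputs, i))
    return (best,
            build_top_down(f, n, eps, {**fixed, best: 0}),
            build_top_down(f, n, eps, {**fixed, best: 1}))

# Example: majority of 3 bits is recovered exactly with eps = 0.
maj3 = lambda x: 1 if sum(x) >= 2 else -1
print(build_top_down(maj3, n=3, eps=0.0))
```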