
    Symmetry-assisted adversaries for quantum state generation

    We introduce a new quantum adversary method to prove lower bounds on the query complexity of the quantum state generation problem. This problem encompasses both the computation of partial or total functions and the preparation of target quantum states. There has been hope for quite some time that quantum state generation might be a route to tackle the Graph Isomorphism problem. We show that for the related problem of Index Erasure our method leads to a lower bound of $\Omega(\sqrt{N})$, which matches an upper bound obtained via reduction to quantum search on $N$ elements. This closes an open problem first raised by Shi [FOCS'02]. Our approach is based on two ideas: (i) on the one hand, we generalize the known additive and multiplicative adversary methods to the case of quantum state generation; (ii) on the other hand, we show how the symmetries of the underlying problem can be leveraged for the design of optimal adversary matrices and dramatically simplify the computation of adversary bounds. Taken together, these two ideas give the new result for Index Erasure by using the representation theory of the symmetric group. Also, the method can lead to lower bounds even for small success probability, contrary to the standard adversary method. Furthermore, we answer an open question due to Špalek [CCC'08] by showing that the multiplicative version of the adversary method is stronger than the additive one for any problem. Finally, we prove that the multiplicative bound satisfies a strong direct product theorem, extending a result by Špalek to quantum state generation problems. Comment: 35 pages, 5 figures
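    For orientation, here is the standard additive (spectral) adversary bound for computing a function, i.e., the textbook statement that this abstract generalizes to quantum state generation; it is given as background and is not the paper's generalized version.

```latex
% Standard additive (spectral) adversary bound for a function f -- textbook
% background, not the paper's state-generation generalization.
\[
  Q(f) \;=\; \Omega\!\left( \max_{\Gamma \ne 0}
      \frac{\lVert \Gamma \rVert}{\max_i \lVert \Gamma \circ \Delta_i \rVert} \right),
  \qquad
  \Delta_i[x,y] \;=\; \begin{cases} 1 & \text{if } x_i \ne y_i,\\ 0 & \text{otherwise,} \end{cases}
\]
```

    where $\Gamma$ ranges over nonnegative symmetric adversary matrices indexed by inputs with $\Gamma[x,y] = 0$ whenever $f(x) = f(y)$, and $\circ$ denotes the entrywise (Hadamard) product.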

    Crossing the Logarithmic Barrier for Dynamic Boolean Data Structure Lower Bounds

    This paper proves the first super-logarithmic lower bounds on the cell probe complexity of dynamic boolean (a.k.a. decision) data structure problems, a long-standing milestone in data structure lower bounds. We introduce a new method for proving dynamic cell probe lower bounds and use it to prove a $\tilde{\Omega}(\log^{1.5} n)$ lower bound on the operational time of a wide range of boolean data structure problems, most notably, on the query time of dynamic range counting over $\mathbb{F}_2$ ([Pat07]). Proving an $\omega(\lg n)$ lower bound for this problem was explicitly posed as one of five important open problems in the late Mihai Pătrașcu's obituary [Tho13]. This result also implies the first $\omega(\lg n)$ lower bound for the classical 2D range counting problem, one of the most fundamental data structure problems in computational geometry and spatial databases. We derive similar lower bounds for boolean versions of dynamic polynomial evaluation and 2D rectangle stabbing, and for the (non-boolean) problems of range selection and range median. Our technical centerpiece is a new way of "weakly" simulating dynamic data structures using efficient one-way communication protocols with small advantage over random guessing. This simulation involves a surprising excursion to low-degree (Chebyshev) polynomials which may be of independent interest, and offers an entirely new algorithmic angle on the "cell sampling" method of Panigrahy et al. [PTW10].
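    To make the gap concrete, the textbook upper bound for dynamic 2D range counting is a two-dimensional Fenwick (binary indexed) tree with O(log^2 n) time per operation; the abstract's $\tilde{\Omega}(\log^{1.5} n)$ lower bound should be read against this baseline. The sketch below is that standard construction, specialized to counting over F_2 (parity); it is illustrative only and not code from the paper.

```python
# Minimal 2D Fenwick tree over F_2 (parity of points in a dominance rectangle).
# Standard O(log^2 n) per-operation upper bound for dynamic 2D range counting;
# illustrative baseline only, not taken from the paper.

class Fenwick2D:
    def __init__(self, n: int):
        self.n = n
        self.bit = [[0] * (n + 1) for _ in range(n + 1)]

    def toggle(self, x: int, y: int) -> None:
        """Flip the presence of point (x, y), 1-indexed."""
        i = x
        while i <= self.n:
            j = y
            while j <= self.n:
                self.bit[i][j] ^= 1  # arithmetic over F_2
                j += j & (-j)
            i += i & (-i)

    def parity(self, x: int, y: int) -> int:
        """Parity of the number of points in [1, x] x [1, y]."""
        s, i = 0, x
        while i > 0:
            j = y
            while j > 0:
                s ^= self.bit[i][j]
                j -= j & (-j)
            i -= i & (-i)
        return s


if __name__ == "__main__":
    f = Fenwick2D(8)
    f.toggle(3, 4)
    f.toggle(5, 2)
    print(f.parity(8, 8))  # 0: both points counted, even parity
    print(f.parity(4, 4))  # 1: only (3, 4) lies in [1,4] x [1,4]
```

    Roughly speaking, the paper's result says that for the boolean (F_2) version of this problem, no dynamic data structure can push the query time below $\tilde{\Omega}(\log^{1.5} n)$, whereas the baseline above spends O(log^2 n) per operation.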

    A New Quantum Lower Bound Method, with Applications to Direct Product Theorems and Time-Space Tradeoffs

    We give a new version of the adversary method for proving lower bounds on quantum query algorithms. The new method is based on analyzing the eigenspace structure of the problem at hand. We use it to prove a new and optimal strong direct product theorem for 2-sided error quantum algorithms computing k independent instances of a symmetric Boolean function: if the algorithm uses significantly less than k times the number of queries needed for one instance of the function, then its success probability is exponentially small in k. We also use the polynomial method to prove a direct product theorem for 1-sided error algorithms for k threshold functions with a stronger bound on the success probability. Finally, we present a quantum algorithm for evaluating solutions to systems of linear inequalities, and use our direct product theorems to show that the time-space tradeoff of this algorithm is close to optimal. Comment: 16 pages LaTeX. Version 2: title changed, proofs significantly cleaned up and made self-contained. This version to appear in the proceedings of the STOC 06 conference
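    As a hedged illustration of what "strong direct product theorem" means here, the generic shape of such a statement (with illustrative constants, not the paper's exact parameters) is the following:

```latex
% Schematic form of a strong direct product theorem; the constant alpha and the
% exponent are illustrative, not the paper's exact statement.
\[
  \#\text{queries to } f^{(k)} \;\le\; \alpha\, k\, Q_{1/3}(f)
  \quad\Longrightarrow\quad
  \Pr\bigl[\text{all } k \text{ instances answered correctly}\bigr] \;\le\; 2^{-\Omega(k)},
\]
```

    for a sufficiently small universal constant $\alpha > 0$, where $Q_{1/3}(f)$ is the bounded-error quantum query complexity of a single instance.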

    Properly Learning Decision Trees with Queries Is NP-Hard

    We prove that it is NP-hard to properly PAC learn decision trees with queries, resolving a longstanding open problem in learning theory (Bshouty 1993; Guijarro-Lavin-Raghavan 1999; Mehta-Raghavan 2002; Feldman 2016). While there has been a long line of work, dating back to (Pitt-Valiant 1988), establishing the hardness of properly learning decision trees from random examples, the more challenging setting of query learners necessitates different techniques, and there were no previous lower bounds. En route to our main result, we simplify and strengthen the best known lower bounds for a different problem, Decision Tree Minimization (Zantema-Bodlaender 2000; Sieling 2003). On a technical level, we introduce the notion of hardness distillation, which we study for decision tree complexity but which can be considered for any complexity measure: for a function that requires large decision trees, we give a general method for identifying a small set of inputs that is responsible for its complexity. Our technique even rules out query learners that are allowed constant error. This contrasts with existing lower bounds for the setting of random examples, which only hold for inverse-polynomial error. Our result, taken together with a recent almost-polynomial time query algorithm for properly learning decision trees under the uniform distribution (Blanc-Lange-Qiao-Tan 2022), demonstrates the dramatic impact of distributional assumptions on the problem. Comment: 41 pages, 10 figures, FOCS 2023
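    Since the abstract leans on decision-tree complexity (size) as its underlying measure, a brute-force reference computation may help fix the definition. The sketch below simply evaluates minimum decision-tree size on a truth table (exponential time); it is not the paper's hardness-distillation machinery, and the function names are invented for illustration.

```python
# Brute-force computation of the minimum decision-tree size (number of leaves)
# of a Boolean function given as a truth table. Exponential time; illustrates
# the complexity measure only, not the paper's techniques.

from functools import lru_cache
from itertools import product


def min_decision_tree_size(f, n):
    """f: dict mapping n-bit tuples to 0/1. Returns the minimum number of
    leaves of any decision tree computing f."""

    @lru_cache(maxsize=None)
    def solve(restriction):  # restriction: sorted tuple of (variable, bit) pairs
        fixed = dict(restriction)
        # Values taken by f on inputs consistent with the partial assignment.
        values = {f[x] for x in f if all(x[i] == b for i, b in fixed.items())}
        if len(values) <= 1:
            return 1  # constant subfunction: a single leaf suffices
        best = None
        for i in range(n):
            if i in fixed:
                continue  # re-querying a fixed variable never helps
            cost = sum(
                solve(tuple(sorted(list(restriction) + [(i, b)])))
                for b in (0, 1)
            )
            best = cost if best is None else min(best, cost)
        return best

    return solve(())


if __name__ == "__main__":
    n = 3
    parity = {x: sum(x) % 2 for x in product((0, 1), repeat=n)}
    print(min_decision_tree_size(parity, n))  # parity needs a full tree: 2^n = 8 leaves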

    Approximating the Noise Sensitivity of a Monotone Boolean Function

    The noise sensitivity of a Boolean function f: {0,1}^n -> {0,1} is one of its fundamental properties. For noise parameter delta, the noise sensitivity is denoted as NS_{delta}[f]. This quantity is defined as follows: First, pick x = (x_1,...,x_n) uniformly at random from {0,1}^n, then pick z by flipping each x_i independently with probability delta. NS_{delta}[f] is defined to equal Pr[f(x) != f(z)]. Much of the existing literature on noise sensitivity explores the following two directions: (1) Showing that functions with low noise sensitivity are structured in certain ways. (2) Mathematically showing that certain classes of functions have low noise sensitivity. Combined, these two research directions show that certain classes of functions have low noise sensitivity and therefore have useful structure. The fundamental importance of noise sensitivity, together with this wealth of structural results, motivates the algorithmic question of approximating NS_{delta}[f] given oracle access to the function f. We show that the standard sampling approach is essentially optimal for general Boolean functions. Therefore, we focus on estimating the noise sensitivity of monotone functions, which form an important subclass of Boolean functions, since many functions of interest are either monotone or can be simply transformed into a monotone function (for example, the class of unate functions consists of all the functions that can be made monotone by reorienting some of their coordinates [O'Donnell, 2014]). Specifically, we study the algorithmic problem of approximating NS_{delta}[f] for monotone f, given the promise that NS_{delta}[f] >= 1/n^{C} for constant C, and for delta in the range 1/n <= delta <= 1/2. For such f and delta, we give a randomized algorithm performing O((min(1, sqrt{n} delta log^{1.5} n) / NS_{delta}[f]) * poly(1/epsilon)) queries and approximating NS_{delta}[f] to within a multiplicative factor of (1 +/- epsilon). Given the same constraints on f and delta, we also prove a lower bound of Omega(min(1, sqrt{n} delta) / (NS_{delta}[f] * n^{xi})) on the query complexity of any algorithm that approximates NS_{delta}[f] to within any constant factor, where xi can be any positive constant. Thus, our algorithm's query complexity is close to optimal in terms of its dependence on n. We introduce a novel descending-ascending view of noise sensitivity, and use it as a central tool for the analysis of our algorithm. To prove lower bounds on query complexity, we develop a technique that reduces computational questions about query complexity to combinatorial questions about the existence of "thin" functions with certain properties. The existence of such "thin" functions is proved using the probabilistic method. These techniques also yield new lower bounds on the query complexity of approximating other fundamental properties of Boolean functions: the total influence and the bias.
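    The "standard sampling approach" the abstract refers to can be made concrete with a straightforward Monte Carlo estimator of NS_{delta}[f]; the sketch below is that naive baseline, not the paper's descending-ascending algorithm, and the example function is an arbitrary monotone choice.

```python
# Naive Monte Carlo estimator of the noise sensitivity NS_delta[f]:
# sample x uniformly, flip each bit independently with probability delta,
# report the empirical frequency of f(x) != f(z). This is the standard
# sampling baseline, not the paper's query-efficient monotone algorithm.

import random


def noise_sensitivity_mc(f, n, delta, samples=100_000, rng=None):
    rng = rng or random.Random(0)
    disagreements = 0
    for _ in range(samples):
        x = [rng.randint(0, 1) for _ in range(n)]
        z = [b ^ (1 if rng.random() < delta else 0) for b in x]
        if f(x) != f(z):
            disagreements += 1
    return disagreements / samples


if __name__ == "__main__":
    n = 15
    majority = lambda x: int(sum(x) * 2 > n)  # a monotone example function
    print(noise_sensitivity_mc(majority, n, delta=0.1))
```

    Each sample costs two queries, and standard Chernoff bounds give a (1 +/- epsilon)-approximation after roughly 1/(epsilon^2 * NS_{delta}[f]) samples; this is the query budget the paper's monotone-function algorithm improves on when sqrt{n} delta log^{1.5} n is small.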

    Lower Bounds on Quantum Query and Learning Graph Complexities

    In this thesis we study the power of quantum query algorithms and learning graphs; the latter essentially being very specialized quantum query algorithms themselves. We almost exclusively focus on proving lower bounds for these computational models. First, we study lower bounds on learning graph complexity. We consider two types of learning graphs: adaptive and, more restricted, non-adaptive learning graphs. We express both adaptive and non-adaptive learning graph complexities of Boolean-valued functions (i.e., decision problems) as semidefinite minimization problems, and derive their dual problems. For various functions, we construct feasible solutions to these dual problems, thereby obtaining lower bounds on the learning graph complexity of the functions. Most notably, we prove an almost optimal Omega(n^(9/7)/sqrt(log n)) lower bound on the non-adaptive learning graph complexity of the Triangle problem. We also prove an Omega(n^(1-2^(k-2)/(2^k-1))) lower bound on the adaptive learning graph complexity of the k-Distinctness problem, which matches the complexity of the best known quantum query algorithm for this problem. Second, we construct optimal adversary lower bounds for various decision problems. Our main procedure for constructing them is to embed the adversary matrix into a larger matrix whose properties are easier to analyze. This embedding procedure imposes certain requirements on the size of the input alphabet. We prove optimal Omega(n^(1/3)) adversary lower bounds for the Collision and Set Equality problems, provided that the alphabet size is at least Omega(n^2). An optimal lower bound for Collision was previously proven using the polynomial method, while our lower bound for Set Equality is new. (An optimal lower bound for Set Equality was also independently and at about the same time proven by Zhandry using the polynomial method [arXiv, 2013].) We compare the power of non-adaptive learning graphs and quantum query algorithms that only utilize knowledge of the possible positions of certificates in the input string. To do that, we introduce the notion of a certificate structure of a decision problem. Using the adversary method and the dual formulation of the learning graph complexity, we show that, for every certificate structure, there exists a decision problem possessing this certificate structure such that its non-adaptive learning graph and quantum query complexities differ by at most a constant multiplicative factor. For a special case of certificate structures, we construct a relatively general class of problems having this property. This construction generalizes the adversary lower bound for the k-Sum problem derived recently by Belovs and Špalek [ACM ITCS, 2013]. We also construct an optimal Omega(n^(2/3)) adversary lower bound for the Element Distinctness problem with minimal non-trivial alphabet size, which equals the length of the input. Due to the strict requirement on the alphabet size, here we cannot use the embedding procedure, and the construction of the adversary matrix heavily relies on the representation theory of the symmetric group. While an optimal lower bound for Element Distinctness using the polynomial method had been proven for any input alphabet, an optimal adversary construction was previously only known for alphabets of size at least Omega(n^2). Finally, we introduce the Enhanced Find-Two problem and study its query complexity. The Enhanced Find-Two problem is the following: given n elements such that exactly k of them are marked, find two distinct marked elements using the following resources: (1) one initial copy of the uniform superposition over all marked elements, (2) an oracle that reflects across this superposition, and (3) an oracle that tests if an element is marked. This relational problem arises in the study of quantum proofs of knowledge. We prove that its query complexity is Theta(min{sqrt(n/k), sqrt(k)}).
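    As a toy illustration of what "constructing an adversary matrix" involves (for a far simpler problem than Collision or Element Distinctness), the sketch below numerically evaluates the spectral ratio ||Gamma|| / max_i ||Gamma o Delta_i|| for the standard adversary matrix certifying the Omega(sqrt(n)) bound for OR_n. It is purely illustrative and not the thesis' representation-theoretic constructions.

```python
# Toy evaluation of the spectral adversary ratio ||Gamma|| / max_i ||Gamma o Delta_i||
# for OR_n, using the standard adversary matrix that pairs the all-zeros input with
# each weight-1 input. Illustrative only; not the thesis' constructions.

import numpy as np


def adversary_ratio_for_or(n):
    # Inputs: index 0 is 0^n, index i (1..n) is the weight-1 string e_i.
    inputs = [np.zeros(n, dtype=int)]
    for i in range(n):
        e = np.zeros(n, dtype=int)
        e[i] = 1
        inputs.append(e)

    m = n + 1
    gamma = np.zeros((m, m))
    for i in range(1, m):  # Gamma[x, y] = 1 exactly when {x, y} = {0^n, e_i}
        gamma[0, i] = gamma[i, 0] = 1.0

    spectral = lambda a: np.linalg.norm(a, 2)  # largest singular value
    num = spectral(gamma)
    den = 0.0
    for j in range(n):  # Delta_j keeps only pairs that differ in coordinate j
        delta_j = np.array([[float(x[j] != y[j]) for y in inputs] for x in inputs])
        den = max(den, spectral(gamma * delta_j))
    return num / den


if __name__ == "__main__":
    for n in (4, 9, 16):
        print(n, adversary_ratio_for_or(n))  # ratio equals sqrt(n)
```

    Here the numerator is sqrt(n) (the star graph's spectral norm) while each masked matrix has norm 1, reproducing the familiar Omega(sqrt(n)) Grover lower bound; the thesis' constructions play the same game for much larger, highly symmetric matrices.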

    Lower Bounds on Quantum Query Complexity

    Shor's and Grover's famous quantum algorithms for factoring and searching show that quantum computers can solve certain computational problems significantly faster than any classical computer. We discuss here what quantum computers cannot do, and specifically how to prove limits on their computational power. We cover the main known techniques for proving lower bounds, and exemplify and compare the methods. Comment: survey, 23 pages
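    One family of techniques such surveys cover is the polynomial method; as a hedged pointer to the flavor of these bounds (standard facts due to Beals et al., not a claim about this survey's exact statements):

```latex
% Standard polynomial-method lower bounds: exact and bounded-error quantum query
% complexity versus (approximate) polynomial degree.
\[
  Q_E(f) \;\ge\; \frac{\deg(f)}{2},
  \qquad
  Q_2(f) \;\ge\; \frac{\widetilde{\deg}(f)}{2},
\]
```

    where $\deg(f)$ is the degree of the unique multilinear polynomial representing the Boolean function $f$, and $\widetilde{\deg}(f)$ is the minimum degree of a polynomial approximating $f$ pointwise to within $1/3$.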