
    A Survey of Quantum Learning Theory

    This paper surveys quantum learning theory: the theoretical aspects of machine learning using quantum computers. We describe the main results known for three models of learning: exact learning from membership queries, and Probably Approximately Correct (PAC) and agnostic learning from classical or quantum examples.
    Comment: 26 pages LaTeX. v2: many small changes to improve the presentation. This version will appear as Complexity Theory Column in SIGACT News in June 2017. v3: fixed a small ambiguity in the definition of gamma(C) and updated a reference.
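
    As a concrete reference for the classical PAC model the survey builds on, here is a minimal sketch (Python, not from the survey): a learner draws i.i.d. labeled examples and outputs a hypothesis that, with high probability, errs on at most an $\epsilon$-fraction of the distribution. The textbook learner for monotone conjunctions is used purely to make the model concrete.

```python
import numpy as np

def pac_learn_conjunction(examples, n):
    """Textbook PAC learner for monotone conjunctions over {0,1}^n.

    Start from the conjunction of all n variables; every positive
    example rules out the variables it sets to 0.  Shown only to
    illustrate the 'learning from classical examples' model.
    """
    relevant = set(range(n))                      # hypothesis: AND of these
    for x, label in examples:
        if label == 1:                            # positive example
            relevant -= {i for i in range(n) if x[i] == 0}
    return lambda x: int(all(x[i] == 1 for i in relevant))

# Usage sketch: i.i.d. uniform examples labeled by a hidden
# conjunction x_1 AND x_3 (unknown to the learner).
rng = np.random.default_rng(0)
n = 8
sample = [(x, int(x[1] == 1 and x[3] == 1))
          for x in rng.integers(0, 2, size=(200, n))]
h = pac_learn_conjunction(sample, n)
```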

    Quantum Algorithms for Learning and Testing Juntas

    In this article we develop quantum algorithms for learning and testing juntas, i.e. Boolean functions which depend only on an unknown set of k out of n input variables. Our aim is to develop efficient algorithms:
    - whose sample complexity has no dependence on n, the dimension of the domain the Boolean functions are defined over;
    - with no access to any classical or quantum membership ("black-box") queries; instead, our algorithms use only classical examples generated uniformly at random and fixed quantum superpositions of such classical examples;
    - which require only a few quantum examples but possibly many classical random examples (which are considered quite "cheap" relative to quantum examples).
    Our quantum algorithms are based on a subroutine FS which enables sampling according to the Fourier spectrum of f; the FS subroutine was used in earlier work of Bshouty and Jackson on quantum learning. Our results are as follows:
    - We give an algorithm for testing $k$-juntas to accuracy $\epsilon$ that uses $O(k/\epsilon)$ quantum examples. This improves on the number of examples used by the best known classical algorithm.
    - We establish the following lower bound: any FS-based $k$-junta testing algorithm requires $\Omega(\sqrt{k})$ queries.
    - We give an algorithm for learning $k$-juntas to accuracy $\epsilon$ that uses $O(\epsilon^{-1} k\log k)$ quantum examples and $O(2^k \log(1/\epsilon))$ random examples. We show that this learning algorithm is close to optimal by giving a related lower bound.
    Comment: 15 pages, 1 figure. Uses synttree package. To appear in Quantum Information Processing.
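
    Not from the paper, but a useful mental model: a brute-force classical simulation (Python/numpy) of what the FS subroutine provides. For a $\pm 1$-valued f, quantum Fourier sampling returns a set S with probability $\hat{f}(S)^2$; the simplified union-of-supports junta check below is in the spirit of the paper's tester, not its exact algorithm.

```python
import numpy as np

def fourier_sample_distribution(f_table):
    """Brute-force classical stand-in for the FS subroutine.

    f_table: length-2^n numpy array with entries in {-1, +1}, where
    index x is read as an n-bit string.  Returns the distribution that
    quantum Fourier sampling produces: Pr[S] = fhat(S)^2, with the set
    S encoded as an n-bit mask.  By Parseval these probabilities sum
    to 1 for any +-1-valued f.
    """
    N = len(f_table)
    coeffs = np.empty(N)
    for s in range(N):
        # fhat(S) = 2^{-n} * sum_x f(x) * (-1)^{|S AND x|}
        signs = [(-1) ** bin(s & x).count("1") for x in range(N)]
        coeffs[s] = np.dot(f_table, signs) / N
    return coeffs ** 2

def junta_test_sketch(f_table, k, num_samples, rng):
    """Simplified tester in the spirit of the paper: draw Fourier
    samples and accept iff they touch at most k distinct variables."""
    probs = fourier_sample_distribution(f_table)
    seen = 0                                  # bitmask of variables seen
    for s in rng.choice(len(probs), size=num_samples, p=probs):
        seen |= int(s)
    return bin(seen).count("1") <= k

# Usage sketch: f = x_0 XOR x_2 on 4 variables is a 2-junta.
rng = np.random.default_rng(1)
xs = np.arange(16)
f = np.where(((xs & 1) ^ ((xs >> 2) & 1)) == 1, -1, 1)
print(junta_test_sketch(f, k=2, num_samples=30, rng=rng))  # True
```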

    Two new results about quantum exact learning

    We present two new results about exact learning by quantum computers. First, we show how to exactly learn a $k$-Fourier-sparse $n$-bit Boolean function from $O(k^{1.5}(\log k)^2)$ uniform quantum examples for that function. This improves over the bound of $\widetilde{\Theta}(kn)$ uniformly random classical examples (Haviv and Regev, CCC'15). Our main tool is an improvement of Chang's lemma for the special case of sparse functions. Second, we show that if a concept class $\mathcal{C}$ can be exactly learned using $Q$ quantum membership queries, then it can also be learned using $O\left(\frac{Q^2}{\log Q}\log|\mathcal{C}|\right)$ classical membership queries. This improves the previous-best simulation result (Servedio and Gortler, SICOMP'04) by a $\log Q$-factor.
    Comment: v3: 21 pages. Small corrections and clarifications.
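
    To make "$k$-Fourier-sparse" concrete, here is a brute-force classical baseline (Python/numpy, not the paper's learner): given the entire truth table, one Walsh-Hadamard transform exposes the k nonzero coefficients that determine f. The paper's point is recovering this support from only $O(k^{1.5}(\log k)^2)$ uniform quantum examples rather than all $2^n$ values.

```python
import numpy as np

def fourier_support(f_table, tol=1e-9):
    """Return the Fourier support of a +-1-valued f via an in-place
    fast Walsh-Hadamard transform over the full truth table.  Only a
    baseline to illustrate what the learner must recover."""
    coeffs = f_table.astype(float)            # work on a float copy
    h = 1
    while h < len(coeffs):                    # butterfly stages
        for i in range(0, len(coeffs), 2 * h):
            for j in range(i, i + h):
                a, b = coeffs[j], coeffs[j + h]
                coeffs[j], coeffs[j + h] = a + b, a - b
        h *= 2
    coeffs /= len(coeffs)                     # normalize to fhat(S)
    return {s for s, c in enumerate(coeffs) if abs(c) > tol}

# Usage sketch: f(x) = (-1)^{x_0} has a single Fourier coefficient.
xs = np.arange(8)
f = np.where((xs & 1) == 1, -1.0, 1.0)
print(fourier_support(f))                     # {1}, i.e. S = {x_0}
```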

    Top-Down Induction of Decision Trees: Rigorous Guarantees and Inherent Limitations

    Consider the following heuristic for building a decision tree for a function $f : \{0,1\}^n \to \{\pm 1\}$. Place the most influential variable $x_i$ of $f$ at the root, and recurse on the subfunctions $f_{x_i=0}$ and $f_{x_i=1}$ on the left and right subtrees respectively; terminate once the tree is an $\varepsilon$-approximation of $f$. We analyze the quality of this heuristic, obtaining near-matching upper and lower bounds:
    $\circ$ Upper bound: For every $f$ with decision tree size $s$ and every $\varepsilon \in (0,\frac{1}{2})$, this heuristic builds a decision tree of size at most $s^{O(\log(s/\varepsilon)\log(1/\varepsilon))}$.
    $\circ$ Lower bound: For every $\varepsilon \in (0,\frac{1}{2})$ and $s \le 2^{\tilde{O}(\sqrt{n})}$, there is an $f$ with decision tree size $s$ such that this heuristic builds a decision tree of size $s^{\tilde{\Omega}(\log s)}$.
    We also obtain upper and lower bounds for monotone functions: $s^{O(\sqrt{\log s}/\varepsilon)}$ and $s^{\tilde{\Omega}(\sqrt[4]{\log s})}$ respectively. The lower bound disproves conjectures of Fiat and Pechyony (2004) and Lee (2009). Our upper bounds yield new algorithms for properly learning decision trees under the uniform distribution. We show that these algorithms, which are motivated by widely employed and empirically successful top-down decision tree learning heuristics such as ID3, C4.5, and CART, achieve provable guarantees that compare favorably with those of the current fastest algorithm (Ehrenfeucht and Haussler, 1989). Our lower bounds shed new light on the limitations of these heuristics. Finally, we revisit the classic work of Ehrenfeucht and Haussler. We extend it to give the first uniform-distribution proper learning algorithm that achieves polynomial sample and memory complexity, while matching its state-of-the-art quasipolynomial runtime.
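
    The heuristic itself is short enough to state in code. Below is a minimal Python/numpy sketch of it over explicit truth tables; as a simplification (our choice, not the paper's exact stopping rule) each branch stops as soon as its subfunction is $\varepsilon$-close to a constant.

```python
import numpy as np

def influence(f_table, i):
    """Inf_i(f) = Pr_x[f(x) != f(x with bit i flipped)], x uniform."""
    xs = np.arange(len(f_table))
    return np.mean(f_table != f_table[xs ^ (1 << i)])

def restrict(f_table, i, b):
    """Truth table of the subfunction f_{x_i = b}, kept at full length
    so indices stay n-bit strings (bit i just goes dead)."""
    xs = np.arange(len(f_table))
    return f_table[(xs & ~(1 << i)) | (b << i)]

def top_down_tree(f_table, free, eps):
    """The heuristic from the abstract: split on the most influential
    free variable and recurse on both restrictions; stop a branch once
    its subfunction is eps-close to a constant (a simplification)."""
    majority = 1 if np.mean(f_table == 1) >= 0.5 else -1
    if np.mean(f_table != majority) <= eps or not free:
        return majority                               # leaf label
    i = max(free, key=lambda j: influence(f_table, j))
    return (i,
            top_down_tree(restrict(f_table, i, 0), free - {i}, eps),
            top_down_tree(restrict(f_table, i, 1), free - {i}, eps))

# Usage sketch on f(x) = x_0 XOR (x_1 AND x_2) over {0,1}^3, +-1-valued.
xs = np.arange(8)
f = np.where(((xs & 1) ^ ((xs >> 1) & (xs >> 2) & 1)) == 1, -1, 1)
print(top_down_tree(f, free={0, 1, 2}, eps=0.05))
```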