7 research outputs found
A Quantum Computational Learning Algorithm
An interesting classical result due to Jackson allows polynomial-time
learning of the function class DNF using membership queries. Since in most
practical learning situations access to a membership oracle is unrealistic,
this paper explores the possibility that quantum computation might allow a
learning algorithm for DNF that relies only on example queries. A natural
extension of Fourier-based learning into the quantum domain is presented. The
algorithm requires only an example oracle, and it runs in O(sqrt(2^n)) time, a
result that appears to be classically impossible. The algorithm is unique among
quantum algorithms in that it does not assume a priori knowledge of a function
and does not operate on a superposition that includes all possible states.
Comment: This is a reworked and improved version of a paper originally
entitled "Quantum Harmonic Sieve: Learning DNF Using a Classical Example
Oracle".
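Fourier-based learning of the sort extended here rests on the fact that each Fourier coefficient of a {-1, +1}-valued Boolean function is an expectation, E_x[f(x)·χ_S(x)], and so can be estimated from uniform random examples alone. A minimal classical sketch of that estimation step (function and parameter names are illustrative, not from the paper):

```python
import random

def chi(S, x):
    """Parity character chi_S(x) = (-1)^(sum of x[i] for i in S)."""
    return -1 if sum(x[i] for i in S) % 2 else 1

def estimate_fourier_coefficient(f, S, n, m, rng=random):
    """Estimate hat{f}(S) = E_x[f(x) * chi_S(x)] for a {-1,+1}-valued f
    on n bits, using m uniform examples as an example oracle would supply."""
    total = 0
    for _ in range(m):
        x = [rng.randrange(2) for _ in range(n)]
        total += f(x) * chi(S, x)
    return total / m
```

For the parity f(x) = (-1)^(x[0]+x[1]) on two bits, the estimate for S = {0, 1} concentrates near 1 while other coefficients concentrate near 0; the quantum contribution in the paper is locating the large coefficients of a DNF without enumerating all 2^n subsets S.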
Specification and Simulation of Statistical Query Algorithms for Efficiency and Noise Tolerance
Abstract: A recent innovation in computational learning theory is the statistical query (SQ) model. The advantage of specifying learning algorithms in this model is that SQ algorithms can be simulated in the probably approximately correct (PAC) model, both in the absence and in the presence of noise. However, simulations of SQ algorithms in the PAC model have non-optimal time and sample complexities. In this paper, we introduce a new method for specifying statistical query algorithms based on a type of relative error and provide simulations in the noise-free and noise-tolerant PAC models which yield more efficient algorithms. Requests for estimates of statistics in this new model take the following form: "Return an estimate of the statistic within a 1±μ factor, or return ⊥, promising that the statistic is less than θ." In addition to showing that this is a very natural language for specifying learning algorithms, we also show that this new specification is polynomially equivalent to standard SQ, and thus, known learnability and hardness results for statistical query learning are preserved. We then give highly efficient PAC simulations of relative error SQ algorithms. We show that the learning algorithms obtained by simulating efficient relative error SQ algorithms both in the absence of noise and in the presence of malicious noise have roughly optimal sample complexity. We also show that the simulation of efficient relative error SQ algorithms in the presence of classification noise yields learning algorithms at least as efficient as those obtained through standard methods, and in some cases improved, roughly optimal results are achieved. The sample complexities for all of these simulations are based on the dν metric, which is a type of relative error metric useful for quantities which are small or even zero. We show that uniform convergence with respect to the dν metric yields "uniform convergence" with respect to (μ, θ) accuracy.
Finally, while we show that many specific learning algorithms can be written as highly efficient relative error SQ algorithms, we also show, in fact, that all SQ algorithms can be written efficiently by proving general upper bounds on the complexity of (μ, θ) queries as a function of the accuracy parameter ε. As a consequence of this result, we give general upper bounds on the complexity of learning algorithms achieved through the use of relative error SQ algorithms and the simulations described above.
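A (μ, θ) query of the kind quoted above can be pictured as an oracle that samples until a multiplicative guarantee is plausible. The sketch below is a simplified sampling-based simulation, not the paper's construction; the sample-size constant and the ⊥ threshold are illustrative (the precise bounds come from multiplicative Chernoff arguments):

```python
import random

BOTTOM = object()  # stands in for the "⊥" response

def relative_error_query(sample, indicator, mu, theta, rng=random):
    """Answer a (mu, theta) statistical query by sampling.

    sample(rng) draws one labeled example and indicator maps it to {0, 1};
    the target statistic is p = E[indicator].  Roughly O(1/(mu^2 * theta))
    draws suffice for a 1 +/- mu multiplicative guarantee when p >= theta.
    """
    m = int(12 / (mu * mu * theta)) + 1
    hits = sum(indicator(sample(rng)) for _ in range(m))
    p_hat = hits / m
    # A small empirical estimate is answered with ⊥, promising that the
    # true statistic is (with high probability) below theta.
    return BOTTOM if p_hat < theta else p_hat
```

The appeal of the relative-error form is visible here: the sample size scales with 1/θ rather than with 1/ε², which is what yields the improved complexities for statistics that are small or zero.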
General Bounds on Statistical Query Learning and PAC Learning with Noise via Hypothesis Boosting
Abstract: We derive general bounds on the complexity of learning in the statistical query (SQ) model and in the PAC model with classification noise. We do so by considering the problem of boosting the accuracy of weak learning algorithms which fall within the SQ model. This new model was introduced by Kearns to provide a general framework for efficient PAC learning in the presence of classification noise. We first show a general scheme for boosting the accuracy of weak SQ learning algorithms, proving that weak SQ learning is equivalent to strong SQ learning. The boosting is efficient and is used to show our main result of the first general upper bounds on the complexity of strong SQ learning. Since all SQ algorithms can be simulated in the PAC model with classification noise, we also obtain general upper bounds on learning in the presence of classification noise for classes which can be learned in the SQ model.
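The simulation of SQ algorithms under classification noise mentioned here relies on the fact that independent label flips shrink label correlations by a known factor: flipping y ∈ {-1, +1} with probability η multiplies E[h(x)·y] by (1 - 2η). A minimal sketch of that standard correction, with illustrative names (not code from the paper):

```python
import random

def corrected_correlation(h, sample_noisy, eta, m, rng=random):
    """Estimate the clean statistic E[h(x) * y] from m examples whose
    {-1,+1} labels were flipped i.i.d. with known rate eta < 1/2.
    The flips scale the expectation by (1 - 2*eta), so divide it out."""
    total = sum(h(x) * y for x, y in (sample_noisy(rng) for _ in range(m)))
    return (total / m) / (1 - 2 * eta)
```

As η approaches 1/2 the divisor vanishes, which is one way to see why sample complexity in classification-noise simulations grows with 1/(1 - 2η).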
Noise tolerant algorithms for learning and searching
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995. Includes bibliographical references (p. 109-112). By Javed Alexander Aslam, Ph.D.