Quantum machine learning: a classical perspective
Recently, increased computational power and data availability, as well as
algorithmic advances, have led machine learning techniques to impressive
results in regression, classification, data-generation and reinforcement
learning tasks. Despite these successes, the proximity to the physical limits
of chip fabrication alongside the increasing size of datasets are motivating a
growing number of researchers to explore the possibility of harnessing the
power of quantum computation to speed up classical machine learning algorithms.
Here we review the literature in quantum machine learning and discuss
perspectives for a mixed readership of classical machine learning and quantum
computation experts. Particular emphasis will be placed on clarifying the
limitations of quantum algorithms, how they compare with their best classical
counterparts and why quantum resources are expected to provide advantages for
learning problems. Learning in the presence of noise and certain
computationally hard problems in machine learning are identified as promising
directions for the field. Practical questions, like how to upload classical
data into quantum form, will also be addressed.
Comment: v3, 33 pages; typos corrected and references added.
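To make the practical question of data uploading concrete: one commonly discussed option (not specific to this review) is amplitude encoding, where a classical feature vector is padded to a power-of-two length and L2-normalized so it can serve as the amplitude vector of a multi-qubit state. A minimal NumPy sketch, with an illustrative function name:

```python
import numpy as np

def amplitude_encode(x):
    """Pad x to the next power-of-two length and L2-normalize it,
    so it can be read as the 2**n amplitudes of an n-qubit state."""
    x = np.asarray(x, dtype=float)
    n_qubits = max(1, int(np.ceil(np.log2(len(x)))))
    padded = np.zeros(2 ** n_qubits)
    padded[: len(x)] = x
    norm = np.linalg.norm(padded)
    if norm == 0:
        raise ValueError("cannot encode the zero vector")
    return padded / norm, n_qubits

state, n = amplitude_encode([3.0, 1.0, 2.0])  # 3 features -> 2 qubits
print(n, np.sum(state ** 2))                  # unit norm, as required
```

Note that this only specifies the target state; preparing it on hardware is itself nontrivial, which is part of why the uploading question matters.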
Learning Kernel-Based Halfspaces with the Zero-One Loss
We describe and analyze a new algorithm for agnostically learning
kernel-based halfspaces with respect to the \emph{zero-one} loss function.
Unlike most previous formulations which rely on surrogate convex loss functions
(e.g. hinge-loss in SVM and log-loss in logistic regression), we provide finite
time/sample guarantees with respect to the more natural zero-one loss function.
The proposed algorithm can learn kernel-based halfspaces in worst-case time
$\poly(\exp(L\log(L/\epsilon)))$, for \emph{any} distribution, where $L$ is a
Lipschitz constant (which can be thought of as the reciprocal of the margin),
and the learned classifier is worse than the optimal halfspace by at most
$\epsilon$. We also prove a hardness result, showing that under a certain
cryptographic assumption, no algorithm can learn kernel-based halfspaces in
time polynomial in $L$.
Comment: This is a full version of the paper appearing in the 23rd
International Conference on Learning Theory (COLT 2010). Compared to the
previous arXiv version, this version contains some small corrections in the
proof of Lemma 3 and in the appendix.
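To illustrate the distinction the abstract draws, the sketch below evaluates a kernel-based halfspace under the zero-one loss and under the hinge surrogate used by SVMs. It is illustrative only, not the paper's algorithm; the RBF kernel, `gamma`, and the coefficient vector `alpha` are assumptions for the example.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    # Gaussian (RBF) kernel matrix between two sets of points.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def zero_one_risk(alpha, X_train, X, y, gamma=1.0):
    # Empirical zero-one risk of the kernel halfspace
    # x -> sign(sum_i alpha_i k(x_i, x)) on labeled data (X, y in {-1, +1}).
    scores = rbf_kernel(X, X_train, gamma) @ alpha
    preds = np.where(scores >= 0, 1, -1)
    return float(np.mean(preds != y))

def hinge_risk(alpha, X_train, X, y, gamma=1.0):
    # The convex surrogate minimized by standard SVMs, for comparison.
    scores = rbf_kernel(X, X_train, gamma) @ alpha
    return float(np.mean(np.maximum(0.0, 1.0 - y * scores)))
```

The zero-one risk is the quantity the paper's guarantees are stated in; the surrogate is what convex formulations actually optimize.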
Robustness and Regularization of Support Vector Machines
We consider regularized support vector machines (SVMs) and show that they are
precisely equivalent to a new robust optimization formulation. We show that
this equivalence of robust optimization and regularization has implications for
both algorithms and analysis. In terms of algorithms, the equivalence suggests
more general SVM-like algorithms for classification that explicitly build in
protection against noise and, at the same time, control overfitting. On the
analysis front, the equivalence of robustness and regularization provides a
robust optimization interpretation for the success of regularized SVMs. We use
this new robustness interpretation of SVMs to give a new proof of consistency
of (kernelized) SVMs, thus establishing robustness as the reason regularized
SVMs generalize well.
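The mechanism behind the equivalence can be checked numerically for a single sample: the worst-case hinge loss under an L2-bounded perturbation of the input equals the nominal hinge loss with the margin reduced by c·||w||2, which is where a norm penalty on w enters. A brute-force NumPy sketch of this one-sample identity (an illustration of the mechanism, not the paper's full theorem):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)   # classifier weights
x = rng.normal(size=5)   # one input point
y = 1.0                  # its label
c = 0.3                  # perturbation budget, ||delta||_2 <= c

# Closed form: worst-case hinge loss under ||delta||_2 <= c equals the
# nominal hinge loss with the margin shifted down by c * ||w||_2.
closed_form = max(0.0, 1.0 - y * (w @ x) + c * np.linalg.norm(w))

# Brute force: evaluate the hinge loss at many perturbations on the ball's
# boundary; the maximum approaches the closed form from below.
deltas = rng.normal(size=(100_000, 5))
deltas = c * deltas / np.linalg.norm(deltas, axis=1, keepdims=True)
brute = np.max(np.maximum(0.0, 1.0 - y * ((x + deltas) @ w)))

print(closed_form, brute)
```

Aggregating this worst-case behavior over the training samples is what produces the norm penalty in the equivalent regularized objective.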
Learning probability distributions generated by finite-state machines
We review methods for inference of probability distributions generated by probabilistic automata and related models for sequence generation. We focus on methods that can be proven to learn in the inference-in-the-limit and PAC formal models. The methods we review are state-merging and state-splitting methods for probabilistic deterministic automata and the recently developed spectral method for nondeterministic probabilistic automata. In both cases, we derive them from a high-level algorithm described in terms of the Hankel matrix of the distribution to be learned, given as an oracle, and then describe how to adapt that algorithm to account for the error introduced by a finite sample.
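The Hankel-matrix recipe the abstract refers to admits a compact sketch in the oracle setting, where the relevant Hankel blocks are assumed to be given exactly (a finite sample would replace them with empirical frequencies). The function below follows the standard spectral construction of a weighted automaton; all names and the block layout are illustrative assumptions.

```python
import numpy as np

def spectral_wfa(H, H_sigma, h_prefix, h_suffix, n_states):
    """Recover weighted-automaton parameters from Hankel blocks.

    H        : Hankel matrix, H[u, v] = f(uv) over chosen prefixes u, suffixes v
    H_sigma  : {symbol a: block with entries f(u a v)}
    h_prefix : f(u) over prefixes (the empty-suffix column of H)
    h_suffix : f(v) over suffixes (the empty-prefix row of H)
    """
    _, _, Vt = np.linalg.svd(H, full_matrices=False)
    V = Vt[:n_states].T                    # top right singular vectors
    HV_pinv = np.linalg.pinv(H @ V)
    A = {a: HV_pinv @ Ha @ V for a, Ha in H_sigma.items()}  # one operator per symbol
    alpha = h_suffix @ V                   # initial weight vector
    beta = HV_pinv @ h_prefix              # final weight vector
    return alpha, A, beta

def evaluate(alpha, A, beta, word):
    # f(x1 ... xt) is approximated by alpha . A[x1] ... A[xt] . beta
    v = alpha
    for a in word:
        v = v @ A[a]
    return float(v @ beta)
```

With exact Hankel blocks of the right rank this recovers the target distribution's values; the finite-sample analyses in the reviewed methods bound the error introduced when the blocks are estimated from data.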
- …