Crowdsourced PAC Learning under Classification Noise
In this paper, we analyze PAC learnability from labels produced by
crowdsourcing. In our setting, unlabeled examples are drawn from a distribution
and labels are crowdsourced from workers who operate under classification
noise, each with their own noise parameter. We develop an end-to-end
crowdsourced PAC learning algorithm that takes unlabeled data points as input
and outputs a trained classifier. Our three-step algorithm incorporates
majority voting, pure-exploration bandits, and noisy-PAC learning. We prove
several guarantees on the number of tasks labeled by workers for PAC learning
in this setting and show that our algorithm improves upon the baseline by
reducing the total number of tasks given to workers. We demonstrate the
robustness of our algorithm by exploring its application to additional
realistic crowdsourcing settings.
Comment: 14 pages
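The first step of the pipeline described above, aggregating labels by majority vote from workers who each flip labels with their own noise rate, can be sketched as follows. This is a minimal simulation, not the paper's algorithm; the worker noise rates and sample size are illustrative assumptions:

```python
import random

def crowdsource_labels(true_labels, worker_noise, rng):
    """Each worker independently flips each true binary label with her own noise rate."""
    return [
        [y if rng.random() > p else 1 - y for p in worker_noise]
        for y in true_labels
    ]

def majority_vote(votes):
    """Aggregate one label per example by simple majority over the workers' votes."""
    return [1 if sum(v) * 2 > len(v) else 0 for v in votes]

rng = random.Random(0)
true_labels = [rng.randint(0, 1) for _ in range(200)]
# Classification noise requires each rate strictly below 1/2.
worker_noise = [0.10, 0.20, 0.30, 0.25, 0.15]
votes = crowdsource_labels(true_labels, worker_noise, rng)
agg = majority_vote(votes)
accuracy = sum(a == y for a, y in zip(agg, true_labels)) / len(true_labels)
```

With all noise rates below 1/2, the majority label is correct more often than any single worker, which is why voting is a natural first stage before spending a budget identifying the best workers.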
Predictive PAC Learning and Process Decompositions
We informally call a stochastic process learnable if it admits a
generalization error approaching zero in probability for any concept class with
finite VC-dimension (IID processes are the simplest example). A mixture of
learnable processes need not be learnable itself, and certainly its
generalization error need not decay at the same rate. In this paper, we argue
that it is natural in predictive PAC to condition not on the past observations
but on the mixture component of the sample path. This definition not only
matches what a realistic learner might demand, but also allows us to sidestep
several otherwise grave problems in learning from dependent data. In
particular, we give a novel PAC generalization bound for mixtures of learnable
processes with a generalization error that is not worse than that of each
mixture component. We also provide a characterization of mixtures of absolutely
regular (β-mixing) processes, of independent probability-theoretic interest.
Comment: 9 pages, accepted in NIPS 201
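The point of conditioning on the mixture component can be illustrated with a toy two-component mixture of IID coin flips. A single sample path concentrates on its own component's mean, not on the mixture mean, so guarantees stated per component are the natural ones. This is an illustrative sketch; the component means and path length are assumptions:

```python
import random

def sample_path(rng, length):
    """Draw the mixture component once, then generate an IID 0/1 path from it."""
    p = rng.choice([0.2, 0.8])  # two IID Bernoulli components, mixed with equal weight
    return p, [1 if rng.random() < p else 0 for _ in range(length)]

rng = random.Random(1)
p, path = sample_path(rng, 10_000)
empirical = sum(path) / len(path)
# The path's empirical mean tracks its own component p, never the mixture mean 0.5.
```

A learner seeing one long path effectively faces a single IID component, so demanding generalization relative to that component, rather than averaged over components, matches what it can actually achieve.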
Pac-Learning Recursive Logic Programs: Efficient Algorithms
We present algorithms that learn certain classes of function-free recursive
logic programs in polynomial time from equivalence queries. In particular, we
show that a single k-ary recursive constant-depth determinate clause is
learnable. Two-clause programs consisting of one learnable recursive clause and
one constant-depth determinate non-recursive clause are also learnable, if an
additional ``basecase'' oracle is assumed. These results immediately imply the
pac-learnability of these classes. Although these classes of learnable
recursive programs are very constrained, it is shown in a companion paper that
they are maximally general, in that generalizing either class in any natural
way leads to a computationally difficult learning problem. Thus, taken together
with its companion paper, this paper establishes a boundary of efficient
learnability for recursive logic programs.
Comment: See http://www.jair.org/ for any accompanying file
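To make the object of study concrete, a classic function-free recursive logic program is the ancestor relation, with a non-recursive base clause and one recursive clause. The sketch below evaluates such a program bottom-up to a fixpoint; it is a toy illustration of the program class, not the learning algorithm from the paper:

```python
def transitive_closure(parent):
    """Fixpoint evaluation of:
         ancestor(X, Z) :- parent(X, Z).
         ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
    """
    ancestors = set(parent)  # base clause: every parent fact is an ancestor fact
    changed = True
    while changed:
        changed = False
        for (x, y) in parent:
            for (a, z) in list(ancestors):
                # recursive clause: join parent(x, y) with ancestor(y, z)
                if a == y and (x, z) not in ancestors:
                    ancestors.add((x, z))
                    changed = True
    return ancestors

parent = {("ann", "bob"), ("bob", "cal"), ("cal", "dee")}
ancestors = transitive_closure(parent)
```

The learning results above concern programs of exactly this shape: one recursive clause (here the join with `ancestor`) plus one constant-depth determinate non-recursive clause (here the copy of `parent`).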