Statistical Learning of Arbitrary Computable Classifiers
Statistical learning theory chiefly studies restricted hypothesis classes,
particularly those with finite Vapnik-Chervonenkis (VC) dimension. The
fundamental quantity of interest is the sample complexity: the number of
samples required to learn to a specified level of accuracy. Here we consider
learning over the set of all computable labeling functions. Since the
VC-dimension is infinite and a priori (uniform) bounds on the number of samples
are impossible, we let the learning algorithm decide when it has seen
sufficient samples to have learned. We first show that learning in this setting
is indeed possible, and develop a learning algorithm. We then show, however,
that bounding sample complexity independently of the distribution is
impossible. Notably, this impossibility is entirely due to the requirement that
the learning algorithm be computable, and not due to the statistical nature of
the problem.
Comment: Expanded the section on prior work and added references.
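To make the self-terminating aspect concrete, here is a minimal Python sketch, not the paper's algorithm: it enumerates a toy countable class (threshold classifiers on [0, 1)) as a stand-in for an enumeration of all computable labeling functions, and halts once its current hypothesis survives a run of fresh samples. The `patience` stopping rule, the threshold class, and the example target at 0.37 are all illustrative assumptions.

```python
import random

def hypotheses():
    """Enumerate a countable class of threshold classifiers on [0, 1):
    a toy stand-in for an enumeration of all computable labeling functions."""
    denom = 1
    while True:
        denom *= 2
        for num in range(denom + 1):
            yield lambda x, t=num / denom: int(x >= t)

def learn(draw, patience=50):
    """Self-terminating learner: keep the first enumerated hypothesis that is
    consistent with everything seen so far, and halt once it survives
    `patience` consecutive fresh samples. The learner itself, not an a priori
    uniform bound, decides when it has seen enough."""
    data, h, streak = [], next(hypotheses()), 0
    while streak < patience:
        x, y = draw()
        data.append((x, y))
        if h(x) == y:
            streak += 1
        else:
            # Current hypothesis refuted: rescan the enumeration for the
            # first hypothesis consistent with all observations so far.
            h = next(g for g in hypotheses()
                     if all(g(xi) == yi for xi, yi in data))
            streak = 0
    return h, len(data)

target = lambda x: int(x >= 0.37)                 # unknown labeling function
draw = lambda: (x := random.random(), target(x))  # i.i.d. labeled samples
h, n = learn(draw)
print("samples consumed before the learner stopped:", n)
```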
On sample complexity for computational pattern recognition
In the statistical setting of the pattern recognition problem, the number of
examples required to approximate an unknown labelling function is linear in the
VC dimension of the target learning class. In this work we consider the
question of whether such bounds exist if we restrict our attention to computable
pattern recognition methods, assuming that the unknown labelling function is
also computable. We find that in this case the number of examples required for
a computable method to approximate the labelling function is not only
non-linear, but grows faster (as a function of the VC dimension of the class)
than any computable function. No time or space constraints are placed on the
predictors or target functions; the only resource we consider is the training
examples.
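For contrast, the classical distribution-free bound alluded to above can be stated in the standard PAC form (a reference formula, not taken from this paper):

```latex
% Classical distribution-free sample complexity for PAC learning a class of
% VC dimension d to accuracy \epsilon with confidence 1 - \delta:
m(\varepsilon, \delta) = O\!\left(\frac{d \log(1/\varepsilon) + \log(1/\delta)}{\varepsilon}\right)
% i.e. linear in d. The result above says that once both the learner and the
% target are required to be computable, no computable function of d can bound
% the number of examples needed.
```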
The task of pattern recognition is considered in conjunction with another
learning problem: data compression. An impossibility result for the task of
data compression allows us to estimate the sample complexity for pattern
recognition.
Learning pseudo-Boolean k-DNF and Submodular Functions
We prove that any submodular function f: {0,1}^n -> {0,1,...,k} can be
represented as a pseudo-Boolean 2k-DNF formula. Pseudo-Boolean DNFs are a
natural generalization of DNF representation for functions with integer range.
Each term in such a formula has an associated integral constant. We show that
an analog of Håstad's switching lemma holds for pseudo-Boolean k-DNFs if all
constants associated with the terms of the formula are bounded.
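As a concrete toy check of the representation claim, the sketch below evaluates a pseudo-Boolean DNF (its value is the largest constant among satisfied terms, following the definition above) and brute-force verifies submodularity. The encoding of terms and the example function are my own illustrative choices, not the paper's code.

```python
from itertools import product

def eval_pb_dnf(terms, x):
    """Evaluate a pseudo-Boolean DNF at x in {0,1}^n. Each term is
    (constant, positive_vars, negated_vars); a term fires when all its
    positive variables are 1 and all its negated variables are 0, and the
    formula value is the max constant over firing terms (0 if none fires)."""
    return max((c for c, pos, neg in terms
                if all(x[i] for i in pos) and not any(x[i] for i in neg)),
               default=0)

def is_submodular(f, n):
    """Brute-force check of f(s|t) + f(s&t) <= f(s) + f(t) over {0,1}^n,
    identifying points with indicator vectors of sets (exponential in n)."""
    cube = list(product((0, 1), repeat=n))
    return all(
        f(tuple(a | b for a, b in zip(s, t)))
        + f(tuple(a & b for a, b in zip(s, t))) <= f(s) + f(t)
        for s in cube for t in cube)

# Coverage-style submodular function with range {0, 1, 2} (so k = 2):
f = lambda x: min(sum(x), 2)

# One pseudo-Boolean DNF realizing it: constant-2 terms for every pair of
# variables, constant-1 terms for every singleton. All constants are bounded
# by k, as the bounded-constants condition above requires.
terms = ([(2, (i, j), ()) for i in range(3) for j in range(i + 1, 3)]
         + [(1, (i,), ()) for i in range(3)])
g = lambda x: eval_pb_dnf(terms, x)

cube = list(product((0, 1), repeat=3))
print(is_submodular(f, 3))               # True
print(all(f(x) == g(x) for x in cube))   # True: the DNF represents f
```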
The switching-lemma analog allows us to generalize Mansour's PAC-learning algorithm for k-DNFs to
pseudo-Boolean k-DNFs, and hence gives a PAC-learning algorithm with membership
queries under the uniform distribution for submodular functions of the form
f:{0,1}^n -> {0,1,...,k}. Our algorithm runs in time polynomial in n, k^{O(k
\log k / \epsilon)}, 1/\epsilon and log(1/\delta) and works even in the
agnostic setting. The line of previous work on learning submodular functions
[Balcan, Harvey (STOC '11), Gupta, Hardt, Roth, Ullman (STOC '11), Cheraghchi,
Klivans, Kothari, Lee (SODA '12)] implies only n^{O(k)} query complexity for
learning submodular functions in this setting, for fixed \epsilon and \delta.
Our learning algorithm implies a property tester for submodularity of
functions f:{0,1}^n -> {0, ..., k} with query complexity polynomial in n for
k = O((\log n / \log\log n)^{1/2}) and constant proximity parameter \epsilon.
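A naive randomized check conveys what such a tester looks for, via the standard local characterization of submodularity (diminishing marginal values). This is only an illustrative sketch, not the tester from the abstract, whose query complexity analysis is the actual contribution.

```python
import random

def naive_submodularity_check(f, n, trials=1000):
    """Sample random points and coordinate pairs and test the local condition
    f(x + e_i) + f(x + e_j) >= f(x + e_i + e_j) + f(x)  (with x_i = x_j = 0),
    which holds everywhere iff f is submodular. A returned False is a
    certified violation; True only means no violation was sampled."""
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        i, j = random.sample(range(n), 2)
        x[i] = x[j] = 0
        xi, xj, xij = x[:], x[:], x[:]
        xi[i] = 1
        xj[j] = 1
        xij[i] = xij[j] = 1
        if f(tuple(xi)) + f(tuple(xj)) < f(tuple(xij)) + f(tuple(x)):
            return False
    return True

print(naive_submodularity_check(lambda x: min(sum(x), 2), n=8))  # True
print(naive_submodularity_check(lambda x: sum(x) % 2, n=8))      # likely False
```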
Minimum Description Length Induction, Bayesianism, and Kolmogorov Complexity
The relationship between the Bayesian approach and the minimum description
length approach is established. We sharpen and clarify the general modeling
principles MDL and MML, abstracted as the ideal MDL principle and defined from
Bayes's rule by means of Kolmogorov complexity. The basic condition under which
the ideal principle should be applied is encapsulated as the Fundamental
Inequality, which in broad terms states that the principle is valid when the
data are random relative to every contemplated hypothesis, and these
hypotheses are in turn random relative to the (universal) prior. Basically, the ideal
principle states that the prior probability associated with the hypothesis
should be given by the algorithmic universal probability, and the sum of the
log universal probability of the model plus the log of the probability of the
data given the model should be maximized. If we restrict the model class to the
finite sets then application of the ideal principle turns into Kolmogorov's
minimal sufficient statistic. In general we show that data compression is
almost always the best strategy, both in hypothesis identification and
prediction.
Comment: 35 pages, LaTeX. Submitted to IEEE Trans. Inform. Theory.
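In symbols, the selection rule described above amounts to the following (a sketch using the customary notation, with m the universal distribution and K prefix Kolmogorov complexity):

```latex
% Ideal MDL as Bayes's rule with the universal prior m(H) (equal to 2^{-K(H)}
% up to a multiplicative constant): select
H^{*} = \operatorname*{arg\,max}_{H} \bigl[\log \mathbf{m}(H) + \log P(D \mid H)\bigr]
      = \operatorname*{arg\,min}_{H} \bigl[K(H) - \log P(D \mid H)\bigr]
% i.e. minimize the two-part code length: the complexity of the hypothesis
% plus the code length of the data given the hypothesis.
```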