Online Learning of k-CNF Boolean Functions
This paper revisits the problem of learning a k-CNF Boolean function from
examples in the context of online learning under the logarithmic loss. In doing
so, we give a Bayesian interpretation to one of Valiant's celebrated PAC
learning algorithms, which we then build upon to derive two efficient, online,
probabilistic, supervised learning algorithms for predicting the output of an
unknown k-CNF Boolean function. We analyze the loss of our methods, and show
that the cumulative log-loss can be upper bounded, ignoring logarithmic
factors, by a polynomial function of the size of each example.

Comment: 20 LaTeX pages. 2 Algorithms. Some Theorem
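The abstract builds on one of Valiant's PAC learning algorithms for k-CNF. As background, here is a minimal sketch of the classical elimination algorithm that abstract refers to (the deterministic, non-Bayesian version; all function names are our own, and this is not the paper's online log-loss method):

```python
from itertools import combinations, product

def all_clauses(n, k):
    """All clauses (disjunctions) of at most k literals over n Boolean
    variables. A literal is (index, polarity); a clause is a frozenset."""
    literals = [(i, b) for i in range(n) for b in (True, False)]
    return {frozenset(c)
            for size in range(1, k + 1)
            for c in combinations(literals, size)}

def satisfies(clause, x):
    """A clause is satisfied if any of its literals matches the assignment."""
    return any(x[i] == b for i, b in clause)

def learn_kcnf(examples, n, k):
    """Valiant-style elimination: start from every possible clause and
    delete those falsified by a positive example (negatives are ignored)."""
    h = all_clauses(n, k)
    for x, y in examples:
        if y:
            h = {c for c in h if satisfies(c, x)}
    return h

def predict(h, x):
    """The hypothesis is the conjunction of all surviving clauses."""
    return all(satisfies(c, x) for c in h)
```

Because the learned hypothesis retains every clause consistent with the positives, it is always at least as restrictive as the target k-CNF, so it never errs on a negative example.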
PAC-Bayesian Theory Meets Bayesian Inference
We exhibit a strong link between frequentist PAC-Bayesian risk bounds and the
Bayesian marginal likelihood. That is, for the negative log-likelihood loss
function, we show that the minimization of PAC-Bayesian generalization risk
bounds maximizes the Bayesian marginal likelihood. This provides an alternative
explanation of the Bayesian Occam's razor criterion, under the assumption that
the data are generated by an i.i.d. distribution. Moreover, as the negative
log-likelihood is an unbounded loss function, we motivate and propose a
PAC-Bayesian theorem tailored for the sub-gamma loss family, and we show that
our approach is sound on classical Bayesian linear regression tasks.

Comment: Published at NIPS 2015
(http://papers.nips.cc/paper/6569-pac-bayesian-theory-meets-bayesian-inference - …
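The stated link between PAC-Bayesian bound minimization and the marginal likelihood can be sketched as follows (our notation, a hedged reconstruction rather than the paper's exact statement). With negative log-likelihood loss and prior \(\pi\), the posterior-dependent part of a PAC-Bayesian bound takes the form of a free energy, and the Gibbs variational formula gives

```latex
\[
  \min_{\rho}\;\Big\{ \mathbb{E}_{\theta\sim\rho}\Big[\textstyle\sum_{i=1}^{n} -\log p(y_i\mid\theta)\Big]
  + \mathrm{KL}(\rho\,\|\,\pi) \Big\}
  \;=\; -\log \int \pi(\theta)\,\prod_{i=1}^{n} p(y_i\mid\theta)\,d\theta ,
\]
```

where the right-hand side is the negative log marginal likelihood, and the minimizer \(\rho^{*}(\theta)\propto\pi(\theta)\prod_i p(y_i\mid\theta)\) is exactly the Bayesian posterior. Hence minimizing the bound over posteriors maximizes the marginal likelihood.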