A PAC-Bayesian Analysis of Graph Clustering and Pairwise Clustering
We formulate weighted graph clustering as a prediction problem: given a
subset of edge weights we analyze the ability of graph clustering to predict
the remaining edge weights. This formulation enables practical and theoretical
comparison of different approaches to graph clustering as well as comparison of
graph clustering with other possible ways to model the graph. We adapt the
PAC-Bayesian analysis of co-clustering (Seldin and Tishby, 2008; Seldin, 2009)
to derive a PAC-Bayesian generalization bound for graph clustering. The bound
shows that graph clustering should optimize a trade-off between empirical data
fit and the mutual information that clusters preserve on the graph nodes. A
similar trade-off derived from information-theoretic considerations was already
shown to produce state-of-the-art results in practice (Slonim et al., 2005;
Yom-Tov and Slonim, 2009). This paper supports the empirical evidence by
providing a better theoretical foundation, suggesting formal generalization
guarantees, and offering a more accurate way to deal with finite sample issues.
We derive a bound minimization algorithm and show that it provides good results
in real-life problems and that the derived PAC-Bayesian bound is reasonably
tight.
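To make the suggested trade-off concrete, here is a minimal Python sketch of an objective of that form: an empirical-fit term for predicting edge weights from a hard clustering plus a penalty proportional to the mutual information the clusters preserve on the nodes. The squared-error fit, the block-average predictor, and the weight `lam` are illustrative assumptions rather than the exact quantities of the paper.

```python
import numpy as np

def cluster_mutual_information(assignment):
    """I(node; cluster) for a hard assignment of n nodes, with the node drawn
    uniformly at random; for hard assignments this is the entropy of the
    cluster-size distribution."""
    _, counts = np.unique(assignment, return_counts=True)
    q = counts / len(assignment)
    return float(-np.sum(q * np.log(q)))

def clustering_objective(weights, assignment, lam):
    """Trade-off of the form suggested by the bound: empirical fit of the
    edge-weight predictions plus lam times the mutual information preserved
    on the nodes.  Predictions are per-cluster-pair block averages."""
    labels = np.asarray(assignment)
    pred = np.zeros_like(weights, dtype=float)
    for a in np.unique(labels):
        for b in np.unique(labels):
            block = np.ix_(labels == a, labels == b)
            pred[block] = weights[block].mean()
    empirical_fit = np.mean((weights - pred) ** 2)   # squared prediction error
    return empirical_fit + lam * cluster_mutual_information(labels)
```

Minimizing such an objective over assignments mimics the trade-off described above; the paper's bound-minimization algorithm is derived directly from the PAC-Bayesian bound rather than from this simplified surrogate.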
Factored Bandits
We introduce the factored bandits model, which is a framework for learning
with limited (bandit) feedback, where actions can be decomposed into a
Cartesian product of atomic actions. Factored bandits incorporate rank-1
bandits as a special case, but significantly relax the assumptions on the form
of the reward function. We provide an anytime algorithm for stochastic factored
bandits, together with upper and lower regret bounds for the problem that match
up to constant factors. Furthermore, we show that with a slight modification the proposed
algorithm can be applied to utility-based dueling bandits. We obtain an
improvement in the additive terms of the regret bound compared to
state-of-the-art algorithms (the additive terms dominate up to time horizons
that are exponential in the number of arms).
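As a minimal illustration of the structure, the Python sketch below builds the action set as a Cartesian product of atomic actions and runs a naive uniform-exploration baseline that tracks per-atomic-action reward averages. The factor sizes and the baseline are assumptions for illustration only; the paper's anytime algorithm uses a more careful elimination scheme that attains the stated regret bounds.

```python
import itertools
import numpy as np

# Each joint action is one atomic action per factor, i.e. an element of the
# Cartesian product of the atomic-action sets (sizes here are illustrative).
atomic_sizes = [3, 4, 2]
all_actions = list(itertools.product(*(range(s) for s in atomic_sizes)))

def uniform_baseline(reward_fn, T, rng=None):
    """Naive reference point for a stochastic factored bandit: sample a joint
    action uniformly at random each round and keep empirical reward means per
    atomic action.  This is not the paper's algorithm."""
    rng = np.random.default_rng() if rng is None else rng
    sums = [np.zeros(s) for s in atomic_sizes]
    counts = [np.zeros(s) for s in atomic_sizes]
    for _ in range(T):
        action = tuple(int(rng.integers(s)) for s in atomic_sizes)
        r = reward_fn(action)              # bandit feedback for the joint action only
        for k, i in enumerate(action):
            sums[k][i] += r
            counts[k][i] += 1
    return [s / np.maximum(c, 1) for s, c in zip(sums, counts)]
```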
Tsallis-INF: An Optimal Algorithm for Stochastic and Adversarial Bandits
We derive an algorithm that achieves the optimal (within constants)
pseudo-regret in both adversarial and stochastic multi-armed bandits without
prior knowledge of the regime and time horizon. The algorithm is based on
online mirror descent (OMD) with Tsallis entropy regularization with power
$\alpha = 1/2$ and reduced-variance loss estimators. More generally, we define an
adversarial regime with a self-bounding constraint, which includes the stochastic
regime, the stochastically constrained adversarial regime (Wei and Luo), and the
stochastic regime with adversarial corruptions (Lykouris et al.) as special
cases, and show that the algorithm achieves a logarithmic regret guarantee in
this regime and all of its special cases simultaneously with the adversarial
regret guarantee. The algorithm also achieves adversarial and stochastic
optimality in the utility-based dueling bandit setting. We provide an empirical
evaluation of the algorithm demonstrating that it significantly outperforms
UCB1 and EXP3 in stochastic environments. We also provide examples of
adversarial environments, where UCB1 and Thompson Sampling exhibit almost
linear regret, whereas our algorithm suffers only logarithmic regret. To the
best of our knowledge, this is the first example demonstrating vulnerability of
Thompson Sampling in adversarial environments. Last, but not least, we present
a general stochastic analysis and a general adversarial analysis of OMD
algorithms with Tsallis entropy regularization for $\alpha \in [0, 1]$ and explain
why $\alpha = 1/2$ works best.
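A minimal Python sketch of the core computation may help: for Tsallis entropy with power $\alpha = 1/2$, the OMD/FTRL sampling distribution has the form $p_i = 4\big(\eta_t(\hat{L}_i - x)\big)^{-2}$ with the normalization $x$ found by Newton's method, and this distribution is wrapped in a simple bandit loop. The learning-rate constant and the plain importance-weighted estimator are simplifications; the paper additionally uses reduced-variance loss estimators.

```python
import numpy as np

def tsallis_inf_weights(cum_loss_est, eta, tol=1e-12, max_iter=50):
    """Sampling distribution for Tsallis entropy with power 1/2:
    p_i = 4 / (eta * (Lhat_i - x))^2, where the normalization x (< min Lhat_i)
    is found by Newton's method so that the weights sum to one."""
    L = np.asarray(cum_loss_est, dtype=float)
    x = L.min() - 2.0 / eta                    # starting point with sum of weights >= 1
    for _ in range(max_iter):
        w = 4.0 / (eta * (L - x)) ** 2
        f = w.sum() - 1.0                      # normalization residual
        if abs(f) < tol:
            break
        x -= f / (eta * (w ** 1.5).sum())      # Newton step: d w_i / d x = eta * w_i^{3/2}
    w = 4.0 / (eta * (L - x)) ** 2
    return w / w.sum()

def tsallis_inf(loss_fn, K, T, rng=None):
    """Anytime loop with plain importance-weighted loss estimators; the paper's
    reduced-variance estimators additionally subtract a baseline."""
    rng = np.random.default_rng() if rng is None else rng
    Lhat = np.zeros(K)                         # cumulative loss estimates
    total = 0.0
    for t in range(1, T + 1):
        eta = 2.0 / np.sqrt(t)                 # 1/sqrt(t) schedule; the constant is illustrative
        p = tsallis_inf_weights(Lhat, eta)
        arm = int(rng.choice(K, p=p))
        loss = loss_fn(t, arm)                 # observed loss, assumed in [0, 1]
        total += loss
        Lhat[arm] += loss / p[arm]             # importance-weighted estimate
    return total
```

For instance, `tsallis_inf(lambda t, a: float(np.random.rand() < 0.5 + 0.1 * (a != 0)), K=5, T=10000)` runs the sketch on a toy stochastic problem.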
Adaptation to Easy Data in Prediction with Limited Advice
We derive an online learning algorithm with improved regret guarantees for
`easy' loss sequences. We consider two types of `easiness': (a) stochastic loss
sequences and (b) adversarial loss sequences with small effective range of the
losses. While a number of algorithms have been proposed for exploiting small
effective range in the full information setting, Gerchinovitz and Lattimore
[2016] have shown the impossibility of regret scaling with the effective range
of the losses in the bandit setting. We show that just one additional
observation per round is sufficient to circumvent the impossibility result. The
proposed Second Order Difference Adjustments (SODA) algorithm requires no prior
knowledge of the effective range of the losses, $\varepsilon$, and achieves an
expected regret guarantee of order $\varepsilon\sqrt{KT \ln K}$ (up to lower-order terms), where $T$ is the time horizon and $K$ is the number
of actions. The scaling with the effective loss range is achieved under
significantly weaker assumptions than those made by Cesa-Bianchi and Shamir
[2018] in an earlier attempt to circumvent the impossibility result. We also
provide a regret lower bound, which almost
matches the upper bound. In addition, we show that in the stochastic setting
SODA achieves an improved pseudo-regret bound that holds simultaneously
with the adversarial regret guarantee. In other words, SODA is safe against an
unrestricted oblivious adversary and provides improved regret guarantees for at
least two different types of `easiness' simultaneously.
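The Python sketch below only encodes the interaction protocol described above: in each of $T$ rounds the learner plays one of $K$ actions, pays and observes its loss, and additionally observes the loss of one more action of its choice. The callbacks are hypothetical placeholders for a learner such as SODA; the actual second-order difference adjustments are not reproduced here.

```python
import numpy as np

def limited_advice_protocol(loss_matrix, choose_action, choose_extra, update):
    """Prediction with limited advice: per round, one played (and paid) action
    plus exactly one additional observed action.  `choose_action`,
    `choose_extra`, and `update` stand in for the learner's logic."""
    T, K = loss_matrix.shape
    total_loss = 0.0
    for t in range(T):
        a = choose_action(t)                         # action that is played and paid for
        b = choose_extra(t, a)                       # extra action observed for free
        total_loss += loss_matrix[t, a]
        update(t, a, loss_matrix[t, a], b, loss_matrix[t, b])
    return total_loss
```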
On PAC-Bayesian Bounds for Random Forests
Existing guarantees in terms of rigorous upper bounds on the generalization
error for the original random forest algorithm, one of the most frequently used
machine learning methods, are unsatisfying. We discuss and evaluate various
PAC-Bayesian approaches to derive such bounds. The bounds do not require
additional hold-out data, because the out-of-bag samples from the bagging in
the training process can be exploited. A random forest predicts by taking a
majority vote of an ensemble of decision trees. The first approach is to bound
the error of the vote by twice the error of the corresponding Gibbs classifier
(classifying with a single member of the ensemble selected at random). However,
this approach does not take into account the effect of averaging out errors
of individual classifiers when taking the majority vote. This effect provides a
significant boost in performance when the errors are independent or negatively
correlated, but when the correlations are strong the advantage from taking the
majority vote is small. The second approach, based on PAC-Bayesian C-bounds,
takes dependencies between ensemble members into account, but it requires
estimating correlations between the errors of the individual classifiers. When
the correlations are high or the estimation is poor, the bounds degrade. In our
experiments, we compute generalization bounds for random forests on various
benchmark data sets. Because the individual decision trees already perform
well, their predictions are highly correlated and the C-bounds do not lead to
satisfactory results. For the same reason, the bounds based on the analysis of
Gibbs classifiers are typically superior and often reasonably tight. Bounds
based on a validation set, which comes at the cost of a smaller training set, gave
better performance guarantees, but worse performance in most experiments.
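The ingredients of the first approach can be sketched in a few lines of Python: a hand-rolled bagging loop that records out-of-bag points per tree, the out-of-bag estimate of the Gibbs error (a single tree drawn at random), the out-of-bag majority-vote error, and the factor-of-two relation between the two risks. The dataset and forest size are illustrative, and turning the empirical Gibbs estimate into a rigorous generalization bound requires the PAC-Bayesian machinery discussed above.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
n, n_trees = len(y), 100

trees, oob_masks = [], []
for _ in range(n_trees):
    idx = rng.integers(0, n, size=n)                 # bootstrap sample
    oob = np.ones(n, dtype=bool)
    oob[idx] = False                                 # out-of-bag points for this tree
    trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    oob_masks.append(oob)

# Gibbs classifier: a single tree drawn at random, evaluated on its OOB points.
gibbs_err = np.mean([np.mean(t.predict(X[m]) != y[m]) for t, m in zip(trees, oob_masks)])

# Majority vote, restricted for each point to the trees that did not train on it.
votes = np.zeros((n, 2))
for t, m in zip(trees, oob_masks):
    votes[np.flatnonzero(m), t.predict(X[m])] += 1
covered = votes.sum(axis=1) > 0
mv_err = np.mean(votes[covered].argmax(axis=1) != y[covered])

print(f"OOB Gibbs error: {gibbs_err:.3f}, OOB majority-vote error: {mv_err:.3f}")
print(f"First-approach relation: majority-vote risk <= 2 x Gibbs risk ~ {2 * gibbs_err:.3f}")
```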
An Optimal Algorithm for Adversarial Bandits with Arbitrary Delays
We propose a new algorithm for adversarial multi-armed bandits with
unrestricted delays. The algorithm is based on a novel hybrid regularizer
applied in the Follow the Regularized Leader (FTRL) framework. It achieves
an $\mathcal{O}\big(\sqrt{KT} + \sqrt{D \log K}\big)$ regret guarantee, where $K$ is the
number of arms, $T$ is the number of rounds, and $D$ is the total delay. The
result matches the lower bound within constants and requires no prior knowledge
of $D$ or $T$. Additionally, we propose a refined tuning of the algorithm,
which achieves an $\mathcal{O}\big(\sqrt{KT} + \min_{S}|S| + \sqrt{D_{\bar{S}} \log K}\big)$
regret guarantee, where $S$ is a set of rounds excluded from delay counting,
$\bar{S} = [T] \setminus S$ are the counted rounds, and $D_{\bar{S}}$ is the total
delay in the counted rounds. If the delays are highly unbalanced, the latter
regret guarantee can be significantly tighter than the former. The result
requires no advance knowledge of the delays and resolves an open problem of
Thune et al. (2019). The new FTRL algorithm and its refined tuning are anytime
and require no doubling, which resolves another open problem of Thune et al.
(2019).
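A schematic Python sketch of the two moving parts may be useful: an FTRL step over the simplex with a hybrid regularizer mixing a Tsallis-entropy component (power 1/2) and a negative-entropy component, solved numerically, plus the bookkeeping for losses that arrive with arbitrary delays. The regularizer scales, learning-rate schedules, and delay model are placeholder assumptions, not the tuning analyzed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_ftrl_step(Lhat, tsallis_scale, entropy_scale):
    """argmin over the probability simplex of <Lhat, w> + regularizer, with a
    hybrid regularizer combining a negative Tsallis-entropy term with power 1/2
    (-sum_i sqrt(w_i)) and a negative-entropy term (sum_i w_i log w_i)."""
    K = len(Lhat)
    def objective(w):
        w = np.clip(w, 1e-12, 1.0)
        reg = -tsallis_scale * np.sum(np.sqrt(w)) + entropy_scale * np.sum(w * np.log(w))
        return float(Lhat @ w + reg)
    res = minimize(objective, np.full(K, 1.0 / K), method="SLSQP",
                   bounds=[(1e-12, 1.0)] * K,
                   constraints=({"type": "eq", "fun": lambda w: w.sum() - 1.0},))
    w = np.clip(res.x, 0.0, None)
    return w / w.sum()

def delayed_bandit(loss_fn, delay_fn, K, T, rng=None):
    """Delayed-feedback bookkeeping: the loss of the arm played at round t only
    becomes available at round t + delay_fn(t); cumulative importance-weighted
    loss estimates are updated upon arrival."""
    rng = np.random.default_rng() if rng is None else rng
    Lhat, pending, total = np.zeros(K), {}, 0.0
    for t in range(1, T + 1):
        for arm, est in pending.pop(t, []):                 # estimates arriving now
            Lhat[arm] += est
        # Placeholder sqrt(t) schedules; the paper uses a different, delay-aware tuning.
        p = hybrid_ftrl_step(Lhat, tsallis_scale=np.sqrt(t), entropy_scale=np.sqrt(t))
        arm = int(rng.choice(K, p=p))
        loss = loss_fn(t, arm)
        total += loss
        pending.setdefault(t + int(delay_fn(t)), []).append((arm, loss / p[arm]))
    return total
```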
PAC-Bayesian Analysis of Martingales and Multiarmed Bandits
We present two alternative ways to apply PAC-Bayesian analysis to sequences
of dependent random variables. The first is based on a new lemma that makes it
possible to bound expectations of convex functions of certain dependent random variables
by expectations of the same functions of independent Bernoulli random
variables. This lemma provides an alternative to the Hoeffding-Azuma inequality
for bounding the concentration of martingale values. Our second approach is
based on integrating the Hoeffding-Azuma inequality with PAC-Bayesian analysis.
We also introduce a way to apply PAC-Bayesian analysis in situations of limited
feedback. We combine the new tools to derive PAC-Bayesian generalization and
regret bounds for the multiarmed bandit problem. Although our regret bound is
not yet as tight as state-of-the-art regret bounds based on other
well-established techniques, our results significantly expand the range of
potential applications of PAC-Bayesian analysis and introduce a new analysis
tool to reinforcement learning and many other fields, where martingales and
limited feedback are encountered.
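As a small numerical illustration of the baseline tool mentioned above, the Python snippet below checks the Hoeffding-Azuma inequality on a simulated martingale with bounded increments: for differences bounded by $c$, $P(S_n \ge \varepsilon) \le \exp(-\varepsilon^2 / (2 n c^2))$. The random-walk construction and the thresholds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, c, trials = 1000, 1.0, 20000

# Martingale difference sequence with |X_t| <= c: fair +/- c steps.
sums = rng.choice([-c, c], size=(trials, n)).sum(axis=1)

for eps in (20.0, 40.0, 60.0):
    empirical = np.mean(sums >= eps)                       # simulated tail probability
    azuma = np.exp(-eps ** 2 / (2 * n * c ** 2))           # Hoeffding-Azuma bound
    print(f"eps={eps:5.1f}  empirical={empirical:.4f}  bound={azuma:.4f}")
```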