Comparative Design-Choice Analysis of Color Refinement Algorithms Beyond the Worst Case
Color refinement is a crucial subroutine in symmetry detection in theory as well as practice. It has further applications in machine learning and in computational problems from linear algebra.
While tight lower bounds for the worst-case complexity are known [Berkholz, Bonsma, Grohe, ESA 2013], no comparative analysis of design choices for color refinement algorithms is available.
We devise two models within which color refinement algorithms can be compared using formal methods: an online model and an approximation model. We use these to show that no online algorithm is competitive beyond a logarithmic factor and that no algorithm can approximate the optimal color refinement splitting scheme beyond a logarithmic factor.
We also directly compare strategies used in practice, showing that on some graphs queue-based strategies outperform stack-based ones by a logarithmic factor, and vice versa. Similar results hold for strategies based on priority queues.
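To make the compared design choice concrete, here is a minimal color refinement sketch with a pluggable worklist, so the same code can consume splitters in FIFO (queue-based) or LIFO (stack-based) order. It is an illustration only, not the paper's implementation; the function name, the deque-based worklist, and the simple "reschedule every affected color" rule are assumptions of this sketch.

```python
from collections import defaultdict, deque


def color_refinement(adjacency, use_queue=True):
    """Minimal color refinement (1-WL) with a pluggable worklist.

    adjacency: dict mapping each vertex to an iterable of its neighbors.
    use_queue: FIFO worklist if True, LIFO (stack-based) if False --
               the design choice whose relative cost the paper compares.
    """
    color = {v: 0 for v in adjacency}   # start from the trivial coloring
    worklist = deque([0])               # colors still to be used as splitters
    next_color = 1

    while worklist:
        splitter = worklist.popleft() if use_queue else worklist.pop()
        # For every vertex, count neighbors currently colored `splitter`.
        hits = {v: sum(color[u] == splitter for u in adjacency[v]) for v in adjacency}
        # Group vertices by (current color, hit count); a class splits if its
        # members disagree on the hit count.
        groups = defaultdict(list)
        for v in adjacency:
            groups[(color[v], hits[v])].append(v)
        by_old = defaultdict(list)
        for (old, _), members in groups.items():
            by_old[old].append(members)
        for old, parts in by_old.items():
            if len(parts) == 1:
                continue                # this class did not split
            # Recolor all but the first part and reschedule every affected color
            # (simpler, but less tuned, than the "all but the largest part" rule
            # used by optimized implementations).
            worklist.append(old)
            for part in parts[1:]:
                for v in part:
                    color[v] = next_color
                worklist.append(next_color)
                next_color += 1
    return color


# Toy usage: a path on four vertices; endpoints and inner vertices
# end up in different color classes.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(color_refinement(path, use_queue=True))
```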
Variants of RMSProp and Adagrad with Logarithmic Regret Bounds
Adaptive gradient methods have recently become very popular, in particular because they have been shown to be useful in the training of deep neural networks. In this paper we analyze RMSProp, originally proposed for the training of deep neural networks, in the context of online convex optimization and show $\sqrt{T}$-type regret bounds. Moreover, we propose two variants, SC-Adagrad and SC-RMSProp, for which we show logarithmic regret bounds for strongly convex functions. Finally, we demonstrate in experiments that these new variants outperform other adaptive gradient techniques or stochastic gradient descent in the optimization of strongly convex functions as well as in the training of deep neural networks.
Comment: ICML 2017, 16 pages, 23 figures
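As background for the method being analyzed, the sketch below shows a plain RMSProp step in NumPy (not the SC-Adagrad or SC-RMSProp variants proposed here); the learning rate, decay factor, and epsilon are illustrative defaults, not settings from the paper.

```python
import numpy as np


def rmsprop_step(w, grad, v, lr=0.01, beta=0.9, eps=1e-8):
    """One RMSProp update: scale the gradient by a running average of its square.

    w, grad, v are arrays of equal shape; v carries the running second-moment
    estimate between calls. All hyperparameters are illustrative defaults.
    """
    v = beta * v + (1.0 - beta) * grad ** 2      # exponential moving average of grad^2
    w = w - lr * grad / (np.sqrt(v) + eps)       # per-coordinate adaptive step
    return w, v


# Toy usage: minimize the strongly convex quadratic f(w) = ||w - 1||^2.
w, v = np.zeros(3), np.zeros(3)
for _ in range(500):
    grad = 2.0 * (w - 1.0)
    w, v = rmsprop_step(w, grad, v)
print(w)   # close to the minimizer (1, 1, 1)
```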
Competing with Gaussian linear experts
We study the problem of online regression. We prove a theoretical bound on
the square loss of Ridge Regression. We do not make any assumptions about input
vectors or outcomes. We also show that Bayesian Ridge Regression can be thought
of as an online algorithm competing with all the Gaussian linear experts.
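As an illustration of the setting, the sketch below runs ridge regression online: predict with the current ridge estimate, observe the outcome, then fold the new example into the regularized Gram matrix. The function name and the regularization strength are illustrative assumptions; a closely related design choice, used by the Vovk-Azoury-Warmuth forecaster, is to include the current input in the Gram matrix before predicting.

```python
import numpy as np


def online_ridge(stream, dim, lam=1.0):
    """Online ridge regression: predict, observe the outcome, then update.

    stream yields (x, y) pairs with x a length-`dim` array; `lam` is an
    illustrative regularization strength, not a value from the paper.
    """
    A = lam * np.eye(dim)            # regularized Gram matrix: lam*I + sum_t x_t x_t^T
    b = np.zeros(dim)                # accumulated sum_t y_t x_t
    predictions = []
    for x, y in stream:
        w = np.linalg.solve(A, b)            # current ridge estimate
        predictions.append(float(w @ x))     # commit to a prediction before seeing y
        A += np.outer(x, x)                  # fold the revealed pair into the statistics
        b += y * x
    return predictions


# Toy usage: a noisy linear stream of 100 rounds in 3 dimensions.
rng = np.random.default_rng(0)
data = [(x, x @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal())
        for x in rng.standard_normal((100, 3))]
preds = online_ridge(data, dim=3)
```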
Online Isotonic Regression
We consider the online version of the isotonic regression problem. Given a
set of linearly ordered points (e.g., on the real line), the learner must
predict labels sequentially at adversarially chosen positions and is evaluated
by her total squared loss compared against the best isotonic (non-decreasing)
function in hindsight. We survey several standard online learning algorithms
and show that none of them achieve the optimal regret exponent; in fact, most
of them (including Online Gradient Descent, Follow the Leader and Exponential
Weights) incur linear regret. We then prove that the Exponential Weights
algorithm played over a covering net of isotonic functions has a regret bounded
by $O(T^{1/3} \log^{2/3}(T))$ and present a matching $\Omega(T^{1/3})$
lower bound on regret. We provide a computationally efficient version of this
algorithm. We also analyze the noise-free case, in which the revealed labels
are isotonic, and show that the bound can be improved to $O(\log T)$ or even to $O(1)$
(when the labels are revealed in isotonic order). Finally, we extend the
analysis beyond squared loss and give bounds for entropic loss and absolute
loss.
Comment: 25 pages
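To make the covering-net idea concrete, the toy sketch below enumerates every isotonic function whose values lie on a small grid and runs Exponential Weights over them under squared loss. The brute-force enumeration, the mean-of-experts prediction rule, and the choices of eta and K are assumptions of this illustration and are far less efficient than the scheme analyzed in the paper.

```python
import itertools

import numpy as np


def isotonic_net(T, K):
    """All non-decreasing length-T sequences over the grid {0, 1/K, ..., 1}.

    Brute force: only feasible for tiny T and K; it stands in for the
    covering net used in the paper.
    """
    grid = np.linspace(0.0, 1.0, K + 1)
    return [np.array(f) for f in itertools.combinations_with_replacement(grid, T)]


def exponential_weights_isotonic(labels, order, K=4, eta=1.0):
    """Exponential Weights over the net under squared loss; labels[t] is revealed
    only after the learner predicts at position order[t]. eta and K are illustrative."""
    T = len(labels)
    experts = isotonic_net(T, K)                  # each expert is one isotonic function
    log_w = np.zeros(len(experts))                # log-weights, for numerical stability
    total_loss = 0.0
    for t, pos in enumerate(order):
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        values = np.array([f[pos] for f in experts])
        pred = float(w @ values)                  # weighted-average prediction
        total_loss += (pred - labels[t]) ** 2
        log_w -= eta * (values - labels[t]) ** 2  # penalize each expert by its own loss
    return total_loss


# Toy usage: an isotonic ground truth revealed at adversarially permuted positions.
truth = [0.1, 0.3, 0.35, 0.7, 0.9]
order = [2, 0, 4, 1, 3]
labels = [truth[p] for p in order]
print(exponential_weights_isotonic(labels, order))
```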