Gains and Losses are Fundamentally Different in Regret Minimization: The Sparse Case
We demonstrate that, in the classical non-stochastic regret minimization
problem with $d$ decisions, gains and losses to be respectively maximized or
minimized are fundamentally different. Indeed, by considering the additional
sparsity assumption (at each stage, at most $s$ decisions incur a nonzero
outcome), we derive optimal regret bounds of different orders. Specifically,
with gains, we obtain an optimal regret guarantee after $T$ stages of order
$\sqrt{T \log s}$, so the classical dependency in the dimension is replaced by
the sparsity size. With losses, we provide matching upper and lower bounds of
order $\sqrt{T s \log(d)/d}$, which is decreasing in $d$. Eventually, we also
study the bandit setting, and obtain an upper bound of order
$\sqrt{T s \log(d/s)}$ when outcomes are losses. This bound is proven to be
optimal up to the logarithmic factor $\sqrt{\log(d/s)}$.
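To make the full-information setting concrete, below is a minimal sketch of the classical exponential-weights (Hedge) algorithm, which attains the standard $\sqrt{T \log d}$ regret against losses. It is not the sparsity-adapted method of the paper; the function names, learning rate, and synthetic $s$-sparse losses are illustrative assumptions.

```python
import numpy as np

def hedge(loss_matrix, eta):
    """Classical exponential weights (Hedge) over d decisions.

    loss_matrix: (T, d) array of losses in [0, 1].
    eta: learning rate, typically sqrt(2 * log(d) / T) for the
    standard sqrt(T log d) regret guarantee.
    Returns the sequence of probability vectors played.
    """
    T, d = loss_matrix.shape
    weights = np.ones(d)
    plays = []
    for t in range(T):
        p = weights / weights.sum()               # current mixed strategy
        plays.append(p)
        weights *= np.exp(-eta * loss_matrix[t])  # multiplicative update
    return np.array(plays)

# Usage sketch: s-sparse losses over d decisions (hypothetical data).
rng = np.random.default_rng(0)
T, d, s = 1000, 10, 2
losses = np.zeros((T, d))
for t in range(T):
    losses[t, rng.choice(d, size=s, replace=False)] = rng.random(s)
plays = hedge(losses, eta=np.sqrt(2 * np.log(d) / T))
regret = (plays * losses).sum() - losses.sum(axis=0).min()
print(f"cumulative regret: {regret:.2f}")
```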
Information-theoretic lower bounds on the oracle complexity of stochastic convex optimization
Relative to the large literature on upper bounds on complexity of convex
optimization, lesser attention has been paid to the fundamental hardness of
these problems. Given the extensive use of convex optimization in machine
learning and statistics, gaining an understanding of these complexity-theoretic
issues is important. In this paper, we study the complexity of stochastic
convex optimization in an oracle model of computation. We improve upon known
results and obtain tight minimax complexity estimates for various function
classes.
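As a rough illustration of the oracle model of computation studied here, the sketch below wraps a convex function in a stochastic first-order oracle that counts queries; the class name, the additive Gaussian noise model, and the SGD usage are assumptions for illustration, not the paper's construction.

```python
import numpy as np

class StochasticFirstOrderOracle:
    """Sketch of a stochastic first-order oracle: each query at a point x
    returns a noisy function value and a noisy gradient. Oracle complexity
    is measured by the number of queries issued."""

    def __init__(self, f, grad_f, sigma=0.1, seed=0):
        self.f, self.grad_f, self.sigma = f, grad_f, sigma
        self.rng = np.random.default_rng(seed)
        self.queries = 0  # query counter = oracle complexity

    def query(self, x):
        self.queries += 1
        g_noise = self.rng.normal(0.0, self.sigma, size=np.shape(x))
        return (self.f(x) + self.rng.normal(0.0, self.sigma),
                self.grad_f(x) + g_noise)

# Usage sketch: SGD on f(x) = ||x||^2 / 2, counting oracle calls.
oracle = StochasticFirstOrderOracle(lambda x: 0.5 * x @ x, lambda x: x)
x = np.ones(3)
for t in range(1, 101):
    _, g = oracle.query(x)
    x -= g / t  # 1/t step size
print(oracle.queries, x)
```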
Efficient Numerical Methods to Solve Sparse Linear Equations with Application to PageRank
In this paper, we propose three methods to solve the PageRank problem for
transition matrices with both row and column sparsity. Our methods reduce the
PageRank problem to a convex optimization problem over the simplex. The first
algorithm is based on gradient descent in the L1 norm instead of the Euclidean
one. The second algorithm extends the Frank-Wolfe method to support sparse
gradient updates. The third algorithm is a mirror descent algorithm with a
randomized projection. We prove convergence rates for these methods on sparse
problems, and numerical experiments support their effectiveness.
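To make the reduction to the simplex concrete, here is a minimal sketch of a Frank-Wolfe iteration for PageRank cast as minimizing $\|P^\top x - x\|^2$ over the simplex; this quadratic objective, the step-size schedule, and all names are assumptions for illustration rather than the paper's exact algorithm.

```python
import numpy as np

def frank_wolfe_pagerank(P, iters=500):
    """Frank-Wolfe for min_{x in simplex} ||P.T @ x - x||^2.

    P: (d, d) row-stochastic transition matrix. Linear minimization over
    the simplex picks a single vertex, so starting from a vertex, the
    iterate after t steps has at most t + 1 nonzero entries, which keeps
    updates sparse."""
    d = P.shape[0]
    A = P.T - np.eye(d)
    x = np.zeros(d)
    x[0] = 1.0                        # start at a simplex vertex
    for t in range(iters):
        grad = 2.0 * A.T @ (A @ x)    # gradient of ||A x||^2
        i = np.argmin(grad)           # linear minimizer over the simplex
        gamma = 2.0 / (t + 2.0)       # standard Frank-Wolfe step size
        x = (1.0 - gamma) * x
        x[i] += gamma
    return x

# Usage sketch on a tiny random chain.
rng = np.random.default_rng(1)
P = rng.random((5, 5))
P /= P.sum(axis=1, keepdims=True)     # make rows stochastic
x = frank_wolfe_pagerank(P)
print(x, np.linalg.norm(P.T @ x - x))
```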