Accelerated Backpressure Algorithm
We develop an Accelerated Backpressure (ABP) algorithm using Accelerated
Dual Descent (ADD), a distributed approximate Newton-like algorithm that only
uses local information. Our construction is based on writing the backpressure
algorithm as the solution to a network feasibility problem solved via
stochastic dual subgradient descent. We apply stochastic ADD in place of the
stochastic gradient descent algorithm. We prove that the ABP algorithm
guarantees stable queues. Our numerical experiments demonstrate a significant
improvement in convergence rate, especially when the packet arrival statistics
vary over time.

Comment: 9 pages, 4 figures. A version of this work with significantly
extended proofs is being submitted for journal publication.
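To make the dual-descent reading concrete, here is a minimal sketch, assuming a toy single-commodity network, of plain backpressure run as stochastic dual subgradient descent, where queue backlogs play the role of dual variables. It is not the authors' ABP/ADD implementation, and the topology, arrival rate, and capacities are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (assumptions: toy topology, Bernoulli arrivals, unit
# capacities) of single-commodity backpressure. Queue backlogs act as
# scaled dual variables of the network feasibility problem, and each
# slot performs one stochastic dual subgradient step. This is plain
# backpressure, not the Newton-like ABP/ADD update from the paper.

rng = np.random.default_rng(0)
links = [("s", "a"), ("s", "b"), ("a", "d"), ("b", "d")]  # "d" = destination
capacity = {link: 1 for link in links}                    # packets per slot
arrival_rate = 0.7                                        # arrivals at "s"

q = {"s": 0, "a": 0, "b": 0, "d": 0}                      # backlogs (duals)
for t in range(10_000):
    # Backpressure decision: serve every link with a positive backlog
    # differential q_i - q_j; the destination keeps q = 0, so it drains.
    for (i, j) in links:
        if q[i] > q[j]:
            moved = min(q[i], capacity[(i, j)])
            q[i] -= moved
            if j != "d":
                q[j] += moved
    # Random arrivals are the stochastic part of the subgradient.
    q["s"] += rng.binomial(1, arrival_rate)

print("final backlogs:", q)
```

Replacing the plain subgradient step with the stochastic ADD direction is what yields the accelerated variant studied in the paper.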
Model Consistency for Learning with Mirror-Stratifiable Regularizers
Low-complexity non-smooth convex regularizers are routinely used to impose
some structure (such as sparsity or low-rank) on the coefficients for linear
predictors in supervised learning. Model consistency then consists in
selecting the correct structure (for instance, the support or rank) by
regularized empirical risk minimization.
It is known that model consistency holds under appropriate non-degeneracy
conditions. However, such conditions typically fail for highly correlated
designs, and regularization methods are then observed to select larger
models.
In this work, we provide the theoretical underpinning of this behavior using
the notion of mirror-stratifiable regularizers. This class of regularizers
encompasses the most well-known in the literature, including the $\ell^1$ or
trace norms. It brings into play a pair of primal-dual models, which in turn
allows one to locate the structure of the solution using a specific dual
certificate.
We also show how this analysis applies to optimal solutions of the
learning problem, as well as to the iterates computed by a certain class of
stochastic proximal-gradient algorithms.

Comment: 14 pages, 4 figures.
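As a concrete instance of model selection by regularized empirical risk minimization, here is a minimal sketch, assuming a synthetic Lasso problem (the $\ell^1$ regularizer is mirror-stratifiable), of a deterministic proximal-gradient (ISTA) loop that tracks the support identified by the iterates and the associated dual certificate. The problem sizes, design matrix, and regularization weight are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Minimal sketch (assumptions: synthetic data, illustrative lam) of
# proximal gradient descent on the Lasso, whose l1 regularizer is
# mirror-stratifiable. The support of the iterate is the selected
# "model"; the dual certificate eta locates it (|eta_j| = 1 there).

rng = np.random.default_rng(1)
n, p = 50, 20
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[:3] = [2.0, -1.5, 1.0]                 # sparse ground truth
y = X @ w_true + 0.1 * rng.standard_normal(n)

lam = 0.1                                     # regularization weight
L = np.linalg.norm(X, 2) ** 2 / n             # Lipschitz constant of grad f
step = 1.0 / L

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

w = np.zeros(p)
for it in range(500):
    grad = X.T @ (X @ w - y) / n              # gradient of (1/2n)||Xw - y||^2
    w = soft_threshold(w - step * grad, step * lam)

eta = X.T @ (y - X @ w) / (n * lam)           # dual certificate at the iterate
print("identified support:", np.nonzero(w)[0])         # compare with {0, 1, 2}
print("certificate saturation:", np.abs(eta).round(2))
```

The paper's stochastic proximal-gradient setting replaces the full gradient above with an unbiased estimate, so the support-identification question concerns noisy iterates rather than this deterministic loop.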