Fast projections onto mixed-norm balls with applications
Joint sparsity offers powerful structural cues for feature selection,
especially for variables that are expected to exhibit "grouped" behavior.
Such behavior is commonly modeled via the group lasso, multitask lasso, and
related methods, where feature selection is effected via mixed norms. Several mixed-norm
based sparse models have received substantial attention, and for some cases
efficient algorithms are also available. Surprisingly, several constrained
sparse models seem to lack scalable algorithms. We address this
deficiency by presenting batch and online (stochastic-gradient) optimization
methods, both of which rely on efficient projections onto mixed-norm balls. We
illustrate our methods by applying them to the multitask lasso. We conclude by
mentioning some open problems.
Comment: Preprint of paper under review
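As a concrete illustration of the kind of projection such methods rely on, here is a minimal NumPy sketch (not taken from the paper, which also treats other ℓ1,q balls) of the Euclidean projection onto the ℓ1,2 mixed-norm ball used by the group and multitask lasso. The function names and the one-group-per-row layout are illustrative assumptions.

```python
import numpy as np

def project_l1_ball(v, radius):
    # Euclidean projection of a nonnegative vector v onto
    # {x >= 0 : sum(x) <= radius} via sort-based thresholding.
    if v.sum() <= radius:
        return v
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def project_l12_ball(X, radius):
    # Project X (one group per row) onto {X : sum_g ||x_g||_2 <= radius}.
    # The problem decouples: project the vector of group norms onto the
    # l1 ball, then rescale each group to its new norm.
    norms = np.linalg.norm(X, axis=1)
    if norms.sum() <= radius:
        return X
    new_norms = project_l1_ball(norms, radius)
    scale = np.where(norms > 0, new_norms / np.maximum(norms, 1e-12), 0.0)
    return X * scale[:, None]
```

Plugged into projected (stochastic) gradient descent, e.g. W = project_l12_ball(W - step * grad, radius), this yields the kind of batch and online constrained solvers the abstract describes.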
Perron vector optimization applied to search engines
In recent years, Google's PageRank optimization problems have been
extensively studied. In that case, the ranking is given by the invariant
measure of a stochastic matrix. In this paper, we consider the more general
situation in which the ranking is determined by the Perron eigenvector of a
nonnegative, but not necessarily stochastic, matrix, in order to cover
Kleinberg's HITS algorithm. We also give some results for Tomlin's HOTS
algorithm. The problem then consists in finding an optimal outlink strategy,
subject to design constraints, for a given search engine.
We study relaxed versions of these problems, in which weighted hyperlinks
are allowed. We provide an efficient algorithm for computing the matrix of
partial derivatives of the criterion that exploits its low-rank property. We
give a scalable algorithm that couples gradient and power iterations and
converges to a local minimum of the Perron vector optimization problem. We
prove convergence by viewing the method as an approximate gradient scheme.
We then show that optimal linkage strategies for the HITS and HOTS
optimization problems satisfy a threshold property. We report numerical
results on fragments
of the real web graph for these search engine optimization problems.
Comment: 28 pages, 5 figures
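For context, a sketch (not the paper's full algorithm) of the inner workhorse: a power iteration that computes the Perron eigenvector of a nonnegative, not necessarily stochastic, matrix. The paper couples such iterations with gradient steps on the outlink weights, which this snippet omits; the function name and normalization are illustrative choices.

```python
import numpy as np

def perron_vector(A, tol=1e-10, max_iter=1000):
    # Power iteration for the Perron pair of a nonnegative,
    # irreducible matrix A (not necessarily stochastic).
    n = A.shape[0]
    u = np.full(n, 1.0 / n)
    lam = 1.0
    for _ in range(max_iter):
        v = A @ u
        lam = v.sum()          # eigenvalue estimate under sum-to-one scaling
        v = v / lam
        if np.abs(v - u).sum() < tol:
            return lam, v
        u = v
    return lam, u
```

At a fixed point, A u = lam * u with the entries of u summing to one, so lam converges to the Perron eigenvalue, i.e. the quantity whose eigenvector defines the ranking.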
Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
The goal of decentralized optimization over a network is to optimize a global
objective formed by a sum of local (possibly nonsmooth) convex functions using
only local computation and communication. It arises in various application
domains, including distributed tracking and localization, multi-agent
coordination, estimation in sensor networks, and large-scale optimization in
machine learning. We develop and analyze distributed algorithms based on dual
averaging of subgradients, and we provide sharp bounds on their convergence
rates as a function of the network size and topology. Our method of analysis
allows for a clear separation between the convergence of the optimization
algorithm itself and the effects of communication constraints arising from the
network structure. In particular, we show that the number of iterations
required by our algorithm scales inversely in the spectral gap of the network.
The sharpness of this prediction is confirmed both by theoretical lower bounds
and simulations for various networks. Our approach covers deterministic
optimization and communication as well as problems with stochastic
optimization and/or communication.
Comment: 40 pages, 4 figures
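As an illustrative sketch of the scheme in its unconstrained Euclidean special case (assuming a doubly stochastic consensus matrix P matched to the network and the prox function ψ(x) = ||x||²/2; names and signature are hypothetical): each node averages its neighbors' dual variables, adds a local subgradient, and maps back to the primal.

```python
import numpy as np

def distributed_dual_averaging(subgrads, P, d, steps, alpha):
    # subgrads: list of callables, subgrads[i](x) -> subgradient of f_i at x
    # P: (n, n) doubly stochastic consensus matrix matching the network
    # alpha: callable t -> step size, e.g. lambda t: 1.0 / np.sqrt(t)
    n = len(subgrads)
    Z = np.zeros((n, d))                  # dual variables, one row per node
    X = np.zeros((n, d))                  # primal iterates
    X_avg = np.zeros((n, d))              # running averages (the output)
    for t in range(1, steps + 1):
        G = np.stack([subgrads[i](X[i]) for i in range(n)])
        Z = P @ Z + G                     # consensus step + local subgradient
        X = -alpha(t) * Z                 # Euclidean prox map, unconstrained
        X_avg += (X - X_avg) / t          # running average of the iterates
    return X_avg
```

For a constrained feasible set one would replace the primal map with the paper's prox-projection; the number of iterations needed then scales inversely in the spectral gap of P, as the abstract states.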