Stochastic Majorization-Minimization Algorithms for Large-Scale Optimization
Majorization-minimization algorithms consist of iteratively minimizing a
majorizing surrogate of an objective function. Because of its simplicity and
its wide applicability, this principle has been very popular in statistics and
in signal processing. In this paper, we intend to make this principle scalable.
We introduce a stochastic majorization-minimization scheme which is able to
deal with large-scale or possibly infinite data sets. When applied to convex
optimization problems under suitable assumptions, we show that it achieves an
expected convergence rate of O(1/√n) after n iterations, and of O(1/n)
for strongly convex functions. Equally important, our scheme almost
surely converges to stationary points for a large class of non-convex problems.
We develop several efficient algorithms based on our framework. First, we
propose a new stochastic proximal gradient method, which experimentally matches
state-of-the-art solvers for large-scale ℓ1-logistic regression. Second,
we develop an online DC programming algorithm for non-convex sparse estimation.
Finally, we demonstrate the effectiveness of our approach for solving
large-scale structured matrix factorization problems.
Comment: accepted for publication at Neural Information Processing Systems (NIPS) 2013. This is the 9-page version followed by 16 pages of appendices. The title has changed compared to the first technical report.
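The majorization-minimization principle the abstract describes can be illustrated with a minimal (deterministic, centralized) sketch; the toy objective, the Lipschitz constant L, and the starting point below are assumptions for the example, not the paper's stochastic scheme.

```python
import math

# Toy objective (an assumption for this sketch):
# f(x) = log(1 + exp(-x)) + 0.5 x^2, smooth, with L-Lipschitz gradient.
def f(x):
    return math.log1p(math.exp(-x)) + 0.5 * x * x

def f_grad(x):
    return -1.0 / (1.0 + math.exp(x)) + x

# f''(x) = e^x / (1 + e^x)^2 + 1 <= 1/4 + 1, so L = 1.25 makes
# g(x | xk) = f(xk) + f'(xk)(x - xk) + (L/2)(x - xk)^2 a valid
# majorizing surrogate: g(x | xk) >= f(x) with equality at x = xk.
L = 1.25

x = 3.0
for _ in range(200):
    # One MM step: jump to the exact minimizer of the quadratic surrogate.
    x = x - f_grad(x) / L

# Each step decreases the objective monotonically, because
# f(x_new) <= g(x_new | x_old) <= g(x_old | x_old) = f(x_old).
```

The stochastic scheme in the paper replaces this exact surrogate with one built incrementally from randomly drawn data points, which is what makes the principle scale to large or infinite data sets.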
Robust distributed linear programming
This paper presents a robust, distributed algorithm to solve general linear
programs. The algorithm design builds on the characterization of the solutions
of the linear program as saddle points of a modified Lagrangian function. We
show that the resulting continuous-time saddle-point algorithm is provably
correct but, in general, not distributed because of a global parameter
associated with the nonsmooth exact penalty function employed to encode the
inequality constraints of the linear program. This motivates the design of a
discontinuous saddle-point dynamics that, while enjoying the same convergence
guarantees, is fully distributed and scalable with the dimension of the
solution vector. We also characterize the robustness against disturbances and
link failures of the proposed dynamics. Specifically, we show that it is
integral-input-to-state stable but not input-to-state stable. The latter fact
is a consequence of a more general result, that we also establish, which states
that no algorithmic solution for linear programming is input-to-state stable
when uncertainty in the problem data affects the dynamics as a disturbance. Our
results allow us to establish the resilience of the proposed distributed
dynamics to disturbances of finite variation and recurrently disconnected
communication among the agents. Simulations in an optimal control application
illustrate the results.
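The nonsmooth exact penalty that the abstract uses to encode the inequality constraints can be sketched on a toy problem; the LP, the penalty weight K, and the step sizes below are assumptions for illustration, and this centralized subgradient iteration is not the paper's distributed continuous-time saddle-point dynamics.

```python
import math

# Toy LP (an assumption for this sketch): minimize x subject to x >= 1.
# The optimum is x* = 1. The inequality constraint is folded into the
# objective with a nonsmooth exact penalty,
#     F(x) = x + K * max(0, 1 - x),
# whose minimizer coincides with the LP solution for any K larger than
# the optimal multiplier (here K = 5 > 1).
K = 5.0

def subgradient(x):
    # A subgradient of F at x: 1 from the linear cost, minus K on the
    # side where the constraint x >= 1 is violated.
    return 1.0 - K if x < 1.0 else 1.0

x = 5.0
for k in range(1, 5001):
    step = 0.5 / math.sqrt(k)   # diminishing step sizes
    x -= step * subgradient(x)

# x settles near the constrained optimum x* = 1.
```

The global parameter mentioned in the abstract corresponds to this penalty weight K: it must dominate the optimal multipliers, which is what initially prevents the penalized saddle-point dynamics from being fully distributed.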