Playing with Duality: An Overview of Recent Primal-Dual Approaches for Solving Large-Scale Optimization Problems
Optimization methods are at the core of many problems in signal/image
processing, computer vision, and machine learning. For a long time, it has been
recognized that looking at the dual of an optimization problem may drastically
simplify its solution. Deriving efficient strategies that jointly bring the
primal and the dual problems into play is, however, a more recent idea, one
that has generated many important contributions in recent years. These novel
developments are grounded in recent advances in convex analysis, discrete
optimization, parallel processing, and non-smooth optimization, with emphasis on
sparsity issues. In this paper, we aim to present the principles of
primal-dual approaches while giving an overview of numerical methods that
have been proposed in different contexts. We show the benefits that can be
drawn from primal-dual algorithms for solving both large-scale convex
optimization problems and discrete ones, and we provide various application
examples to illustrate their usefulness.
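As a concrete illustration of the primal-dual idea, the following is a minimal sketch of a Chambolle-Pock-style iteration applied to 1-D total-variation denoising, min_x 0.5*||x - b||^2 + lam*||Dx||_1 with D the forward-difference operator. The problem choice, step sizes, and iteration count are illustrative assumptions, not taken from the paper itself.

```python
import numpy as np

def tv_denoise_primal_dual(b, lam=0.5, n_iter=800):
    """Primal-dual sketch (Chambolle-Pock style) for 1-D TV denoising:
       min_x 0.5*||x - b||^2 + lam*||D x||_1.
       Illustrative parameters; not the paper's own implementation."""
    n = len(b)
    x = b.copy()
    x_bar = x.copy()
    y = np.zeros(n - 1)      # dual variable, lives in the range of D
    tau = sigma = 0.3        # tau * sigma * ||D||^2 <= 1, since ||D||^2 <= 4
    theta = 1.0
    for _ in range(n_iter):
        # dual step: prox of the conjugate of lam*||.||_1 is a clip to [-lam, lam]
        y = np.clip(y + sigma * np.diff(x_bar), -lam, lam)
        # primal step: prox of 0.5*||. - b||^2 has a closed form
        x_old = x
        dty = np.concatenate(([-y[0]], y[:-1] - y[1:], [y[-1]]))  # D^T y
        x = (x - tau * dty + tau * b) / (1.0 + tau)
        # extrapolation step that couples the two sequences
        x_bar = x + theta * (x - x_old)
    return x
```

The dual variable handles the non-smooth TV term through a simple projection, while the primal update stays a cheap closed-form prox: this division of labor is exactly the benefit the abstract attributes to primal-dual schemes.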
Distributed Convex Optimisation using the Alternating Direction Method of Multipliers (ADMM) in Lossy Scenarios
The Alternating Direction Method of Multipliers (ADMM) is an extensively studied algorithm suitable for solving convex distributed optimisation problems. This thesis presents a formulation of the ADMM that is guaranteed to converge when the communications among agents are faulty and the agents perform updates asynchronously. With strongly convex costs, the proposed algorithm is shown to converge exponentially fast. A further extension to partition-based problems is also presented.
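For reference, the standard synchronous global-consensus ADMM that the thesis builds on can be sketched as follows, here for the toy problem min_x sum_i 0.5*(x - a_i)^2, whose solution is the mean of the a_i. The lossy, asynchronous variant analysed in the thesis is not reproduced; this only shows the basic update structure.

```python
import numpy as np

def consensus_admm(a, rho=1.0, n_iter=100):
    """Synchronous global-consensus ADMM sketch for
       min_x sum_i 0.5*(x - a_i)^2  (solution: mean(a)).
       Illustrative only; not the thesis's lossy/asynchronous scheme."""
    a = np.asarray(a, dtype=float)
    x = np.zeros(len(a))   # local copies x_i held by each agent
    u = np.zeros(len(a))   # scaled dual variables
    z = 0.0                # global consensus variable
    for _ in range(n_iter):
        # local x-updates (closed form for these quadratic costs)
        x = (a + rho * (z - u)) / (1.0 + rho)
        # z-update: average of the shifted local estimates
        z = np.mean(x + u)
        # dual ascent on the consensus constraint x_i = z
        u = u + (x - z)
    return z
```

Each agent only needs its own a_i plus the shared z, which is why ADMM lends itself to distributed implementations; the thesis's contribution is making this exchange robust to packet loss and asynchrony.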
Communication-Efficient Algorithms For Distributed Optimization
This thesis is concerned with the design of distributed algorithms for
solving optimization problems. We consider networks where each node has
exclusive access to a cost function, and design algorithms that make all nodes
cooperate to find the minimum of the sum of all the cost functions. Several
problems in signal processing, control, and machine learning can be posed as
such optimization problems. Given that communication is often the most
energy-consuming operation in networks, it is important to design
communication-efficient algorithms. The main contributions of this thesis are a
classification scheme for distributed optimization and a set of corresponding
communication-efficient algorithms.
The class of optimization problems we consider is quite general, since each
function may depend on arbitrary components of the optimization variable, and
not necessarily on all of them. In doing so, we go beyond the common assumption
in distributed optimization and create additional structure that can be used to
reduce the number of communications. This structure is captured by our
classification scheme, which identifies easier instances of the problem, for
example the standard distributed optimization problem, where all functions
depend on all the components of the variable.
In our algorithms, no central node coordinates the network, all the
communications occur between neighboring nodes, and the data associated with
each node is processed locally. We show several applications including average
consensus, support vector machines, network flows, and several distributed
scenarios for compressed sensing. We also propose a new framework for
distributed model predictive control. Through extensive numerical experiments,
we show that our algorithms outperform prior distributed algorithms in terms of
communication efficiency, even ones that were specifically designed for a
particular application.
Comment: Thesis defended on October 10, 2013. Dual PhD degree from Carnegie
Mellon University, PA, and Instituto Superior Técnico, Lisbon, Portugal.
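Average consensus, the first application listed above, can be illustrated with a minimal neighbor-only iteration: nodes on a path graph repeatedly mix with their neighbors using Metropolis weights, so no central node coordinates the network. This is a standard textbook instance under assumed weights, not the thesis's own algorithms.

```python
import numpy as np

def path_consensus(values, n_iter=500):
    """Linear average-consensus sketch on a path graph.
       Each node communicates only with its neighbors; the Metropolis
       weights make W symmetric and doubly stochastic, so the iteration
       x <- W x converges to the network-wide average at every node.
       Illustrative instance, not the thesis's communication-efficient
       algorithms."""
    x = np.asarray(values, dtype=float)
    n = len(x)
    W = np.zeros((n, n))
    for i in range(n - 1):
        # Metropolis weight 1/(1 + max degree) = 1/3 on every path edge
        W[i, i + 1] = W[i + 1, i] = 1.0 / 3.0
    for i in range(n):
        W[i, i] = 1.0 - W[i].sum()   # self-weight so each row sums to one
    for _ in range(n_iter):
        x = W @ x                    # one round of neighbor-only averaging
    return x
```

Each application of W costs one exchange per edge, which is the quantity the thesis's communication-efficiency results are stated in terms of.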