Cooperative Convex Optimization in Networked Systems: Augmented Lagrangian Algorithms with Directed Gossip Communication
We study distributed optimization in networked systems, where nodes cooperate
to find the optimal quantity of common interest, x = x^\star. The objective
function of the corresponding optimization problem is the sum of private (known
only to a node), convex, nodes' objectives, and each node imposes a private
convex constraint on the allowed values of x. We solve this problem for generic
connected network topologies with asymmetric random link failures using a novel
distributed, decentralized algorithm. We refer to this algorithm as AL-G
(augmented Lagrangian gossiping), and to its variants as AL-MG (augmented
Lagrangian multi-neighbor gossiping) and AL-BG (augmented Lagrangian broadcast
gossiping). The AL-G algorithm is based on the augmented Lagrangian dual
function. Dual variables are updated by the standard method of multipliers, at
a slow time scale. To update the primal variables, we propose a novel
Gauss-Seidel-type randomized algorithm, running at a fast time scale. AL-G uses
unidirectional gossip communication, only between immediate neighbors in the
network, and is resilient to random link failures. For networks with reliable
communication (i.e., no failures), the simplified AL-BG algorithm reduces
communication, computation, and data storage costs. We prove convergence for
all proposed algorithms and demonstrate their effectiveness by simulation on
two applications: l_1-regularized logistic regression for classification and
cooperative spectrum sensing for cognitive radio networks.
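The two time scales described above can be illustrated with a minimal sketch: a slow method-of-multipliers loop updating one dual variable per edge constraint, and a fast randomized Gauss-Seidel loop over the primal variables. This is not the paper's AL-G algorithm; the quadratic node objectives, 3-node path graph, and penalty rho below are illustrative assumptions.

```python
import random

# Sketch (not the paper's AL-G): minimize sum_i (x_i - a_i)^2 / 2
# subject to x_i = x_j on every edge; the optimum is the average of a.
random.seed(0)
a = [1.0, 4.0, 7.0]                # each node's private data (assumed)
edges = [(0, 1), (1, 2)]           # path graph (assumed topology)
rho = 1.0                          # augmented-Lagrangian penalty (assumed)
x = [0.0] * len(a)                 # primal variable held by each node
mu = {e: 0.0 for e in edges}       # dual variable per constraint x_i - x_j = 0

neighbors = {i: [] for i in range(len(a))}
for (i, j) in edges:
    neighbors[i].append((j, (i, j), +1.0))   # sign of x_i in (x_i - x_j)
    neighbors[j].append((i, (i, j), -1.0))

for outer in range(200):           # slow time scale: method of multipliers
    for _ in range(50):            # fast time scale: randomized Gauss-Seidel
        i = random.randrange(len(a))
        # exact minimization of the augmented Lagrangian over x_i alone:
        # (x_i - a_i) + sum_e [ s*mu_e + rho*(x_i - x_j) ] = 0
        num, den = a[i], 1.0
        for (j, e, s) in neighbors[i]:
            num += rho * x[j] - s * mu[e]
            den += rho
        x[i] = num / den
    for (i, j) in edges:           # dual ascent on each edge multiplier
        mu[(i, j)] += rho * (x[i] - x[j])

print([round(v, 3) for v in x])    # all entries close to the average of a
```

With exact inner minimization the dual update is plain gradient ascent on the augmented dual function, which is why the slow loop can use the fixed step rho.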
Distributed soft thresholding for sparse signal recovery
In this paper, we address the problem of distributed sparse recovery of
signals acquired via compressed measurements in a sensor network. We propose a
new class of distributed algorithms to solve Lasso regression problems, when
the communication to a fusion center is not possible, e.g., due to
communication cost or privacy reasons. More precisely, we introduce a
distributed iterative soft thresholding algorithm (DISTA) that consists of
three steps: an averaging step, a gradient step, and a soft thresholding
operation. We prove the convergence of DISTA in networks represented by regular
graphs, and we compare it with existing methods in terms of performance,
memory, and complexity.
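The three steps of a DISTA-style iteration can be sketched as follows. This is an illustrative consensus-ISTA sketch, not the paper's exact update or experiments: the two-node complete (hence regular) graph, step size tau, and l_1 weight lam are assumed values.

```python
# Sketch of an averaging + gradient + soft-thresholding iteration
# (DISTA-style; parameters and setup are illustrative assumptions).

def soft(v, t):
    """Entrywise soft-thresholding operator S_t(v)."""
    return [max(abs(c) - t, 0.0) * (1.0 if c > 0 else -1.0) for c in v]

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

x_true = [3.0, 0.0]                      # sparse signal to recover
A = [[1.0, 0.2], [0.3, 1.0]]             # one measurement row per node
y = [dot(A[0], x_true), dot(A[1], x_true)]

tau, lam = 0.5, 0.01                     # step size, l1 weight (assumed)
x = [[0.0, 0.0], [0.0, 0.0]]             # per-node estimates

for _ in range(500):
    # 1) averaging step over the regular 2-node graph
    z = [(x[0][k] + x[1][k]) / 2.0 for k in range(2)]
    new_x = []
    for i in range(2):
        # 2) gradient step on the node's local least-squares term
        r = y[i] - dot(A[i], z)
        g = [z[k] + tau * A[i][k] * r for k in range(2)]
        # 3) soft-thresholding step
        new_x.append(soft(g, tau * lam))
    x = new_x

print(x)  # both nodes' estimates approach the sparse signal
```

Each node only ever exchanges its current estimate with neighbors, so no fusion center is needed; the l_1 shrinkage introduces a small bias of order lam in the recovered coefficients.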
Dynamical Optimal Transport on Discrete Surfaces
We propose a technique for interpolating between probability distributions on
discrete surfaces, based on the theory of optimal transport. Unlike previous
attempts that use linear programming, our method is based on a dynamical
formulation of quadratic optimal transport proposed for flat domains by Benamou
and Brenier [2000], adapted to discrete surfaces. Our structure-preserving
construction yields a Riemannian metric on the (finite-dimensional) space of
probability distributions on a discrete surface, which translates the so-called
Otto calculus to discrete language. From a practical perspective, our technique
provides a smooth interpolation between distributions on discrete surfaces with
less diffusion than state-of-the-art algorithms involving entropic
regularization. Beyond interpolation, we show how our discrete notion of
optimal transport extends to other tasks, such as distribution-valued Dirichlet
problems and time integration of gradient flows.
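For reference, the dynamical formulation of quadratic optimal transport due to Benamou and Brenier [2000], which the construction above adapts from flat domains to discrete surfaces, can be stated as follows (on a domain Omega, between densities mu_0 and mu_1):

```latex
W_2^2(\mu_0, \mu_1) \;=\; \min_{(\rho, v)} \int_0^1 \!\! \int_\Omega \rho(t,x)\, |v(t,x)|^2 \, dx\, dt
\quad \text{subject to} \quad
\partial_t \rho + \nabla \cdot (\rho v) = 0, \qquad
\rho(0,\cdot) = \mu_0, \quad \rho(1,\cdot) = \mu_1.
```

Minimizing kinetic energy subject to the continuity equation is what makes the square root of the optimal value a geodesic distance, and hence what yields a Riemannian metric once the objective and constraint are discretized on a surface mesh.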