An Empirical Bayes Approach for Distributed Estimation of Spatial Fields
In this paper we consider a network of spatially distributed sensors that
collect measurement samples of a spatial field and aim to estimate the entire
field in a distributed way (without any central coordinator) by suitably
fusing all network data. We propose a general probabilistic model that can
handle both partial knowledge of the physics generating the spatial field and
purely data-driven inference. Specifically, we adopt an Empirical
Bayes approach in which the spatial field is modeled as a Gaussian Process,
whose mean function is described by means of parametrized equations. We
characterize the Empirical Bayes estimator when nodes are heterogeneous, i.e.,
when they perform different numbers of measurements. Moreover, by exploiting the
sparsity of both the covariance and the (parametrized) mean function of the
Gaussian Process, we are able to design a distributed spatial field estimator.
We corroborate the theoretical results with two numerical simulations: a
stationary temperature field estimation in which the field is described by a
partial differential (heat) equation, and a data-driven inference in which the
mean is parametrized by a cubic spline.
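For concreteness, here is a minimal centralized sketch of the kind of estimator the abstract describes, assuming a squared-exponential covariance and a mean that is linear in the unknown parameters, with a cubic polynomial basis standing in for the cubic spline (all choices below are illustrative, not the paper's actual formulation):
```python
import numpy as np

def se_kernel(X1, X2, ell=0.3, sigma_f=1.0):
    # Squared-exponential covariance between two sets of 1-D locations.
    d = X1[:, None] - X2[None, :]
    return sigma_f**2 * np.exp(-0.5 * (d / ell)**2)

def basis(X):
    # Parametrized mean m(x) = basis(x) @ theta; a cubic polynomial basis
    # stands in for the cubic spline mentioned above (assumption).
    return np.vander(X, 4, increasing=True)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 40)                  # sensor locations
y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(40)  # noisy samples

K = se_kernel(X, X) + 0.1**2 * np.eye(40)      # covariance of the noisy data
H = basis(X)

# Empirical Bayes step: generalized least-squares estimate of the mean
# parameters, which maximizes the marginal likelihood for fixed kernel
# hyperparameters.
Kinv_H = np.linalg.solve(K, H)
theta = np.linalg.solve(H.T @ Kinv_H, Kinv_H.T @ y)

# Plug-in Gaussian Process posterior mean of the field on a test grid.
Xs = np.linspace(0.0, 1.0, 100)
mu = basis(Xs) @ theta + se_kernel(Xs, X) @ np.linalg.solve(K, y - H @ theta)
```
The paper's contribution is to carry out these two steps without a central coordinator, exploiting the sparsity of the covariance and of the mean parametrization; the sketch above is only the centralized baseline.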
A randomized primal distributed algorithm for partitioned and big-data non-convex optimization
In this paper we consider a distributed optimization scenario in which the
aggregate objective function to minimize is partitioned, big-data and possibly
non-convex. Specifically, we focus on a set-up in which the dimension of the
decision variable depends on the network size as well as the number of local
functions, but each local function handled by a node depends only on a (small)
portion of the entire optimization variable. This problem set-up has been shown
to appear in many interesting network application scenarios. As the main
contribution of this paper, we develop a simple primal distributed algorithm to solve the
optimization problem, based on a randomized descent approach, which works under
asynchronous gossip communication. We prove that the proposed asynchronous
algorithm is a proper, ad-hoc version of a coordinate descent method and thus
converges to a stationary point. To show the effectiveness of the proposed
algorithm, we also present numerical simulations on a non-convex quadratic
program, which confirm the theoretical results.
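As a rough illustration of the underlying mechanism, the following is a minimal sketch of randomized (coordinate) descent on a partitioned quadratic cost with one scalar block per node; the gossip communication layer and the paper's actual step-size rule are abstracted away, and a convex quadratic is used so convergence is easy to check, whereas the paper covers the non-convex case:
```python
import numpy as np

rng = np.random.default_rng(1)
n = 10                                    # nodes, one scalar block each
A = rng.standard_normal((n, n))
Q = A.T @ A + n * np.eye(n)               # positive definite (sanity-check case)
b = rng.standard_normal(n)

x = np.zeros(n)                           # stacked local blocks
alpha = 1.0 / np.linalg.norm(Q, 2)        # conservative step size

for _ in range(5000):
    i = rng.integers(n)                   # a randomly awakened node
    # Node i updates only its own block, using the sparse coupling
    # of the cost with the other blocks.
    x[i] -= alpha * (Q[i] @ x - b[i])

print(np.linalg.norm(Q @ x - b))          # stationarity residual, ~0
```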
Distributed Big-Data Optimization via Block Communications
We study distributed multi-agent large-scale optimization problems, wherein
the cost function is composed of a smooth, possibly nonconvex, sum-utility plus a
DC (Difference-of-Convex) regularizer. We consider the scenario where the
dimension of the optimization variables is so large that optimizing and/or
transmitting the entire set of variables could cause unaffordable computation
and communication overhead. To address this issue, we propose the first
distributed algorithm whereby agents optimize and communicate only a portion of
their local variables. The scheme hinges on successive convex approximation
(SCA) to handle the nonconvexity of the objective function, coupled with a
novel block-signal tracking scheme, aiming at locally estimating the average of
the agents' gradients. Asymptotic convergence to stationary solutions of the
nonconvex problem is established. Numerical results on a sparse regression
problem show the effectiveness of the proposed algorithm and the impact of the
block size on its practical convergence speed and communication cost.
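In symbols, the problem class described above can be written as follows (our notation, not necessarily the paper's), assuming the concave part is differentiable:
```latex
\min_{\mathbf{x}\in\mathcal{K}} \;\; \sum_{i=1}^{I} f_i(\mathbf{x})
\;+\; \underbrace{g^{+}(\mathbf{x}) - g^{-}(\mathbf{x})}_{\text{DC regularizer}},
```
with each f_i smooth and possibly nonconvex, and g+ and g- convex. SCA handles the nonconvexity by replacing, at the current iterate x^k, each f_i with a strongly convex surrogate and the concave part -g^- with its linearization, which majorizes it:
```latex
-\,g^{-}(\mathbf{x}) \;\le\; -\,g^{-}(\mathbf{x}^{k})
- \nabla g^{-}(\mathbf{x}^{k})^{\top}\,(\mathbf{x}-\mathbf{x}^{k}).
```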
A Partition-Based Implementation of the Relaxed ADMM for Distributed Convex Optimization over Lossy Networks
In this paper we propose a distributed implementation of the relaxed
Alternating Direction Method of Multipliers algorithm (R-ADMM) for optimization
of a separable convex cost function whose terms are stored by a set of
interacting agents, one term per agent. Specifically, the local cost stored by
each node is in general a function of both the state of the node and the states
of its neighbors, a framework that we refer to as `partition-based'
optimization. This framework offers great flexibility and can be adapted to
a large number of different applications. We show that the partition-based
R-ADMM algorithm we introduce is linked to the relaxed Peaceman-Rachford
Splitting (R-PRS) operator which, historically, has been introduced in the
literature to find the zeros of a sum of functions. Interestingly, by making
use of nonexpansive operator theory, the proposed algorithm is shown to be
provably robust against random packet losses that might occur in the communication
between neighboring nodes. Finally, the effectiveness of the proposed algorithm
is confirmed by a set of compelling numerical simulations run over random
geometric graphs subject to i.i.d. random packet losses.
Comment: Full version of the paper to be presented at the Conference on Decision and Control (CDC) 201
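For reference, a minimal sketch of the relaxed ADMM iteration on the prototype problem min_x f(x) + g(x), with relaxation parameter alpha in (0, 2); the paper specializes this operator-splitting scheme to the partition-based multi-agent setting and to lossy links, neither of which this centralized toy models:
```python
import numpy as np

def prox_f(v, rho, a):
    # Proximal operator of f(x) = 0.5 * ||x - a||^2.
    return (rho * v + a) / (rho + 1.0)

def prox_g(v, rho, lam):
    # Proximal operator of g(x) = lam * ||x||_1 (soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)

rng = np.random.default_rng(2)
a = rng.standard_normal(5)
rho, alpha, lam = 1.0, 1.5, 0.1              # penalty, relaxation, l1 weight

x = z = u = np.zeros(5)
for _ in range(200):
    x = prox_f(z - u, rho, a)
    x_rel = alpha * x + (1.0 - alpha) * z    # relaxation step of R-ADMM
    z = prox_g(x_rel + u, rho, lam)
    u = u + x_rel - z                        # multiplier (dual) update

print(z)                                     # ~ soft-threshold of a at lam
```
Setting alpha = 1 recovers the standard (unrelaxed) ADMM; the link to the relaxed Peaceman-Rachford splitting is what enables the robustness analysis under packet losses.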
Distributed Big-Data Optimization via Block-Iterative Convexification and Averaging
In this paper, we study distributed big-data nonconvex optimization in
multi-agent networks. We consider the (constrained) minimization of the sum of
a smooth (possibly) nonconvex function, i.e., the agents' sum-utility, plus a
convex (possibly) nonsmooth regularizer. Our interest is in big-data problems
wherein there is a large number of variables to optimize. If treated by means
of standard distributed optimization algorithms, these large-scale problems may
be intractable, due to the prohibitive local computation and communication
burden at each node. We propose a novel distributed solution method whereby at
each iteration agents optimize and then communicate (in an uncoordinated
fashion) only a subset of their decision variables. To deal with non-convexity
of the cost function, the novel scheme hinges on Successive Convex
Approximation (SCA) techniques coupled with i) a tracking mechanism
instrumental to locally estimate gradient averages; and ii) a novel block-wise
consensus-based protocol to perform local block-averaging operations and
gradient tracking. Asymptotic convergence to stationary solutions of the
nonconvex problem is established. Finally, numerical results show the
effectiveness of the proposed algorithm and highlight how the block dimension
impacts the communication overhead and the practical convergence speed.
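Below is a minimal sketch of the consensus-based gradient-tracking mechanism in item i), written for the full decision variable and for convex quadratic local costs so the outcome can be checked; the paper's scheme instead updates and communicates a single block per iteration and couples the tracker with the SCA surrogates (illustrative code, not the authors' implementation):
```python
import numpy as np

rng = np.random.default_rng(3)
I, d = 5, 3                                   # agents, variable dimension
A = rng.standard_normal((I, d, d))
Q = np.einsum('ikj,ikl->ijl', A, A) + d * np.eye(d)   # local PD matrices
b = rng.standard_normal((I, d))

def grad(i, v):
    # Gradient of the local cost f_i(v) = 0.5 v'Q_i v - b_i'v.
    return Q[i] @ v - b[i]

W = np.full((I, I), 1.0 / I)                  # averaging weights (complete graph)
x = np.zeros((I, d))                          # local copies of the variable
y = np.array([grad(i, x[i]) for i in range(I)])   # gradient trackers

alpha = 0.02
for _ in range(3000):
    g_old = np.array([grad(i, x[i]) for i in range(I)])
    x = W @ x - alpha * y                     # consensus + descent step
    g_new = np.array([grad(i, x[i]) for i in range(I)])
    y = W @ y + g_new - g_old                 # y_i tracks the average gradient

x_star = np.linalg.solve(Q.sum(axis=0), b.sum(axis=0))   # centralized optimum
print(np.abs(x - x_star).max())               # ~0: consensus and optimality
```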
Distributed Partitioned Big-Data Optimization via Asynchronous Dual Decomposition
In this paper we consider a novel partitioned framework for distributed
optimization in peer-to-peer networks. In several important applications the
agents of a network have to solve an optimization problem with two key
features: (i) the dimension of the decision variable depends on the network
size, and (ii) cost function and constraints have a sparsity structure related
to the communication graph. For this class of problems a straightforward
application of existing consensus methods would suffer from two inefficiencies: poor
scalability and redundancy of shared information. We propose an asynchronous
distributed algorithm, based on dual decomposition and coordinate methods, to
solve partitioned optimization problems. We show that, by exploiting the
problem structure, the solution can be partitioned among the nodes, so that
each node just stores a local copy of a portion of the decision variable
(rather than a copy of the entire decision vector) and solves a small-scale
local problem.
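To fix ideas, here is a minimal sketch of dual decomposition on a two-node toy problem with local costs f_i(x) = 0.5 * ||x - a_i||^2 and the coupling constraint x1 = x2; the paper combines this idea with randomized, asynchronous coordinate updates on the dual variables and a partitioned decision vector (illustrative code only):
```python
import numpy as np

a1, a2 = np.array([1.0, 0.0]), np.array([0.0, 2.0])
lam = np.zeros(2)                        # multiplier of x1 - x2 = 0
tau = 0.5                                # dual step size

for _ in range(100):
    # Each node minimizes its own Lagrangian term independently.
    x1 = a1 - lam                        # argmin f1(x) + lam @ x
    x2 = a2 + lam                        # argmin f2(x) - lam @ x
    lam = lam + tau * (x1 - x2)          # dual ascent on the coupling

print(x1, x2)                            # both ~ (a1 + a2) / 2
```
In the partitioned setting, each dual variable is associated with one edge of the communication graph, so a node only needs the multipliers of its incident edges and its own portion of the decision vector.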
Robust distributed linear programming
This paper presents a robust, distributed algorithm to solve general linear
programs. The algorithm design builds on the characterization of the solutions
of the linear program as saddle points of a modified Lagrangian function. We
show that the resulting continuous-time saddle-point algorithm is provably
correct but, in general, not distributed because of a global parameter
associated with the nonsmooth exact penalty function employed to encode the
inequality constraints of the linear program. This motivates the design of a
discontinuous saddle-point dynamics that, while enjoying the same convergence
guarantees, is fully distributed and scalable with the dimension of the
solution vector. We also characterize the robustness against disturbances and
link failures of the proposed dynamics. Specifically, we show that it is
integral-input-to-state stable but not input-to-state stable. The latter fact
is a consequence of a more general result, which we also establish, stating
that no algorithmic solution for linear programming is input-to-state stable
when uncertainty in the problem data affects the dynamics as a disturbance. Our
results allow us to establish the resilience of the proposed distributed
dynamics to disturbances of finite variation and recurrently disconnected
communication among the agents. Simulations in an optimal control application
illustrate the results.
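For intuition, the following is a minimal sketch of discretized primal-dual (saddle-point) dynamics for the LP min c'x s.t. Ax <= b; to obtain a well-behaved runnable demo we add a smooth quadratic penalty on the constraint violation, whereas the paper employs a nonsmooth exact penalty and analyzes the continuous-time flow directly:
```python
import numpy as np

c = np.array([-1.0, -1.0])                   # maximize x1 + x2
A = np.array([[1.0, 2.0],
              [3.0, 1.0],
              [-1.0, 0.0],
              [0.0, -1.0]])                  # last two rows encode x >= 0
b = np.array([4.0, 6.0, 0.0, 0.0])

x = np.zeros(2)
z = np.zeros(4)                              # dual variables, kept >= 0
h, rho = 0.01, 1.0                           # step size, penalty weight

for _ in range(20000):
    viol = np.maximum(A @ x - b, 0.0)        # constraint violation
    x = x - h * (c + A.T @ z + rho * A.T @ viol)   # primal descent
    z = np.maximum(z + h * (A @ x - b), 0.0)       # projected dual ascent

print(x)                                     # approximately [1.6, 1.2]
```
The paper's point is that such saddle-point dynamics can be made fully distributed (each agent updating only its own components) and remain convergent under disturbances and recurrently disconnected communication.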