An Empirical Bayes Approach for Distributed Estimation of Spatial Fields
In this paper we consider a network of spatially distributed sensors which
collect measurement samples of a spatial field and aim to estimate the entire
field in a distributed way (without any central coordinator) by suitably
fusing all network data. We propose a general probabilistic model that can
handle both partial knowledge of the physics generating the spatial field and
purely data-driven inference. Specifically, we adopt an Empirical
Bayes approach in which the spatial field is modeled as a Gaussian Process,
whose mean function is described by means of parametrized equations. We
characterize the Empirical Bayes estimator when nodes are heterogeneous, i.e.,
they perform different numbers of measurements. Moreover, by exploiting the
sparsity of both the covariance and the (parametrized) mean function of the
Gaussian Process, we are able to design a distributed spatial field estimator.
We corroborate the theoretical results with two numerical simulations: a
stationary temperature field estimation in which the field is described by a
partial differential (heat) equation, and a data-driven inference in which the
mean is parametrized by a cubic spline.
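
To make the Empirical Bayes step concrete, here is a minimal centralized
Python sketch (not the authors' implementation; the RBF kernel, the polynomial
basis, and all names are illustrative assumptions): when the mean is linear in
its parameters, maximizing the GP marginal likelihood over the mean parameters
reduces to a generalized least-squares fit.

```python
import numpy as np

def rbf_kernel(x1, x2, ell=0.2, sigma_f=1.0):
    # Squared-exponential covariance on 1-D locations (illustrative choice).
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return sigma_f**2 * np.exp(-0.5 * d2 / ell**2)

def empirical_bayes_gp(X, Xs, y, basis, sigma_n=0.1):
    """GP regression with a parametrized mean m(x) = basis(x) @ theta.

    X : (n,) measurement locations, y : (n,) samples, Xs : (m,) query points.
    Returns the Empirical Bayes estimate of theta and the posterior mean at Xs.
    """
    Phi, Phis = basis(X), basis(Xs)          # (n, p) and (m, p) design matrices
    Kn = rbf_kernel(X, X) + sigma_n**2 * np.eye(len(X))
    Kn_inv = np.linalg.inv(Kn)
    # For a mean linear in theta, the marginal-likelihood maximizer is a
    # generalized least-squares estimate:
    theta = np.linalg.solve(Phi.T @ Kn_inv @ Phi, Phi.T @ Kn_inv @ y)
    # Plug the estimated mean into the standard GP posterior-mean formula.
    mu_s = Phis @ theta + rbf_kernel(Xs, X) @ Kn_inv @ (y - Phi @ theta)
    return theta, mu_s

# Toy run with a cubic-polynomial mean (a stand-in for the paper's cubic spline).
basis = lambda x: np.vander(x, 4, increasing=True)
X = np.linspace(0.0, 1.0, 30)
y = 2.0 + X**3 + 0.1 * np.random.default_rng(0).normal(size=30)
theta, mu = empirical_bayes_gp(X, np.linspace(0.0, 1.0, 100), y, basis)
```

The distributed estimator in the paper computes these quantities without a
coordinator by exploiting sparsity of the covariance and of the parametrized
mean; the sketch above only shows the underlying centralized estimator.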
Distributed Learning from Interactions in Social Networks
We consider a network scenario in which agents can evaluate each other
according to a score graph that models some interactions. The goal is to design
a distributed protocol, run by the agents, that allows them to learn their
unknown state among a finite set of possible values. We propose a Bayesian
framework in which scores and states are associated with probabilistic events
with unknown parameters and hyperparameters, respectively. We show that each
agent can learn its state by means of a local Bayesian classifier and a
(centralized) Maximum-Likelihood (ML) estimator of parameter-hyperparameter
that combines plain ML and Empirical Bayes approaches. By using tools from
graphical models, which allow us to gain insight on conditional dependencies of
scores and states, we provide a relaxed probabilistic model that ultimately
leads to a parameter-hyperparameter estimator amenable to distributed
computation. To highlight the appropriateness of the proposed relaxation, we
demonstrate the distributed estimators on a social interaction set-up for user
profiling.
Comment: This submission is a shorter work (for conference publication) of a
more comprehensive paper, already submitted as arXiv:1706.04081 (under review
for journal publication). In this short submission only one social set-up is
considered and only one of the relaxed estimators is proposed. Moreover, the
exhaustive analysis, carried out in the longer manuscript, is completely
missing in this version.
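
As a rough illustration of the local classification step described above
(with parameters and hyperparameters assumed already estimated; the variable
names and toy numbers below are assumptions, not the paper's model), each
agent can compute a MAP estimate of its own state from the scores it receives:

```python
import numpy as np

def local_bayes_classify(scores, lik, prior):
    """Local Bayesian classifier: pick the state with maximum posterior
    probability given the scores received from neighbors.

    scores : list of observed score values (one per interaction)
    lik    : (n_states, n_scores) array, lik[x, s] = P(score = s | state = x),
             assumed estimated beforehand (e.g., by the ML / Empirical Bayes step)
    prior  : (n_states,) prior over states, from the estimated hyperparameters
    """
    log_post = np.log(prior).copy()
    for s in scores:
        # Scores are treated as conditionally independent given the agent's
        # state; the paper's graphical-model relaxation yields a similar
        # factorization.
        log_post += np.log(lik[:, s])
    return np.argmax(log_post)  # MAP estimate of the agent's own state

# Toy example: 2 states, 3 score levels, three received scores.
lik = np.array([[0.7, 0.2, 0.1],
                [0.1, 0.3, 0.6]])
print(local_bayes_classify([2, 2, 1], lik, prior=np.array([0.5, 0.5])))
```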
A Partition-Based Implementation of the Relaxed ADMM for Distributed Convex Optimization over Lossy Networks
In this paper we propose a distributed implementation of the relaxed
Alternating Direction Method of Multipliers algorithm (R-ADMM) for optimization
of a separable convex cost function, whose terms are stored by a set of
interacting agents, one term per agent. Specifically, the local cost stored by
each node is in general a function of both the state of the node and the states
of its neighbors, a framework that we refer to as 'partition-based'
optimization. This framework offers great flexibility and can be adapted to
a large number of different applications. We show that the partition-based
R-ADMM algorithm we introduce is linked to the relaxed Peaceman-Rachford
Splitting (R-PRS) operator, which was historically introduced in the
literature to find the zeros of a sum of functions. Interestingly, by making
use of nonexpansive operator theory, we show that the proposed algorithm is
provably robust against random packet losses that might occur in the
communication between neighboring nodes. Finally, the effectiveness of the
proposed algorithm
is confirmed by a set of compelling numerical simulations run over random
geometric graphs subject to i.i.d. random packet losses.
Comment: Full version of the paper to be presented at Conference on Decision
and Control (CDC) 201
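
The connection to the relaxed Peaceman-Rachford splitting can be illustrated
on a toy two-term problem (a sketch under simplifying assumptions, not the
partition-based algorithm itself; the cost functions and names are
hypothetical):

```python
import numpy as np

def prox_quadratic(z, a, rho):
    # prox of f(x) = 0.5 * (x - a)^2 with penalty rho: (rho*z + a) / (rho + 1)
    return (rho * z + a) / (rho + 1.0)

def relaxed_prs(a, b, rho=1.0, alpha=0.5, iters=100):
    """Relaxed Peaceman-Rachford splitting for min f(x) + g(x), with
    f(x) = 0.5(x - a)^2 and g(x) = 0.5(x - b)^2 as a toy separable cost.

    alpha = 0.5 recovers the classic Douglas-Rachford / ADMM scheme;
    alpha in (0, 1) gives the relaxed, averaged (hence nonexpansive) map.
    """
    z = 0.0
    for _ in range(iters):
        x = prox_quadratic(z, a, rho)            # prox of f
        y = prox_quadratic(2 * x - z, b, rho)    # prox of g at the reflection
        z = z + 2 * alpha * (y - x)              # relaxed fixed-point update
        # A lost packet amounts to skipping this update on some edge
        # variables; averagedness is what keeps the iteration convergent.
    return x

print(relaxed_prs(a=0.0, b=4.0))  # minimizer of the sum is 2.0
```

Because the relaxed iteration is an averaged operator, randomly skipping some
of its coordinate updates still yields convergence, which is the intuition
behind the robustness result stated in the abstract.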
Distributed Big-Data Optimization via Block-Iterative Convexification and Averaging
In this paper, we study distributed big-data nonconvex optimization in
multi-agent networks. We consider the (constrained) minimization of the sum of
a smooth (possibly) nonconvex function, i.e., the agents' sum-utility, plus a
convex (possibly) nonsmooth regularizer. Our interest is in big-data problems
wherein there is a large number of variables to optimize. If treated by means
of standard distributed optimization algorithms, these large-scale problems may
be intractable, due to the prohibitive local computation and communication
burden at each node. We propose a novel distributed solution method whereby at
each iteration agents optimize and then communicate (in an uncoordinated
fashion) only a subset of their decision variables. To deal with the
nonconvexity of the cost function, the novel scheme hinges on Successive Convex
Approximation (SCA) techniques coupled with i) a tracking mechanism
instrumental in locally estimating gradient averages; and ii) a novel
block-wise consensus-based protocol to perform local block-averaging
operations and gradient tracking. Asymptotic convergence to stationary
solutions of the
nonconvex problem is established. Finally, numerical results show the
effectiveness of the proposed algorithm and highlight how the block dimension
impacts the communication overhead and practical convergence speed.
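
A heavily simplified single-round sketch of such a scheme (quadratic
surrogate, uniform random block selection, full mixing shown for brevity; the
mixing matrix W, the block partition, and all names are illustrative
assumptions, not the paper's exact protocol):

```python
import numpy as np

def block_sca_step(x, y, grad_f, W, tau=1.0, step=0.5, rng=None):
    """One round of a simplified block-iterative SCA scheme with gradient
    tracking over a network with doubly stochastic mixing matrix W.

    x : (N, d) local copies of the decision vector at the N agents
    y : (N, d) gradient trackers, initialized as y = grad_f(x) at time 0
    grad_f : callable returning the (N, d) stack of local gradients
    Each agent optimizes a single randomly drawn block per round.
    """
    rng = np.random.default_rng() if rng is None else rng
    N, d = x.shape
    n_blocks = 4                                 # illustrative block partition
    blk = d // n_blocks
    g_old = grad_f(x)
    x_new = x.copy()
    for i in range(N):
        b = int(rng.integers(n_blocks)) * blk    # block drawn by agent i
        sl = slice(b, b + blk)
        # Strongly convex quadratic surrogate, solved in closed form:
        #   min_u  y_i[sl]^T (u - x_i[sl]) + (tau/2) ||u - x_i[sl]||^2
        u = x[i, sl] - y[i, sl] / tau
        x_new[i, sl] = x[i, sl] + step * (u - x[i, sl])
    # Full mixing for brevity; the paper communicates only the updated blocks.
    x_next = W @ x_new
    y_next = W @ y + grad_f(x_next) - g_old      # gradient-tracking update
    return x_next, y_next

# Toy run: local quadratics f_i(x) = 0.5 ||x - c_i||^2, so grad_f(x) = x - c.
N, d = 4, 8
c = np.random.default_rng(1).normal(size=(N, d))
grad_f = lambda x: x - c
W = np.full((N, N), 1.0 / N)                     # complete-graph averaging
x, y = np.zeros((N, d)), grad_f(np.zeros((N, d)))
rng = np.random.default_rng(2)
for _ in range(300):
    x, y = block_sca_step(x, y, grad_f, W, rng=rng)
# Rows of x approach c.mean(axis=0), the minimizer of the sum.
```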
Analysis of Newton-Raphson consensus for multi-agent convex optimization under asynchronous and lossy communications
We extend a multi-agent convex-optimization algorithm named Newton-Raphson
consensus to a network scenario that involves directed, asynchronous and lossy
communications. We theoretically analyze the stability and performance of the
algorithm and, in particular, provide sufficient conditions that guarantee
local exponential convergence of the node-states to the global centralized
minimizer even in the presence of packet losses. Finally, we complement the
theoretical analysis with numerical simulations that compare the performance of
the Newton-Raphson consensus against asynchronous implementations of
distributed subgradient methods on real datasets extracted from open-source
databases.
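
For intuition, here is a scalar, synchronous, loss-free sketch of the
Newton-Raphson consensus iteration (variable names and the toy quadratic costs
are assumptions; the paper's analysis covers the directed, asynchronous, and
lossy case):

```python
import numpy as np

def newton_raphson_consensus(fp, fpp, N, W, eps=0.1, iters=200):
    """Scalar Newton-Raphson consensus sketch (synchronous, lossless case).

    fp, fpp : lists of callables, the first and second derivatives of each
              agent's local convex cost f_i.
    W       : (N, N) doubly stochastic consensus matrix.
    Each agent tracks the network averages of g_i = f_i''(x_i)x_i - f_i'(x_i)
    and h_i = f_i''(x_i), and moves toward the Newton direction y / z.
    """
    x = np.zeros(N)
    g = np.array([fpp[i](x[i]) * x[i] - fp[i](x[i]) for i in range(N)])
    h = np.array([fpp[i](x[i]) for i in range(N)])
    y, z = g.copy(), h.copy()
    for _ in range(iters):
        x = (1 - eps) * x + eps * (y / z)        # damped Newton step
        g_new = np.array([fpp[i](x[i]) * x[i] - fp[i](x[i]) for i in range(N)])
        h_new = np.array([fpp[i](x[i]) for i in range(N)])
        y = W @ y + g_new - g                    # dynamic consensus on g
        z = W @ z + h_new - h                    # dynamic consensus on h
        g, h = g_new, h_new
    return x

# Example: f_i(x) = 0.5 * a_i * (x - c_i)^2; the global minimizer is the
# weighted mean sum(a * c) / sum(a) = 8/6.
a, c = np.array([1.0, 2.0, 3.0]), np.array([0.0, 1.0, 2.0])
fp = [lambda x, i=i: a[i] * (x - c[i]) for i in range(3)]
fpp = [lambda x, i=i: a[i] for i in range(3)]
W = np.full((3, 3), 1.0 / 3.0)
print(newton_raphson_consensus(fp, fpp, 3, W))   # ~ 1.333 at every node
```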