Convergence Rate Analysis of Distributed Gossip (Linear Parameter) Estimation: Fundamental Limits and Tradeoffs
The paper considers gossip-based distributed estimation of a (static) distributed
random field (i.e., a large-scale unknown parameter vector) observed by
sparsely interconnected sensors, each of which only observes a small fraction
of the field. We consider linear distributed estimators whose structure
combines the information \emph{flow} among sensors (the \emph{consensus} term
resulting from the local gossiping exchange among sensors when they are able to
communicate) and the information \emph{gathering} measured by the sensors (the
\emph{sensing} or \emph{innovations} term). This leads to mixed time-scale
algorithms--one time scale associated with the consensus and the other with the
innovations. The paper establishes a distributed observability condition
(global observability plus mean connectedness) under which the distributed
estimates are consistent and asymptotically normal. We introduce the
distributed counterpart of the (centralized) Fisher information rate,
which bounds the mean square error reduction rate of any distributed
estimator; we show that under the appropriate modeling and structural network
communication conditions (gossip protocol) the distributed gossip estimator
attains this distributed Fisher information rate, asymptotically achieving the
performance of the optimal centralized estimator. Finally, we study the
behavior of the distributed gossip estimator when the measurements fade (noise
variance grows) with time; in particular, we characterize the maximum rate at
which the noise variance can grow while the distributed estimator remains
consistent, showing that, as long as the centralized estimator is consistent,
the distributed estimator is consistent as well.

Comment: Submitted for publication, 30 pages
Distributed Constrained Recursive Nonlinear Least-Squares Estimation: Algorithms and Asymptotics
This paper focuses on the problem of recursive nonlinear least squares
parameter estimation in multi-agent networks, in which the individual agents
observe sequentially over time an independent and identically distributed
(i.i.d.) time-series consisting of a nonlinear function of the true but unknown
parameter corrupted by noise. A distributed recursive estimator of the
\emph{consensus} + \emph{innovations} type is
proposed, in which the agents update their parameter estimates at each
observation sampling epoch in a collaborative way by simultaneously processing
the latest locally sensed information~(\emph{innovations}) and the parameter
estimates from other agents~(\emph{consensus}) in the local neighborhood
conforming to a pre-specified inter-agent communication topology. Under rather
weak conditions on the connectivity of the inter-agent communication and a
\emph{global observability} criterion, it is shown that at every network agent,
the proposed algorithm leads to consistent parameter estimates. Furthermore,
under standard smoothness assumptions on the local observation functions, the
distributed estimator is shown to yield order-optimal convergence rates, i.e.,
as far as the order of pathwise convergence is concerned, the local parameter
estimates at each agent are as good as the optimal centralized nonlinear least
squares estimator which would require access to all the observations across all
the agents at all times. In order to benchmark the performance of the proposed
distributed estimator against that of the centralized nonlinear
least squares estimator, the asymptotic normality of the estimate sequence is
established and the asymptotic covariance of the distributed estimator is
evaluated. Finally, simulation results are presented which illustrate and
verify the analytical findings.

Comment: 28 pages. Initial Submission: Feb. 2016, Revised: July 2016,
Accepted: September 2016. To appear in IEEE Transactions on Signal and
Information Processing over Networks: Special Issue on Inference and Learning
over Networks
Learning multifractal structure in large networks
Generating random graphs to model networks has a rich history. In this paper,
we analyze and improve upon the multifractal network generator (MFNG)
introduced by Palla et al. We provide a new result on the probability of
subgraphs existing in graphs generated with MFNG. From this result it follows
that we can quickly compute moments of an important set of graph properties,
such as the expected number of edges, stars, and cliques. Specifically, we show
how to compute these moments in time complexity independent of the size of the
graph and the number of recursive levels in the generative model. We leverage
this theory to develop a new method-of-moments algorithm for fitting large networks to
MFNG. Empirically, this new approach effectively simulates properties of
several social and information networks. In terms of matching subgraph counts,
our method outperforms similar algorithms used with the Stochastic Kronecker
Graph model. Furthermore, we present a fast approximation algorithm to generate
graph instances following the multifractal structure. The approximation
scheme is an improvement over previous methods, which ran in time
quadratic in the number of vertices. Combined, our method of moments and fast
sampling scheme provide the first scalable framework for effectively modeling
large networks with MFNG.
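The claim that moments can be computed independently of graph size can be illustrated for the expected edge count. In an MFNG-style model, each node draws one category per recursive level with probabilities given by the box lengths, and an edge appears with the product of per-level link probabilities; by independence across levels, the expected edge probability is a simple power. The specific parameter values below are illustrative assumptions.

```python
import numpy as np
from math import comb

# Hedged sketch: expected number of edges under a multifractal generator,
# computed in O(k^2) time regardless of graph size. Parameters (category
# lengths l, link-probability matrix P, r levels) are illustrative.
l = np.array([0.6, 0.4])                 # category probabilities (box lengths)
P = np.array([[0.9, 0.3],
              [0.3, 0.7]])               # symmetric per-level link probabilities
r = 3                                    # recursive levels
n = 10**6                                # number of nodes (never materialized)

# Averaging the per-level link probability over random categories:
per_level = l @ P @ l                    # = 0.58 for these parameters
expected_edges = comb(n, 2) * per_level**r
print(f"{expected_edges:.3e}")           # about 9.76e+10 expected edges
```

The cost depends only on the number of categories k and not on n or (beyond a scalar power) on the number of recursive levels, which is the property the abstract exploits for fast moment matching.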
Distributed Linear Parameter Estimation: Asymptotically Efficient Adaptive Strategies
The paper considers the problem of distributed adaptive linear parameter
estimation in multi-agent inference networks. Local sensing model information
is only partially available at the agents and inter-agent communication is
assumed to be unpredictable. The paper develops a generic mixed time-scale
stochastic procedure consisting of simultaneous distributed learning and
estimation, in which the agents adaptively assess their relative observation
quality over time and fuse the innovations accordingly. Under rather weak
assumptions on the statistical model and the inter-agent communication, it is
shown that, by properly tuning the consensus potential with respect to the
innovation potential, the asymptotic information rate loss incurred in the
learning process may be made negligible. As such, it is shown that the agent
estimates are asymptotically efficient, in that their asymptotic covariance
coincides with that of a centralized estimator (the inverse of the centralized
Fisher information rate for Gaussian systems) with perfect global model
information and having access to all observations at all times. The proof
techniques are mainly based on convergence arguments for non-Markovian mixed
time scale stochastic approximation procedures. Several approximation results
developed in the process are of independent interest.

Comment: Submitted to the SIAM Journal on Control and Optimization.
Initial Submission: Sept. 2011. Revised: Aug. 201
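The adaptive idea, that agents assess their relative observation quality online and fuse innovations accordingly, can be sketched by letting each agent track a running estimate of its own noise variance and weight its innovation by the inverse of that estimate. The variance estimator, gain schedules, and noise levels below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Hedged sketch of adaptive innovation weighting: agents do not know their
# noise variances and learn them from innovation residuals. All concrete
# choices here are illustrative.
rng = np.random.default_rng(2)
theta = 3.0                               # unknown scalar parameter
sigmas = [0.1, 1.0]                       # agent 0 is a much better sensor
neighbors = {0: [1], 1: [0]}

x = np.zeros(2)                           # local estimates
var_hat = np.ones(2)                      # running noise-variance estimates
for t in range(1, 30001):
    beta, alpha = 1.0 / t**0.6, 1.0 / t
    for i in range(2):
        y = theta + sigmas[i] * rng.standard_normal()
        resid = y - x[i]
        # Recursive variance estimate from the innovation residuals.
        var_hat[i] += (resid**2 - var_hat[i]) / t
        consensus = sum(x[i] - x[j] for j in neighbors[i])
        x[i] += -beta * consensus + alpha * resid / var_hat[i]

print(np.round(x, 2))   # both agents' estimates approach theta = 3.0
```

Weighting innovations by the inverse estimated variance is what lets the better sensor dominate the fused estimate, mimicking how a centralized Gaussian estimator would weight observations by their Fisher information.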
Recursive quantum repeater networks
Internet-scale quantum repeater networks will be heterogeneous in physical
technology, repeater functionality, and management. The classical control
necessary to use the network will therefore face similar issues as Internet
data transmission. Many scalability and management problems that arose during
the development of the Internet might have been solved in a more uniform
fashion, improving flexibility and reducing redundant engineering effort.
Quantum repeater network development is currently at the stage where we risk
similar duplication when separate systems are combined. We propose a unifying
framework that can be used with all existing repeater designs. We introduce the
notion of a Quantum Recursive Network Architecture, developed from the emerging
classical concept of 'recursive networks', extending recursive mechanisms from
a focus on data forwarding to a more general distributed computing request
framework. Recursion abstracts independent transit networks as single relay
nodes, unifies software layering, and virtualizes the addresses of resources to
improve information hiding and resource management. Our architecture is useful
for building arbitrary distributed states, including fundamental distributed
states such as Bell pairs and GHZ, W, and cluster states.

Comment: 14 pages
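The core recursive mechanism, abstracting an entire transit network as a single relay node, can be illustrated with a toy path-resolution sketch. The class and method names here are hypothetical illustrations, not part of any proposed standard.

```python
# Toy sketch of the recursion idea: a whole transit network presents itself
# one layer up as a single relay node, so routing at each layer sees a flat
# graph. Names are illustrative, not from any specification.

class Node:
    def __init__(self, name):
        self.name = name

    def expand(self):
        # A plain node relays directly: the path through it is just itself.
        return [self.name]

class RecursiveNetwork(Node):
    """A network of nodes that, viewed from outside, is a single relay node."""

    def __init__(self, name, internal_path):
        super().__init__(name)
        self.internal_path = internal_path   # ordered nodes inside the network

    def expand(self):
        # Recursion: replace this "node" by the concrete path inside it.
        return [hop for node in self.internal_path for hop in node.expand()]

# Two transit networks, each abstracted as one relay between the end nodes.
campus_a = RecursiveNetwork("A", [Node("a1"), Node("a2")])
campus_b = RecursiveNetwork("B", [Node("b1")])
top_level_path = [Node("alice"), campus_a, campus_b, Node("bob")]

flat = [hop for node in top_level_path for hop in node.expand()]
print(flat)   # ['alice', 'a1', 'a2', 'b1', 'bob']
```

Top-level routing only ever reasons about four hops; each network resolves its own interior, which is the information hiding and address virtualization the architecture aims for.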