Distributed Diffusion-based LMS for Node-Specific Parameter Estimation over Adaptive Networks
A distributed adaptive algorithm is proposed to solve a node-specific
parameter estimation problem where nodes are interested in estimating
parameters of local interest and parameters of global interest to the whole
network. To address the different node-specific parameter estimation problems,
this novel algorithm relies on a diffusion-based implementation of different
Least Mean Squares (LMS) algorithms, each associated with the estimation of a
specific set of local or global parameters. Although all the different LMS
algorithms are coupled, the diffusion-based implementation of each LMS
algorithm is exclusively undertaken by the nodes of the network interested in a
specific set of local or global parameters. To illustrate the effectiveness of
the proposed technique we provide simulation results in the context of
cooperative spectrum sensing in cognitive radio networks.
Comment: 5 pages, 2 figures, Published in Proc. IEEE ICASSP, Florence, Italy, May 201
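The adapt-then-combine (ATC) structure that underlies diffusion LMS can be sketched in a few lines. This is a minimal illustration of plain diffusion LMS for a single global parameter, not the authors' node-specific algorithm; the ring topology, step size, noise level and all variable names are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M, T = 10, 4, 2000            # nodes, parameter length, iterations (illustrative)
mu = 0.01                        # LMS step size
w_true = rng.standard_normal(M)  # global parameter shared by all nodes

# Ring topology with uniform (doubly stochastic) combination weights.
A = np.zeros((N, N))
for k in range(N):
    for n in (k - 1, k, k + 1):
        A[k, n % N] = 1 / 3

w = np.zeros((N, M))             # each node's estimate
for _ in range(T):
    # Adapt: each node runs one LMS step on its own streaming data.
    psi = np.empty_like(w)
    for k in range(N):
        u = rng.standard_normal(M)                    # regressor
        d = u @ w_true + 0.1 * rng.standard_normal()  # noisy measurement
        psi[k] = w[k] + mu * (d - u @ w[k]) * u
    # Combine: each node averages the intermediate estimates of its neighbours.
    w = A @ psi

err = float(np.linalg.norm(w - w_true, axis=1).mean())
```

In the node-specific setting described above, each node would run one such coupled recursion per parameter set it is interested in; the sketch shows only the shared adapt/combine skeleton.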
On the Learning Behavior of Adaptive Networks - Part I: Transient Analysis
This work carries out a detailed transient analysis of the learning behavior
of multi-agent networks, and reveals interesting results about the learning
abilities of distributed strategies. Among other results, the analysis reveals
how combination policies influence the learning process of networked agents,
and how these policies can steer the convergence point towards any of many
possible Pareto optimal solutions. The results also establish that the learning
process of an adaptive network undergoes three (rather than two) well-defined
stages of evolution with distinctive convergence rates during the first two
stages, while attaining a finite mean-square-error (MSE) level in the last
stage. The analysis reveals what aspects of the network topology influence
performance directly and suggests design procedures that can optimize
performance by adjusting the relevant topology parameters. Interestingly, it is
further shown that, in the adaptation regime, each agent in a sparsely
connected network is able to achieve the same performance level as that of a
centralized stochastic-gradient strategy even for left-stochastic combination
strategies. These results lead to a deeper understanding and useful insights on
the convergence behavior of coupled distributed learners. The results also lead
to effective design mechanisms to help diffuse information more thoroughly over
networks.
Comment: to appear in IEEE Transactions on Information Theory, 201
Half a billion simulations: evolutionary algorithms and distributed computing for calibrating the SimpopLocal geographical model
Multi-agent geographical models integrate very large numbers of spatial
interactions. Validating such models requires a large amount of computation for
their simulation and calibration. Here, a new data-processing chain including an
automated calibration procedure is tested on a
computational grid using evolutionary algorithms. This is applied for the first
time to a geographical model designed to simulate the evolution of an early
urban settlement system. The method enables us to reduce the computing time and
provides robust results. Using this method, we identify several parameter
settings that minimise three objective functions that quantify how closely the
model results match a reference pattern. Because the values of each parameter
across the different settings are very close, this estimation considerably
narrows the initially possible domain of variation of the parameters. The model
is thus a useful tool for multiple further applications to empirical historical
situations.
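The calibration idea, searching for parameter settings that minimise objective functions measuring distance to a reference pattern, can be sketched with a simple elitist evolution strategy on a toy model. The model, parameters and objective below are hypothetical stand-ins for illustration, not SimpopLocal or the grid-based procedure of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the simulation model: maps parameters to a "pattern".
def simulate(theta):
    a, b = theta
    x = np.linspace(0, 1, 50)
    return a * np.exp(-b * x)

reference = simulate(np.array([2.0, 3.0]))  # pattern the calibration targets

def objective(theta):
    # Distance between simulated and reference patterns (to be minimised).
    return float(np.sum((simulate(theta) - reference) ** 2))

# (mu + lambda) evolution strategy: mutate, evaluate, keep the best.
pop = rng.uniform(0.0, 5.0, size=(20, 2))
sigma = 0.5                          # mutation scale, annealed over time
for _ in range(200):
    children = pop + sigma * rng.standard_normal(pop.shape)
    both = np.vstack([pop, children])
    scores = np.array([objective(t) for t in both])
    pop = both[np.argsort(scores)[:20]]  # elitist truncation selection
    sigma *= 0.97

best = pop[0]
```

The paper's setting differs in scale (three objectives, half a billion simulations on a grid), but the select-mutate-evaluate loop is the same basic mechanism.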
Distributed Pareto Optimization via Diffusion Strategies
We consider solving multi-objective optimization problems in a distributed
manner by a network of cooperating and learning agents. The problem is
equivalent to optimizing a global cost that is the sum of individual
components. The optimizers of the individual components do not necessarily
coincide and the network therefore needs to seek Pareto optimal solutions. We
develop a distributed solution that relies on a general class of adaptive
diffusion strategies. We show how the diffusion process can be represented as
the cascade composition of three operators: two combination operators and a
gradient descent operator. Using the Banach fixed-point theorem, we establish
the existence of a unique fixed point for the composite cascade. We then study
how close each agent converges towards this fixed point, and also examine how
close the Pareto solution is to the fixed point. We perform a detailed
mean-square error analysis and establish that all agents are able to converge
to the same Pareto optimal solution within a sufficiently small
mean-square-error (MSE) bound even for constant step-sizes. We illustrate one
application of the theory to collaborative decision making in finance by a
network of agents.
Comment: 35 pages, 9 figures, submitted for publicatio
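The cascade of two combination operators around a gradient-descent operator can be sketched for simple quadratic agent costs. All constants, names and the topology are illustrative assumptions; for equal weighting and these quadratic costs, the Pareto point of the aggregate cost is the mean of the individual minimisers, which the sketch uses as a reference.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, mu = 6, 2, 0.01

# Each agent k has its own quadratic cost J_k(w) = ||w - t_k||^2,
# with distinct minimisers t_k, so no single w minimises all of them.
targets = rng.standard_normal((N, M))

# Doubly stochastic combination matrix (uniform weights on a ring).
A = np.zeros((N, N))
for k in range(N):
    A[k, (k - 1) % N] = A[k, (k + 1) % N] = 1 / 3
    A[k, k] = 1 / 3

w = np.zeros((N, M))
for _ in range(1000):
    phi = A @ w                           # first combination operator
    psi = phi - mu * 2 * (phi - targets)  # gradient-descent operator on J_k
    w = A @ psi                           # second combination operator

# Pareto point of sum_k J_k under equal weighting: the mean of the t_k.
pareto = targets.mean(axis=0)
spread = float(np.linalg.norm(w - pareto, axis=1).max())
```

With a constant step size the agents do not agree exactly; the residual `spread` is the small, O(mu) mean-square disagreement the abstract refers to, while the network average lands essentially on the Pareto point.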
Distributed Diffusion-Based LMS for Node-Specific Adaptive Parameter Estimation
A distributed adaptive algorithm is proposed to solve a node-specific
parameter estimation problem where nodes are interested in estimating
parameters of local interest, parameters of common interest to a subset of
nodes and parameters of global interest to the whole network. To address the
different node-specific parameter estimation problems, this novel algorithm
relies on a diffusion-based implementation of different Least Mean Squares
(LMS) algorithms, each associated with the estimation of a specific set of
local, common or global parameters. Although the estimation of the different
sets of parameters is coupled, the implementation of each LMS algorithm is
undertaken only by the nodes of the network interested in a specific set of
local, common or global parameters. The study of convergence in the mean sense reveals
that the proposed algorithm is asymptotically unbiased. Moreover, a
spatial-temporal energy conservation relation is provided to evaluate the
steady-state performance at each node in the mean-square sense. Finally, the
theoretical results and the effectiveness of the proposed technique are
validated through computer simulations in the context of cooperative spectrum
sensing in cognitive radio networks.
Comment: 13 pages, 6 figure
Diffusion Adaptation Strategies for Distributed Optimization and Learning over Networks
We propose an adaptive diffusion mechanism to optimize a global cost function
in a distributed manner over a network of nodes. The cost function is assumed
to consist of a collection of individual components. Diffusion adaptation
allows the nodes to cooperate and diffuse information in real-time; it also
helps alleviate the effects of stochastic gradient noise and measurement noise
through a continuous learning process. We analyze the mean-square-error
performance of the algorithm in some detail, including its transient and
steady-state behavior. We also apply the diffusion algorithm to two problems:
distributed estimation with sparse parameters and distributed localization.
Compared to well-studied incremental methods, diffusion methods do not require
the use of a cyclic path over the nodes and are robust to node and link
failure. Diffusion methods also endow networks with adaptation abilities that
enable the individual nodes to continue learning even when the cost function
changes with time. Examples involving such dynamic cost functions with moving
targets are common in the context of biological networks.
Comment: 34 pages, 6 figures, to appear in IEEE Transactions on Signal Processing, 201
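The adaptation ability under a time-varying cost, the "moving target" case mentioned above, can be sketched with diffusion LMS tracking a slowly drifting parameter. The drift rate, step size, topology and noise level are all illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, mu = 8, 3, 0.1

# Uniform combination over self and two ring neighbours (doubly stochastic).
A = np.zeros((N, N))
for k in range(N):
    for n in (k - 1, k, k + 1):
        A[k, n % N] = 1 / 3

w_true = np.zeros(M)     # target parameter, drifting over time
w = np.zeros((N, M))     # per-node estimates
errs = []
for t in range(3000):
    w_true += 0.001 * rng.standard_normal(M)  # slow random-walk drift
    psi = np.empty_like(w)
    for k in range(N):
        u = rng.standard_normal(M)
        d = u @ w_true + 0.05 * rng.standard_normal()
        psi[k] = w[k] + mu * (d - u @ w[k]) * u   # adapt
    w = A @ psi                                   # combine
    errs.append(np.linalg.norm(w.mean(axis=0) - w_true))

tail_err = float(np.mean(errs[-500:]))
```

Because the step size stays constant, the network keeps tracking the drifting target instead of freezing, which is the continuous-learning property the abstract emphasises over decaying-step-size schemes.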
Ecosystem-Oriented Distributed Evolutionary Computing
We create a novel optimisation technique inspired by natural ecosystems,
where the optimisation works at two levels: a first optimisation, migration of
genes which are distributed in a peer-to-peer network, operating continuously
in time; this process feeds a second optimisation based on evolutionary
computing that operates locally on single peers and is aimed at finding
solutions that satisfy locally relevant constraints. We draw on distributed
evolutionary computing from computer science, together with the relevant
theory from theoretical biology, including the fields of
evolutionary and ecological theory, the topological structure of ecosystems,
and evolutionary processes within distributed environments. We then define
ecosystem-oriented distributed evolutionary computing, imbued with the
properties of self-organisation, scalability and sustainability of natural
ecosystems, including a novel form of distributed evolutionary computing.
Finally, we conclude with a discussion of the apparent compromises resulting
from the hybrid model created, such as the network topology.
Comment: 8 pages, 5 figures. arXiv admin note: text overlap with arXiv:1112.0204, arXiv:0712.4159, arXiv:0712.4153, arXiv:0712.4102, arXiv:0910.067