Diffusion LMS Over Multitask Networks
The diffusion LMS algorithm has been extensively studied in recent years. This efficient strategy makes it possible to address distributed optimization problems over networks in which nodes collaboratively estimate a single parameter vector. Nevertheless, several practical problems are multitask-oriented in the sense that the optimum parameter vector may not be the same for every node. This raises the issue of studying the performance of the diffusion LMS algorithm when it is run, either intentionally or unintentionally, in a multitask environment. In this paper, we conduct a theoretical analysis of the stochastic behavior of diffusion LMS when the single-task hypothesis is violated. We analyze the competing factors that influence the performance of diffusion LMS in the multitask environment and that allow the algorithm to continue to deliver performance superior to non-cooperative strategies in some useful circumstances. We also propose an unsupervised clustering strategy that allows each node to select, via adaptive adjustments of combination weights, the neighboring nodes with which it can collaborate to estimate a common parameter vector. Simulations are presented to illustrate the theoretical results and to demonstrate the efficiency of the proposed clustering strategy.
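The cooperative mechanism behind diffusion LMS can be illustrated with a minimal adapt-then-combine (ATC) sketch on a synthetic single-task network; the function and variable names below are illustrative, not taken from the paper:

```python
import numpy as np

def atc_diffusion_lms(U, d, A, mu=0.01):
    """Adapt-then-combine (ATC) diffusion LMS sketch.

    U: regressors, shape (n_iter, N, M) -- one M-dim regressor per node per step
    d: desired signals, shape (n_iter, N)
    A: left-stochastic combination matrix, shape (N, N); A[l, k] is the
       weight node k assigns to neighbor l (each column sums to 1)
    """
    n_iter, N, M = U.shape
    w = np.zeros((N, M))              # one estimate per node
    for i in range(n_iter):
        # Adapt: each node runs a local LMS step on its own data
        psi = np.empty_like(w)
        for k in range(N):
            e = d[i, k] - U[i, k] @ w[k]
            psi[k] = w[k] + mu * e * U[i, k]
        # Combine: each node averages its neighbors' intermediate estimates
        for k in range(N):
            w[k] = sum(A[l, k] * psi[l] for l in range(N))
    return w
```

In the single-task setting sketched here, every node converges toward the common parameter vector; the paper's analysis concerns what happens when that assumption fails.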
Proximal Multitask Learning over Networks with Sparsity-inducing Coregularization
In this work, we consider multitask learning problems where clusters of nodes
are interested in estimating their own parameter vector. Cooperation among
clusters is beneficial when the optimal models of adjacent clusters share a
significant number of similar entries. We propose a fully distributed algorithm for solving
this problem. The approach relies on minimizing a global mean-square error
criterion regularized by non-differentiable terms to promote cooperation among
neighboring clusters. A general diffusion forward-backward splitting strategy
is introduced. Then, it is specialized to the case of sparsity promoting
regularizers. A closed-form expression for the proximal operator of a weighted
sum of L1-norms is derived to achieve higher efficiency. We also provide
conditions on the step-sizes that ensure convergence of the algorithm in the
mean and mean-square error sense. Simulations are conducted to illustrate the
effectiveness of the strategy.
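For sparsity-promoting regularizers of this kind, the backward (proximal) step reduces to soft-thresholding in the simplest unweighted case. Below is a minimal sketch of that operator and of one forward-backward iteration at a single node; it illustrates the general splitting idea, not the paper's exact multitask operator:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1: elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward_step(w, grad, mu, eta):
    """One forward-backward iteration at a single node (illustrative):
    a gradient step on the smooth mean-square-error term (forward),
    followed by the prox of the l1 regularizer (backward)."""
    return soft_threshold(w - mu * grad, mu * eta)
```

Entries whose magnitude falls below the threshold are set exactly to zero, which is what promotes sparsity in the estimates.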
A Multitask Diffusion Strategy with Optimized Inter-Cluster Cooperation
We consider a multitask estimation problem where nodes in a network are
divided into several connected clusters, with each cluster performing a
least-mean-squares estimation of a different random parameter vector. Inspired
by the adapt-then-combine diffusion strategy, we propose a multitask diffusion
strategy whose mean stability can be ensured whenever individual nodes are
stable in the mean, regardless of the inter-cluster cooperation weights. In
addition, the proposed strategy achieves asymptotically unbiased estimation
when the parameters have the same mean. We also develop an
inter-cluster cooperation weights selection scheme that allows each node in the
network to locally optimize its inter-cluster cooperation weights. Numerical
results demonstrate that our approach leads to a lower average steady-state
network mean-square deviation, compared with using weights selected by various
other commonly adopted methods in the literature.
Comment: 30 pages, 8 figures, submitted to IEEE Journal of Selected Topics in Signal Processing
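The per-node mean stability referred to above is the classical LMS condition: node k is stable in the mean when its step-size satisfies mu_k < 2 / lambda_max(R_{u,k}), where R_{u,k} is the node's regressor covariance matrix. A small check of this condition (an illustrative helper, not the paper's full analysis):

```python
import numpy as np

def node_mean_stable(R_u, mu):
    """Classical per-node mean-stability condition for LMS:
    mu < 2 / lambda_max(R_u), with R_u the (symmetric, PSD)
    regressor covariance matrix of the node."""
    lam_max = np.linalg.eigvalsh(R_u).max()
    return mu < 2.0 / lam_max
```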
Adaptation and learning over networks for nonlinear system modeling
In this chapter, we analyze nonlinear filtering problems in distributed
environments, e.g., sensor networks or peer-to-peer protocols. In these
scenarios, the agents in the environment receive measurements in a streaming
fashion, and they are required to estimate a common (nonlinear) model by
alternating local computations and communications with their neighbors. We
focus on the important distinction between single-task problems, where the
underlying model is common to all agents, and multitask problems, where each
agent might converge to a different model due to, e.g., spatial dependencies or
other factors. Currently, most of the literature on distributed learning in the
nonlinear case has focused on the single-task case, which may be a strong
limitation in real-world scenarios. After introducing the problem and reviewing
the existing approaches, we describe a simple kernel-based algorithm tailored
for the multitask case. We evaluate the proposal on a simulated benchmark task,
and we conclude by detailing currently open problems and lines of research.
Comment: To be published as a chapter in `Adaptive Learning Methods for Nonlinear System Modeling', Elsevier Publishing, Eds. D. Comminiello and J.C. Principe (2018)
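A representative kernel-based adaptive filter of the kind such chapters build on is kernel LMS (KLMS). The sketch below shows the local, single-agent building block, assuming a Gaussian kernel; it is not the specific multitask algorithm described in the chapter:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel between two vectors."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

class KLMS:
    """Kernel LMS sketch: the nonlinear adaptive filter a single agent
    could run locally before exchanging information with neighbors."""
    def __init__(self, mu=0.5, sigma=1.0):
        self.mu, self.sigma = mu, sigma
        self.centers, self.alphas = [], []

    def predict(self, x):
        # Functional estimate: kernel expansion over stored centers
        return sum(a * gaussian_kernel(c, x, self.sigma)
                   for c, a in zip(self.centers, self.alphas))

    def update(self, x, d):
        # Stochastic gradient step in the RKHS: store the new sample
        # as a center weighted by the scaled prediction error
        e = d - self.predict(x)
        self.centers.append(np.asarray(x, dtype=float))
        self.alphas.append(self.mu * e)
        return e
```

In a distributed setting, agents would additionally exchange and combine their kernel expansions (or compressed versions of them) with neighbors, which is precisely where the single-task versus multitask distinction matters.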
Multitask Diffusion Adaptation over Networks
Adaptive networks are suitable for decentralized inference tasks, e.g., to
monitor complex natural phenomena. Recent research works have intensively
studied distributed optimization problems in the case where the nodes have to
estimate a single optimum parameter vector collaboratively. However, there are
many important applications that are multitask-oriented in the sense that there
are multiple optimum parameter vectors to be inferred simultaneously, in a
collaborative manner, over the area covered by the network. In this paper, we
employ diffusion strategies to develop distributed algorithms that address
multitask problems by minimizing an appropriate mean-square error criterion
with L2-regularization. The stability and convergence of the algorithm in
the mean and in the mean-square sense are analyzed. Simulations are conducted to
verify the theoretical findings, and to illustrate how the distributed strategy
can be used in several useful applications related to spectral sensing, target
localization, and hyperspectral data unmixing.
Comment: 29 pages, 11 figures, submitted for publication
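With an L2 (squared-norm) co-regularizer, the per-node adaptation adds a smoothing term that pulls each estimate toward its neighbors'. A single-step sketch under these assumptions (illustrative names and a simplified recursion, not the paper's exact algorithm):

```python
import numpy as np

def multitask_lms_step(w_k, u, d, neighbors_w, rho, mu=0.01, eta=0.1):
    """One adaptation step of an L2-regularized multitask LMS update
    (illustrative): a local LMS correction plus a smoothing term that
    pulls node k's estimate toward its neighbors' estimates.

    w_k:         current estimate at node k, shape (M,)
    u, d:        regressor (M,) and desired scalar at node k
    neighbors_w: list of neighbor estimates w_l, each of shape (M,)
    rho:         nonnegative smoothing weights, one per neighbor
    """
    e = d - u @ w_k
    # Gradient of the squared-norm co-regularizer sum_l rho_l ||w_k - w_l||^2
    grad_reg = sum(r * (w_l - w_k) for r, w_l in zip(rho, neighbors_w))
    return w_k + mu * e * u + mu * eta * grad_reg
```

Setting eta = 0 recovers the non-cooperative LMS update, while larger eta enforces more similarity between neighboring tasks.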
Diffusion LMS for clustered multitask networks
Recent research works on distributed adaptive networks have intensively
studied the case where the nodes estimate a common parameter vector
collaboratively. However, there are many applications that are
multitask-oriented in the sense that there are multiple parameter vectors that
need to be inferred simultaneously. In this paper, we employ diffusion
strategies to develop distributed algorithms that address clustered multitask
problems by minimizing an appropriate mean-square error criterion with
L2-regularization. Some results on the mean-square stability and
convergence of the algorithm are also provided. Simulations are conducted to
illustrate the theoretical findings.
Comment: 5 pages, 6 figures, submitted to ICASSP 2014
Distributed Unmixing of Hyperspectral Data With Sparsity Constraint
Spectral unmixing (SU) is a data processing problem in hyperspectral remote
sensing. The significant challenge in the SU problem is to identify endmembers
and their weights accurately. For blind estimation of the signature and
fractional abundance matrices, nonnegative matrix factorization (NMF) and its
extensions are widely used. One of the constraints added to NMF is a sparsity
constraint regularized by the L1/2 norm. In this paper, a new algorithm based
on distributed optimization is used for spectral unmixing. In the proposed
algorithm, a network consisting of single-node clusters is employed, in which
each pixel of the hyperspectral image is considered as a node. The distributed
unmixing with sparsity constraint is optimized with the diffusion LMS strategy,
and the update equations for the fractional abundance and signature matrices
are obtained. Simulation results, based on defined performance metrics,
illustrate the advantage of the proposed algorithm in spectral unmixing of
hyperspectral data compared with other methods. The results show that the AAD
and SAD of the proposed approach are improved by about 6 and 27 percent,
respectively, compared with distributed unmixing at SNR = 25 dB.
Comment: 6 pages, conference paper
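The kind of multiplicative update that sparsity-constrained NMF methods build on can be sketched as follows. This is a centralized, single-node illustration with an L1/2-style penalty entering the denominator of the abundance update; `lam` and `eps` are illustrative parameters, and it is not the paper's distributed recursion:

```python
import numpy as np

def nmf_l_half_step(X, A, S, lam=0.01, eps=1e-9):
    """One pair of multiplicative updates for X ~ A @ S with A, S >= 0
    (centralized sketch). The L1/2 sparsity penalty on the abundance
    matrix S appears as an extra term in the denominator of its update,
    shrinking small entries toward zero. eps guards against division by
    zero and keeps the factors strictly nonnegative."""
    A = A * (X @ S.T) / (A @ S @ S.T + eps)
    S = S * (A.T @ X) / (A.T @ A @ S + 0.5 * lam * (S + eps) ** -0.5 + eps)
    return A, S
```

Because the updates are multiplicative, nonnegativity of A and S is preserved at every iteration, which is what makes this family of algorithms natural for abundance estimation.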