
    Proximal Multitask Learning over Networks with Sparsity-inducing Coregularization

    In this work, we consider multitask learning problems where clusters of nodes are interested in estimating their own parameter vector. Cooperation among clusters is beneficial when the optimal models of adjacent clusters share a significant number of similar entries. We propose a fully distributed algorithm for solving this problem. The approach relies on minimizing a global mean-square error criterion regularized by non-differentiable terms that promote cooperation among neighboring clusters. A general diffusion forward-backward splitting strategy is introduced and then specialized to the case of sparsity-promoting regularizers. A closed-form expression for the proximal operator of a weighted sum of ℓ1-norms is derived to achieve higher efficiency. We also provide conditions on the step sizes that ensure convergence of the algorithm in the mean and mean-square error sense. Simulations are conducted to illustrate the effectiveness of the strategy.
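    In a forward-backward splitting scheme, each gradient (LMS-type) step is followed by a proximal step. For a separable weighted ℓ1 penalty, the proximal operator reduces to component-wise soft thresholding, sketched below; the paper's closed form covers the more general weighted sum of ℓ1-norms, and the function name and values here are purely illustrative:

```python
import numpy as np

def prox_weighted_l1(v, weights, step):
    """Proximal operator of x -> step * sum_i weights[i] * |x[i]|,
    i.e. component-wise soft thresholding with per-entry thresholds."""
    v = np.asarray(v, dtype=float)
    thresh = step * np.asarray(weights, dtype=float)
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

# Entries whose magnitude is below their threshold are set exactly to zero,
# which is what makes ell_1-type regularizers sparsity-promoting.
v = np.array([0.3, -1.5, 0.05])
w = np.array([1.0, 1.0, 1.0])
print(prox_weighted_l1(v, w, 0.1))  # entries: 0.2, -1.4, 0.0
```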

    Distributed Estimation of Spatially Varying Distributed Parameter System

    Adaptive filters play a significant role in digital signal processing and wireless communication. The LMS algorithm is widely adopted in real-time applications because of its low computational complexity and simplicity. The adaptive distributed strategy is built on a diffusion cooperation scheme among nodes at different locations dispersed over a wide geographical area. Computations are performed at every node and the results are shared among nodes so as to obtain precise estimates of the parameters of interest. In some scenarios, the parameters to be estimated vary over both space and time across the network. A set of basis functions, i.e., Chebyshev polynomials, is used to describe the space-varying nature of the parameters, and a diffusion least mean-squares strategy is proposed to recover them. The parameters of concern are assessed for both one-dimensional and two-dimensional networks. Stability and convergence of the proposed algorithm are analysed, and expressions are derived to predict its behavior. Network stochastic matrices are used to combine the information exchanged between nodes, and the results show that network performance also depends on the choice of combination matrices. The resulting algorithm is distributed, cooperative, and able to respond to real-time changes in the environment.
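    The diffusion cooperation scheme can be sketched in its simplest adapt-then-combine form for a space-invariant parameter vector; the ring topology, combination weights, step size, and data model below are illustrative assumptions rather than the paper's space-varying setup:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, mu = 5, 3, 0.05                 # nodes, filter length, step size
w_true = rng.standard_normal(M)       # parameter vector to be estimated

# Row-stochastic combination matrix for a ring topology
A = np.zeros((N, N))
for k in range(N):
    A[k, k] = 0.5
    A[k, (k + 1) % N] = 0.25
    A[k, (k - 1) % N] = 0.25

w = np.zeros((N, M))                  # per-node estimates
for _ in range(2000):
    psi = np.empty_like(w)
    for k in range(N):                # adapt: local LMS update at each node
        u = rng.standard_normal(M)                       # regressor
        d = u @ w_true + 0.01 * rng.standard_normal()    # noisy measurement
        psi[k] = w[k] + mu * (d - u @ w[k]) * u
    w = A @ psi                       # combine: mix neighbor estimates

print(np.max(np.abs(w - w_true)))     # small residual error at every node
```

The combine step is where the network stochastic matrix enters; changing `A` changes the steady-state performance, consistent with the abstract's observation that results depend on the combination matrices.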

    A multiple beta wavelet-based locally regularized ultraorthogonal forward regression algorithm for time-varying system identification with applications to EEG

    Time-varying (TV) nonlinear systems widely exist in various fields of engineering and science. Effective identification and modeling of TV systems is a challenging problem due to the nonstationarity and nonlinearity of the associated processes. In this paper, a novel parametric modeling algorithm is proposed to deal with this problem based on a TV nonlinear autoregressive with exogenous input (TV-NARX) model. A new class of multiple beta wavelet (MBW) basis functions is introduced to represent the TV coefficients of the TV-NARX model to enable the tracking of both smooth trends and sharp changes of the system behavior. To produce a parsimonious model structure, a locally regularized ultraorthogonal forward regression (LRUOFR) algorithm aided by the adjustable prediction error sum of squares (APRESS) criterion is investigated for sparse model term selection and parameter estimation. Simulation studies and a real application to EEG data show that the proposed MBW-LRUOFR algorithm can effectively capture the global and local features of nonstationary systems and obtain an optimal model, even for signals contaminated with severe colored noise.
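    The underlying basis-expansion idea, writing a time-varying coefficient as a fixed linear combination of known basis functions so that only constant weights need to be estimated, can be sketched as follows. Gaussian bumps stand in for the paper's multiple beta wavelets, and the scalar model is a deliberately minimal stand-in for a full TV-NARX structure:

```python
import numpy as np

# Basis-expansion trick: a(t) = sum_m theta_m * b_m(t), so estimating the
# constant weights theta_m turns a time-varying problem into ordinary
# least squares. Gaussian bumps below are an illustrative basis choice.
T = 200
t = np.linspace(0.0, 1.0, T)
centers = np.linspace(0.0, 1.0, 8)
width = 0.1
B = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))  # T x 8

a_true = np.sin(2 * np.pi * t)                 # smoothly varying coefficient
x = np.random.default_rng(1).standard_normal(T)
y = a_true * x                                 # minimal TV model: y(t) = a(t) x(t)

# The design matrix absorbs the basis functions; OLS recovers the weights
Phi = B * x[:, None]
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
a_hat = B @ theta                              # reconstructed TV coefficient
print(np.max(np.abs(a_hat - a_true)))          # reconstruction error is small
```

In the paper, this expansion is combined with forward regression and local regularization so that only the informative basis/model terms survive; the sketch above keeps all terms for clarity.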

    Diffusion LMS Over Multitask Networks

    The diffusion LMS algorithm has been extensively studied in recent years. This efficient strategy makes it possible to address distributed optimization problems over networks in the case where nodes have to collaboratively estimate a single parameter vector. Nevertheless, several problems in practice are multitask-oriented in the sense that the optimum parameter vector may not be the same for every node. This raises the issue of studying the performance of the diffusion LMS algorithm when it is run, either intentionally or unintentionally, in a multitask environment. In this paper, we conduct a theoretical analysis of the stochastic behavior of diffusion LMS in the case where the single-task hypothesis is violated. We analyze the competing factors that influence the performance of diffusion LMS in the multitask environment and that allow the algorithm to continue to deliver performance superior to non-cooperative strategies in some useful circumstances. We also propose an unsupervised clustering strategy that allows each node to select, via adaptive adjustments of combination weights, the neighboring nodes with which it can collaborate to estimate a common parameter vector. Simulations are presented to illustrate the theoretical results and to demonstrate the efficiency of the proposed clustering strategy.
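    One common way to realize such an adaptive clustering rule is to assign larger combination weights to neighbors whose intermediate estimates stay close to a node's own estimate. The sketch below follows that idea on a toy two-cluster network; the inverse-distance weight rule, the fixed self-share, and all parameter values are illustrative assumptions, not necessarily the paper's exact strategy:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, mu = 4, 2, 0.05                  # nodes, dimension, step size
# Two clusters of nodes with different optimal vectors (toy multitask setup)
w_opt = [np.array([1.0, 0.0]), np.array([1.0, 0.0]),
         np.array([-1.0, 0.0]), np.array([-1.0, 0.0])]

w = np.zeros((N, M))                   # current per-node estimates
gamma = np.zeros((N, N))               # smoothed squared distances to neighbors
for _ in range(3000):
    psi = np.empty_like(w)
    for k in range(N):                 # adapt: local LMS step toward own task
        u = rng.standard_normal(M)
        d = u @ w_opt[k] + 0.01 * rng.standard_normal()
        psi[k] = w[k] + mu * (d - u @ w[k]) * u
    for k in range(N):                 # combine: weights favor close neighbors
        gamma[k] = 0.95 * gamma[k] + 0.05 * np.sum((psi - w[k]) ** 2, axis=1)
        inv = 1.0 / (gamma[k] + 1e-6)
        a = inv / inv.sum()            # inverse-distance combination weights
        w[k] = 0.5 * psi[k] + 0.5 * (a @ psi)   # keep a fixed self-share

print(np.round(w, 2))                  # each node recovers its cluster's vector
```

Because cross-cluster estimates drift apart while same-cluster estimates stay close, the weights concentrate within each cluster, so cooperation survives inside clusters without the cross-cluster bias that plain diffusion LMS would incur.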