
    A Sparsity-Aware Adaptive Algorithm for Distributed Learning

    In this paper, a sparsity-aware adaptive algorithm for distributed learning in diffusion networks is developed. The algorithm follows the set-theoretic estimation rationale. At each time instant and at each node of the network, a closed convex set, known as a property set, is constructed based on the received measurements; this defines the region in which the solution is searched for. In this paper, the property sets take the form of hyperslabs. The goal is to find a point that belongs to the intersection of these hyperslabs. To this end, sparsity-encouraging variable metric projections onto the hyperslabs have been adopted. Moreover, sparsity is also imposed by employing variable metric projections onto weighted ℓ1 balls. A combine-adapt cooperation strategy is adopted. Under some mild assumptions, the scheme enjoys monotonicity, asymptotic optimality, and strong convergence to a point that lies in the consensus subspace. Finally, numerical examples verify the validity of the proposed scheme, compared to other algorithms developed in the context of sparse adaptive learning.
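    The core geometric operation in this line of work is the projection onto a hyperslab, the property set built from one measurement pair. A minimal sketch (the paper's variable-metric version additionally reweights the projection; the function name here is illustrative):

```python
import numpy as np

def project_hyperslab(w, x, d, eps):
    """Euclidean projection of w onto the hyperslab
    S = {v : |d - x^T v| <= eps} defined by one measurement pair (x, d)."""
    e = d - x @ w
    if abs(e) <= eps:
        return w.copy()  # already inside the property set
    # Move w the minimum distance needed to land on the slab boundary
    return w + ((e - np.sign(e) * eps) / (x @ x)) * x
```

    After projecting a point that lies outside the slab, the residual |d - x^T w| sits exactly on the tolerance eps, which is what makes repeated projections converge into the intersection of the sets.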

    Diffusion recursive least squares algorithm based on triangular decomposition

    In this paper, diffusion strategies for the QR-decomposition-based recursive least squares algorithm (DQR-RLS) and its sign version (DQR-sRLS) are introduced for distributed networks. Using the QR-decomposition method and the Cholesky factorization, a modified Kalman vector is obtained adaptively with the help of a unitary rotation, which reduces the complexity from updating an inverse autocorrelation matrix to updating a vector. Following the diffusion strategies, combine-then-adapt (CTA) and adapt-then-combine (ATC) variants of the DQR-RLS and DQR-sRLS algorithms are proposed, each consisting of a combination step and an adaptation step. To minimize the cost function, the diffused CTA-DQR-RLS, ATC-DQR-RLS, CTA-DQR-sRLS, and ATC-DQR-sRLS algorithms are compared. Simulation results show that the proposed DQR-RLS-based and DQR-sRLS-based algorithms clearly achieve better performance than the standard combine-then-adapt diffusion RLS (CTA-DRLS) and ATC-DRLS mechanisms.
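    The CTA/ATC distinction is purely about the ordering of the two steps. A minimal sketch of both orderings, using a plain LMS update as a stand-in for the node-local (D)QR-RLS recursion (the matrices, names, and step size here are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def atc_step(W, A, X, d, mu=0.1):
    """One adapt-then-combine (ATC) diffusion step over an N-node network.

    W : (N, M) per-node estimates   A : (N, N) combiner, rows sum to 1
    X : (N, M) per-node regressors  d : (N,)  per-node desired samples
    A plain LMS update stands in for the node-local (D)QR-RLS recursion.
    """
    E = d - np.einsum('nm,nm->n', X, W)   # local a priori errors
    Psi = W + mu * E[:, None] * X         # adapt: intermediate estimates
    return A @ Psi                        # combine: neighborhood averaging

def cta_step(W, A, X, d, mu=0.1):
    """Combine-then-adapt (CTA): same ingredients, reversed order."""
    Phi = A @ W                           # combine first
    E = d - np.einsum('nm,nm->n', X, Phi)
    return Phi + mu * E[:, None] * X      # then adapt
```

    With a single node (A the identity) both orderings reduce to the stand-alone adaptive filter; they differ only when neighbors exchange estimates.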

    Study of L0-norm constraint normalized subband adaptive filtering algorithm

    Limited by a fixed step size and sparsity penalty factor, conventional sparsity-aware normalized subband adaptive filtering (NSAF) algorithms face a trade-off between high filtering accuracy and fast convergence. To deal with this problem, this paper proposes variable step-size L0-norm constraint NSAF algorithms (VSS-L0-NSAFs) for sparse system identification. We first analyze the mean-square-deviation (MSD) behavior of the L0-NSAF algorithm based on a novel recursion form, and derive corresponding expressions for the cases where the background noise variance is available and unavailable, with the correlation degree of the system input indicated by a scaling parameter r. Based on these derivations, we develop an effective variable step-size scheme by minimizing upper bounds of the MSD under some reasonable assumptions and a lemma. To further improve performance, an effective reset strategy is incorporated into the presented algorithms to handle non-stationary situations. Finally, numerical simulations corroborate that the proposed algorithms achieve better estimation accuracy and tracking capability than existing related algorithms in sparse system identification and adaptive echo cancellation scenarios.
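    L0-norm-constrained filters of this family typically replace the non-differentiable L0 norm with a smooth surrogate whose gradient acts as a "zero attractor" on small taps. A minimal fullband sketch (the paper operates in subbands; the surrogate, parameter values, and function names below are common illustrative choices, not the paper's exact update):

```python
import numpy as np

def zero_attractor(w, beta=5.0):
    """Subgradient of the exponential L0 surrogate sum_i (1 - exp(-beta|w_i|)).
    Strongly attracts near-zero taps, leaves large taps almost untouched."""
    return beta * np.sign(w) * np.exp(-beta * np.abs(w))

def l0_nlms_step(w, x, d, mu=0.5, kappa=1e-3, beta=5.0, eps=1e-8):
    """One normalized-LMS step followed by the L0 attraction term
    (a fullband stand-in for the subband NSAF update)."""
    e = d - x @ w
    w = w + mu * e * x / (x @ x + eps)  # normalized gradient step
    return w - kappa * zero_attractor(w, beta)
```

    The variable step-size idea in the paper then replaces the fixed mu with a value chosen to minimize an upper bound on the MSD at each iteration.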

    Sparsity-promoting adaptive algorithm for distributed learning in diffusion networks

    In this paper, a sparsity-promoting adaptive algorithm for distributed learning in diffusion networks is developed. The algorithm follows the set-theoretic estimation rationale, i.e., at each time instant and at each node, a closed convex set, namely a hyperslab, is constructed around the current measurement point. This defines the region in which the solution lies. The algorithm seeks a solution in the intersection of these hyperslabs by a sequence of projections. Sparsity is encouraged in two complementary ways: a) by employing extra projections onto a weighted ℓ1 ball, which complies with our desire to constrain the respective weighted ℓ1 norm, and b) by adopting variable metric projections onto the hyperslabs, which implicitly quantify data mismatch. A combine-adapt cooperation strategy is adopted. Under some mild assumptions, the scheme enjoys a number of elegant convergence properties. Finally, numerical examples verify the validity of the proposed scheme, compared to other algorithms developed in the context of sparse adaptive learning.
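    The extra sparsity-inducing projection mentioned in a) can be sketched for the simpler unweighted ℓ1 ball; the weighted-ball projection used in the paper generalizes this sort-and-soft-threshold construction:

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection onto the l1 ball {w : ||w||_1 <= radius}.
    Unweighted version; the paper's weighted-ball projection generalizes it."""
    if np.abs(v).sum() <= radius:
        return v.copy()  # already feasible
    u = np.sort(np.abs(v))[::-1]                  # sorted magnitudes
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)     # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```

    The projection soft-thresholds the vector, zeroing small components outright, which is exactly why interleaving it with the hyperslab projections promotes sparse estimates.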

    Learning Edge Representations via Low-Rank Asymmetric Projections

    We propose a new method for embedding graphs while preserving directed edge information. Learning such continuous-space vector representations (or embeddings) of the nodes in a graph is an important first step toward using network information (from social networks, user-item graphs, knowledge bases, etc.) in many machine learning tasks. Unlike previous work, we (1) explicitly model an edge as a function of node embeddings, and (2) propose a novel objective, the "graph likelihood", which contrasts information from sampled random walks with non-existent edges. Individually, both of these contributions improve the learned representations, especially when there are memory constraints on the total size of the embeddings. Combined, they enable us to significantly improve on the state of the art by learning more concise representations that better preserve the graph structure. We evaluate our method on a variety of link-prediction tasks, including social networks, collaboration networks, and protein interactions, showing that it learns representations with error reductions of up to 76% and 55% on directed and undirected graphs, respectively. In addition, the representations learned by our method are quite space efficient, producing embeddings with higher structure-preserving accuracy that are 10 times smaller.
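    Modeling an edge as a function of node embeddings with a low-rank asymmetric projection can be sketched as a bilinear score f(u)^T (L R^T) f(v); the dimensions, variable names, and random embeddings below are illustrative placeholders for the learned quantities in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, r = 5, 8, 2                 # nodes, embedding dim, projection rank

Y = rng.normal(size=(N, D))       # node embeddings (learned in the paper)
L = rng.normal(size=(D, r))       # left low-rank factor
R = rng.normal(size=(D, r))       # right low-rank factor

def edge_score(u, v):
    """Directed edge score f(u)^T (L R^T) f(v). Factoring the D x D
    bilinear form as L R^T keeps only 2*D*r parameters."""
    return float(Y[u] @ L @ (R.T @ Y[v]))
```

    Because L R^T is not symmetric, edge_score(u, v) and edge_score(v, u) generally differ, which is what lets the embedding preserve edge direction while the low rank keeps the representation compact.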

    Affine Projection Algorithm Over Acoustic Sensor Networks for Active Noise Control

    Acoustic sensor networks (ASNs) are an effective solution for implementing active noise control (ANC) systems by means of distributed adaptive algorithms. On one hand, ASNs provide scalable systems where the signal processing load is distributed among the network nodes. On the other hand, their noise reduction performance is comparable to that of the respective centralized processing systems. In this sense, the distributed multiple error filtered-x least mean squares (DMEFxLMS) adaptive algorithm has been shown to achieve the same performance as its centralized counterpart as long as there are no communication constraints in the underlying ASN. Regarding affine projection (AP) adaptive algorithms, some distributed approaches that are approximated versions of the multichannel filtered-x affine projection (MFxAP) algorithm have been previously proposed. These AP algorithms can efficiently share the processing load among the nodes, but at the expense of worse convergence properties. In this paper we develop the exact distributed multichannel filtered-x AP (EFxAP) algorithm, which obtains the same solution as the MFxAP algorithm as long as there are no communication constraints in the underlying ASN. In the EFxAP algorithm, each node can compute a part of, or the entire, inverse matrix needed by the centralized MFxAP algorithm. We propose three different strategies that obtain significant computational savings: 1) Gauss elimination, 2) block LU factorization, and 3) the matrix inversion lemma. As a result, each node computes only between 25% and 60% of the number of multiplications required by direct inversion of the matrix. Regarding transient and steady-state performance, the EFxAP exhibits the fastest convergence and the highest noise level reduction for any size of the acoustic network and any projection order of the AP algorithm, compared to the DMEFxLMS and two previously reported distributed AP algorithms.
    This work was supported by the EU together with the Spanish Government through RTI2018-098085B-C41 (MINECO/FEDER) and by Generalitat Valenciana through PROMETEO/2019/109.
    Ferrer Contreras, M.; Diego Antón, M. D.; Piñero, G.; Gonzalez, A. (2021). Affine Projection Algorithm Over Acoustic Sensor Networks for Active Noise Control. IEEE/ACM Transactions on Audio, Speech and Language Processing, 29:448-461. https://doi.org/10.1109/TASLP.2020.3042590
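    The filtered-x structure underlying both the DMEFxLMS and the FxAP family can be sketched in its simplest single-channel form, where the reference is filtered through a model of the secondary path before the gradient update (buffer layout, names, and step size below are illustrative assumptions, not the paper's multichannel algorithm):

```python
import numpy as np

def fxlms_step(w, x_buf, e, s_hat, mu=1e-3):
    """One single-channel filtered-x LMS step (the building block that the
    multichannel, distributed FxLMS/FxAP algorithms generalize).

    w     : (M,) adaptive control filter taps
    x_buf : reference samples, newest first, len >= M + len(s_hat) - 1
    e     : scalar error-microphone sample
    s_hat : estimate of the secondary-path impulse response
    """
    M, L = len(w), len(s_hat)
    # Filter the reference through the secondary-path model
    xf = np.array([s_hat @ x_buf[i:i + L] for i in range(M)])
    return w - mu * e * xf  # steepest descent on e^2 (ANC sign convention)
```

    An affine projection update replaces the single filtered-reference vector with a matrix of the last few of them and solves a small regularized linear system, which is the matrix inverse that the EFxAP strategies distribute across nodes.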

    Sparse Distributed Learning Based on Diffusion Adaptation

    This article proposes diffusion LMS strategies for distributed estimation over adaptive networks that are able to exploit sparsity in the underlying system model. The approach relies on convex regularization, common in compressive sensing, to enhance the detection of sparsity via a diffusive process over the network. The resulting algorithms endow networks with learning abilities, allowing them to learn the sparse structure from the incoming data in real time and to track variations in the sparsity of the model. We provide convergence and mean-square performance analyses of the proposed method and show under what conditions it outperforms the unregularized diffusion version. We also show how to adaptively select the regularization parameter. Simulation results illustrate the advantage of the proposed filters for sparse data recovery. (To appear in IEEE Transactions on Signal Processing.)
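    A common instance of convex-regularized diffusion LMS adds an ℓ1 (zero-attracting) subgradient term to the local adaptation step before combining. A minimal ATC sketch under that assumption (the names, step sizes, and the specific ℓ1 choice here are illustrative, not necessarily the paper's exact regularizer):

```python
import numpy as np

def sparse_diffusion_lms_step(W, A, X, d, mu=0.05, rho=1e-4):
    """One ATC diffusion LMS step with an l1 (zero-attracting) subgradient,
    a common convex-regularization choice for sparsity-aware diffusion.

    W : (N, M) estimates        A : (N, N) combiner, rows sum to 1
    X : (N, M) regressors       d : (N,)  measurements
    """
    E = d - np.einsum('nm,nm->n', X, W)                # local errors
    Psi = W + mu * E[:, None] * X - rho * np.sign(W)   # adapt + attract to zero
    return A @ Psi                                     # combine
```

    The adaptive selection of the regularization parameter discussed in the abstract corresponds to tuning rho online rather than fixing it.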

    Proportionate Recursive Maximum Correntropy Criterion Adaptive Filtering Algorithms and their Performance Analysis

    The maximum correntropy criterion (MCC) has been employed to design outlier-robust adaptive filtering algorithms, among which the recursive MCC (RMCC) algorithm is a typical example. Motivated by the success of our recently proposed proportionate recursive least squares (PRLS) algorithm for sparse system identification, we introduce the proportionate updating (PU) mechanism into the RMCC, leading to two sparsity-aware RMCC algorithms: the proportionate recursive MCC (PRMCC) algorithm and the combinational PRMCC (CPRMCC) algorithm. The CPRMCC is implemented as an adaptive convex combination of two PRMCC filters. For the PRMCC, we analyze its stability condition and mean-square performance and, based on this analysis, obtain optimal parameter selection in nonstationary environments. A performance study of the CPRMCC shows that in steady state it performs at least as well as the better of its component PRMCC filters. Numerical simulations of sparse system identification corroborate the advantage of the proposed algorithms as well as the validity of the theoretical analysis.
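    The two ingredients combined here, a correntropy kernel that discounts outlier errors and a proportionate gain matrix that speeds up large taps, can be sketched in a simplified non-recursive (stochastic-gradient) form; parameter values and the PNLMS-style gain assignment below are illustrative assumptions, not the PRMCC recursion itself:

```python
import numpy as np

def prop_mcc_step(w, x, d, mu=0.2, sigma=1.0, delta=1e-2):
    """One stochastic-gradient MCC step with a PNLMS-style proportionate
    gain vector (a simplified, non-recursive sketch of the PRMCC idea)."""
    e = d - x @ w
    kappa = np.exp(-e**2 / (2 * sigma**2))  # correntropy kernel: ~0 for outliers
    g = np.abs(w) + delta                   # proportionate gains favor large taps
    return w + mu * kappa * e * (g / g.sum()) * x
```

    The kernel weight kappa collapses toward zero for impulsive errors, which is what gives MCC-type filters their robustness, while the gain vector redistributes the step size toward the active taps of a sparse system.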