
    Robust Reduced-Rank Adaptive Processing Based on Parallel Subgradient Projection and Krylov Subspace Techniques

    In this paper, we propose a novel reduced-rank adaptive filtering algorithm by blending the idea of Krylov subspace methods with the set-theoretic adaptive filtering framework. Unlike the existing Krylov-subspace-based reduced-rank methods, the proposed algorithm tracks the optimal point in the sense of minimizing the "true" mean square error (MSE) in the Krylov subspace, even when the estimated statistics become erroneous (e.g., due to sudden changes in the environment). Therefore, compared with those existing methods, the proposed algorithm is better suited to adaptive filtering applications. The algorithm is analyzed based on a modified version of the adaptive projected subgradient method (APSM). Numerical examples demonstrate that the proposed algorithm enjoys better tracking performance than the existing methods for the interference suppression problem in code-division multiple-access (CDMA) systems as well as for simple system identification problems. Comment: 10 figures. In IEEE Transactions on Signal Processing, 201
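
    As a rough, hedged illustration of the general idea (not the paper's APSM-based algorithm), the sketch below runs an NLMS-type adaptive filter inside a Krylov subspace built from running estimates of the input covariance and cross-correlation. All names (krylov_basis, reduced_rank_nlms, R_hat, p_hat) and parameter values are illustrative assumptions.

```python
import numpy as np

def krylov_basis(R, p, rank):
    """Orthonormal basis of the Krylov subspace span{p, R p, ..., R^(rank-1) p}."""
    n = p.shape[0]
    K = np.empty((n, rank))
    v = p.copy()
    for i in range(rank):
        K[:, i] = v
        v = R @ v
    Q, _ = np.linalg.qr(K)               # orthonormalize the Krylov vectors
    return Q

def reduced_rank_nlms(x, d, rank=4, mu=0.5, eps=1e-6, beta=0.99):
    """Toy reduced-rank adaptive filter: NLMS-type updates carried out in a
    Krylov subspace built from running estimates of the covariance R and
    cross-correlation p. Illustrative only -- the paper adapts via a modified
    adaptive projected subgradient method, not plain NLMS."""
    n = x.shape[1]
    R_hat = eps * np.eye(n)              # running covariance estimate
    p_hat = np.zeros(n)                  # running cross-correlation estimate
    w_low = np.zeros(rank)               # reduced-rank filter coefficients
    y = np.zeros(d.shape[0])
    for t in range(d.shape[0]):
        xt = x[t]
        R_hat = beta * R_hat + (1 - beta) * np.outer(xt, xt)
        p_hat = beta * p_hat + (1 - beta) * d[t] * xt
        S = krylov_basis(R_hat, p_hat, rank)   # current rank-D subspace
        xt_low = S.T @ xt                      # compressed regressor
        y[t] = w_low @ xt_low
        e = d[t] - y[t]
        w_low = w_low + mu * e * xt_low / (xt_low @ xt_low + eps)
    return y
```

    The point carried over from the abstract is that adaptation acts on the compressed regressor S.T @ x rather than on the full-length input, so only rank-many coefficients are adapted.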

    A Sparsity-Aware Adaptive Algorithm for Distributed Learning

    In this paper, a sparsity-aware adaptive algorithm for distributed learning in diffusion networks is developed. The algorithm follows the set-theoretic estimation rationale. At each time instant and at each node of the network, a closed convex set, known as a property set, is constructed based on the received measurements; this defines the region in which the solution is searched for. In this paper, the property sets take the form of hyperslabs. The goal is to find a point that belongs to the intersection of these hyperslabs. To this end, sparsity-encouraging variable metric projections onto the hyperslabs have been adopted. Moreover, sparsity is also imposed by employing variable metric projections onto weighted $\ell_1$ balls. A combine-adapt cooperation strategy is adopted. Under some mild assumptions, the scheme enjoys monotonicity, asymptotic optimality and strong convergence to a point that lies in the consensus subspace. Finally, numerical examples verify the validity of the proposed scheme, compared to other algorithms that have been developed in the context of sparse adaptive learning.
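
    As a minimal sketch of the set-theoretic rationale described above, the code below projects a node's estimate onto a hyperslab (the property set built from one measurement) and runs one combine-adapt diffusion step. It uses the plain Euclidean metric and omits the sparsity-promoting variable metric and the weighted $\ell_1$-ball projections; the names project_hyperslab and combine_adapt_step are illustrative, not from the paper.

```python
import numpy as np

def project_hyperslab(w, x, d, eps):
    """Euclidean projection of w onto the hyperslab {h : |d - x.T h| <= eps}.
    (The paper uses sparsity-promoting variable-metric projections instead.)"""
    r = d - x @ w
    if abs(r) <= eps:
        return w                                  # already inside the slab
    return w + ((r - np.sign(r) * eps) / (x @ x + 1e-12)) * x

def combine_adapt_step(W, A, X, D, eps=0.01, mu=1.0):
    """One toy combine-adapt iteration over all nodes.
    W: (num_nodes, dim) current estimates, A: row-stochastic combination
    matrix, X: (num_nodes, dim) local regressors, D: (num_nodes,) measurements."""
    W_comb = A @ W                                # combine neighbors' estimates
    W_next = np.empty_like(W)
    for k in range(W.shape[0]):                   # adapt: move toward the local slab
        w = W_comb[k]
        W_next[k] = w + mu * (project_hyperslab(w, X[k], D[k], eps) - w)
    return W_next
```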

    The Convergence Guarantees of a Non-convex Approach for Sparse Recovery

    In the area of sparse recovery, numerous studies suggest that non-convex penalties may induce better sparsity than convex ones, but the corresponding non-convex algorithms have so far lacked convergence guarantees from the initial solution to the global optimum. This paper aims to provide performance guarantees for a non-convex approach to sparse recovery. Specifically, the concept of weak convexity is incorporated into a class of sparsity-inducing penalties to characterize the non-convexity. Borrowing the idea of the projected subgradient method, an algorithm is proposed to solve the non-convex optimization problem. In addition, a uniform approximate projection is adopted in the projection step to make this algorithm computationally tractable for large-scale problems. The convergence analysis is provided in the noisy scenario. It is shown that if the non-convexity of the penalty is below a threshold (which is inversely proportional to the distance between the initial solution and the sparse signal), the recovery error of the solution is linear in both the step size and the noise term. Numerical simulations are implemented to test the performance of the proposed approach and verify the theoretical analysis. Comment: 33 pages, 7 figures
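
    To make the recipe concrete, the toy sketch below runs a projected subgradient iteration with one weakly convex sparsity-inducing penalty (a minimax-concave-style penalty, chosen here only as an example) and an exact pseudoinverse-based projection onto the measurement-consistency set {x : Ax = y}; the paper instead uses a uniform approximate projection to keep large-scale problems tractable. All names and parameter values are illustrative.

```python
import numpy as np

def mcp_subgrad(x, lam=1.0, gamma=4.0):
    """Subgradient of a minimax-concave-style penalty, one example of a
    weakly convex (hence non-convex) sparsity-inducing penalty."""
    g = lam * np.sign(x) - x / gamma
    g[np.abs(x) > gamma * lam] = 0.0              # penalty is flat for large entries
    return g

def projected_subgradient(A, y, n_iter=200, step=0.05, lam=1.0, gamma=4.0):
    """Toy projected subgradient method for  min J(x)  s.t.  A x = y,
    assuming A has full row rank. The projection onto {x : A x = y} is done
    exactly with a pseudoinverse here; the paper uses a cheaper uniform
    approximate projection instead."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                                # feasible starting point
    for _ in range(n_iter):
        x = x - step * mcp_subgrad(x, lam, gamma)   # subgradient step on the penalty
        x = x - A_pinv @ (A @ x - y)                # project back onto A x = y
    return x
```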

    Stochastic Subgradient Algorithms for Strongly Convex Optimization over Distributed Networks

    We study diffusion- and consensus-based optimization of a sum of unknown convex objective functions over distributed networks. The only access to these functions is through stochastic gradient oracles, each of which is available only at a different node, and a limited number of gradient oracle calls is allowed at each node. In this framework, we introduce a convex optimization algorithm based on stochastic gradient descent (SGD) updates. In particular, we use a carefully designed time-dependent weighted averaging of the SGD iterates, which yields a convergence rate of $O\left(\frac{N\sqrt{N}}{T}\right)$ after $T$ gradient updates for each node on a network of $N$ nodes. We then show that after $T$ gradient oracle calls, the average SGD iterate achieves a mean square deviation (MSD) of $O\left(\frac{\sqrt{N}}{T}\right)$. This rate of convergence is optimal as it matches the performance lower bound up to constant terms. Similar to the SGD algorithm, the computational complexity of the proposed algorithm scales linearly with the dimensionality of the data. Furthermore, the communication load of the proposed method is the same as that of the SGD algorithm. Thus, the proposed algorithm is highly efficient in terms of complexity and communication load. We illustrate the merits of the algorithm with respect to state-of-the-art methods over benchmark real-life data sets and widely studied network topologies.
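
    A minimal sketch of the kind of scheme described above: each node mixes its iterate with its neighbors' through a doubly stochastic combination matrix, takes a local stochastic gradient step with a decaying step size, and maintains a time-weighted running average of its iterates. The linear-in-t averaging weights and the 1/t step size are simple choices consistent with the description, not necessarily the exact ones analyzed in the paper.

```python
import numpy as np

def distributed_sgd(grads, W, dim, T, step0=1.0):
    """Toy diffusion SGD with a time-weighted average of the iterates.
    grads: list of per-node stochastic gradient oracles, grads[k](x, t) -> array,
    W:     doubly stochastic combination matrix (num_nodes x num_nodes)."""
    N = len(grads)
    X = np.zeros((N, dim))            # current iterate at each node
    X_avg = np.zeros((N, dim))        # time-weighted running averages
    weight_sum = 0.0
    for t in range(1, T + 1):
        X = W @ X                                    # consensus / diffusion step
        for k in range(N):
            X[k] -= (step0 / t) * grads[k](X[k], t)  # local stochastic gradient step
        weight_sum += t
        X_avg += (t / weight_sum) * (X - X_avg)      # weight proportional to t
    return X_avg

# Example: each node sees noisy gradients of 0.5 * ||x - c_k||^2 (strongly convex)
rng = np.random.default_rng(0)
targets = rng.normal(size=(4, 3))
oracles = [lambda x, t, c=c: (x - c) + 0.1 * rng.normal(size=x.shape) for c in targets]
W = np.full((4, 4), 0.25)             # fully connected network, uniform weights
print(distributed_sgd(oracles, W, dim=3, T=500))
```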

    Adaptive Robust Distributed Learning in Diffusion Sensor Networks

    In this paper, the problem of adaptive distributed learning in diffusion networks is considered. The algorithms are developed within the convex set-theoretic framework. More specifically, they are based on computationally simple geometric projections onto closed convex sets. The paper suggests a novel combine-project-adapt protocol for cooperation among the nodes of the network; such a protocol fits naturally with the philosophy that underlies the projection-based rationale. Moreover, the possibility that some of the nodes may fail is also considered, and it is addressed by employing loss functions from robust statistics. Such loss functions can easily be accommodated in the adopted algorithmic framework; all that is required of a loss function is convexity. Under some mild assumptions, the proposed algorithms enjoy monotonicity, asymptotic optimality, asymptotic consensus, strong convergence and linear complexity with respect to the number of unknown parameters. Finally, experiments in the context of the system-identification task verify the validity of the proposed algorithmic schemes, which are compared to other recent algorithms that have been developed for adaptive distributed learning.
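
    The sketch below illustrates the projection-based, robust flavor of the protocol: each node combines its neighbors' estimates and then adapts via a subgradient projection onto the zero-sublevel set of a Huber-type loss (one example of a convex loss from robust statistics). The paper's separate "project" step onto an auxiliary convex set is omitted here for brevity; all names, step sizes and thresholds are illustrative.

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber loss of a scalar residual (one example of a convex robust loss)."""
    return 0.5 * r * r if abs(r) <= delta else delta * (abs(r) - 0.5 * delta)

def huber_deriv(r, delta=1.0):
    return r if abs(r) <= delta else delta * np.sign(r)

def robust_subgrad_projection(w, x, d, delta=1.0, eps=0.1):
    """Subgradient projection of w onto {h : huber(d - x.T h) - eps <= 0},
    the sublevel set of a robust loss built from one local measurement."""
    r = d - x @ w
    loss = huber(r, delta) - eps
    if loss <= 0:
        return w                                   # already consistent with the data
    g = -huber_deriv(r, delta) * x                 # gradient of the loss at w
    return w - (loss / (g @ g + 1e-12)) * g

def combine_project_adapt_step(W, A, X, D, mu=0.5):
    """One toy iteration: combine neighbors' estimates, then adapt each node
    with the robust subgradient projection above.
    W: (num_nodes, dim) estimates, A: combination matrix, X/D: local data."""
    W_comb = A @ W                                 # combine step
    W_next = np.empty_like(W)
    for k in range(W.shape[0]):
        w = W_comb[k]
        W_next[k] = w + mu * (robust_subgrad_projection(w, X[k], D[k]) - w)
    return W_next
```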