
    Robust Reduced-Rank Adaptive Processing Based on Parallel Subgradient Projection and Krylov Subspace Techniques

    In this paper, we propose a novel reduced-rank adaptive filtering algorithm by blending the idea of Krylov subspace methods with the set-theoretic adaptive filtering framework. Unlike existing Krylov-subspace-based reduced-rank methods, the proposed algorithm tracks the optimal point in the sense of minimizing the 'true' mean square error (MSE) in the Krylov subspace, even when the estimated statistics become erroneous (e.g., due to sudden changes of the environment). Therefore, compared with those existing methods, the proposed algorithm is better suited to adaptive filtering applications. The algorithm is analyzed based on a modified version of the adaptive projected subgradient method (APSM). Numerical examples demonstrate that the proposed algorithm enjoys better tracking performance than the existing methods for the interference suppression problem in code-division multiple-access (CDMA) systems as well as for simple system identification problems. Comment: 10 figures. In IEEE Transactions on Signal Processing, 201
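
    As a rough illustration of the reduced-rank idea underlying this line of work (not the paper's APSM-based algorithm), the following Python sketch builds an orthonormal Krylov basis from sample statistics and computes the MMSE filter restricted to that subspace. The function names, the chosen rank, and the toy statistics are all illustrative assumptions.

```python
import numpy as np

def krylov_basis(R, p, rank):
    """Orthonormal basis of the Krylov subspace span{p, Rp, ..., R^(rank-1) p}."""
    cols = [p]
    for _ in range(rank - 1):
        cols.append(R @ cols[-1])
    # QR orthonormalizes the (generally ill-conditioned) Krylov vectors
    Q, _ = np.linalg.qr(np.column_stack(cols))
    return Q

# Toy statistics: R plays the role of the input autocorrelation matrix,
# p the cross-correlation between input and desired response.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
R = A @ A.T + np.eye(8)            # symmetric positive definite
p = rng.standard_normal(8)

S = krylov_basis(R, p, rank=3)     # reduced-rank subspace: 3 instead of 8 dimensions
# MMSE filter restricted to the subspace: w = S (S^T R S)^{-1} S^T p
w_reduced = S @ np.linalg.solve(S.T @ R @ S, S.T @ p)
w_full = np.linalg.solve(R, p)     # full-rank Wiener solution, for comparison
print(np.linalg.norm(w_reduced - w_full))
```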

    A Sparsity-Aware Adaptive Algorithm for Distributed Learning

    In this paper, a sparsity-aware adaptive algorithm for distributed learning in diffusion networks is developed. The algorithm follows the set-theoretic estimation rationale. At each time instant and at each node of the network, a closed convex set, known as the property set, is constructed based on the received measurements; this defines the region in which the solution is searched for. In this paper, the property sets take the form of hyperslabs. The goal is to find a point that belongs to the intersection of these hyperslabs. To this end, sparsity-encouraging variable metric projections onto the hyperslabs have been adopted. Moreover, sparsity is also imposed by employing variable metric projections onto weighted ℓ1 balls. A combine-adapt cooperation strategy is adopted. Under some mild assumptions, the scheme enjoys monotonicity, asymptotic optimality and strong convergence to a point that lies in the consensus subspace. Finally, numerical examples verify the validity of the proposed scheme, compared to other algorithms that have been developed in the context of sparse adaptive learning.
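
    The sketch below is a loose, simplified rendition of the set-theoretic ingredients named in the abstract: a ring diffusion network running combine-adapt steps, with a plain metric projection onto a hyperslab followed by componentwise soft thresholding standing in for the paper's sparsity-promoting variable metric projections and weighted ℓ1-ball projections. The network size, hyperslab half-width eps and threshold tau are arbitrary choices for illustration.

```python
import numpy as np

def project_hyperslab(w, x, d, eps):
    """Metric projection of w onto the hyperslab {h : |d - x^T h| <= eps}."""
    r = d - x @ w
    if abs(r) <= eps:
        return w
    return w + (r - np.sign(r) * eps) / (x @ x) * x

def soft_threshold(w, tau):
    """Componentwise soft thresholding; promotes sparsity in the estimate."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

# Toy diffusion network: a ring of 4 nodes estimating a sparse 10-tap vector.
rng = np.random.default_rng(1)
w_true = np.zeros(10)
w_true[[1, 6]] = [0.8, -0.5]
neighbors = {k: [(k - 1) % 4, k, (k + 1) % 4] for k in range(4)}
W = [np.zeros(10) for _ in range(4)]

for _ in range(300):
    # combine: average the estimates within each neighborhood (uniform weights)
    W_comb = [np.mean([W[j] for j in neighbors[k]], axis=0) for k in range(4)]
    # adapt: project onto the local hyperslab, then soft-threshold for sparsity
    for k in range(4):
        x = rng.standard_normal(10)
        d = x @ w_true + 0.01 * rng.standard_normal()
        W[k] = soft_threshold(project_hyperslab(W_comb[k], x, d, eps=0.02), tau=1e-3)

print(np.linalg.norm(W[0] - w_true))
```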

    Distributed Adaptive Learning with Multiple Kernels in Diffusion Networks

    We propose an adaptive scheme for distributed learning of nonlinear functions by a network of nodes. The proposed algorithm consists of a local adaptation stage utilizing multiple kernels with projections onto hyperslabs and a diffusion stage to achieve consensus on the estimates over the whole network. Multiple kernels are incorporated to enhance the approximation of functions with several high- and low-frequency components, as is common in practical scenarios. We provide a thorough convergence analysis of the proposed scheme based on the metric of the Cartesian product of multiple reproducing kernel Hilbert spaces. To this end, we introduce a modified consensus matrix considering this specific metric and prove its equivalence to the ordinary consensus matrix. Moreover, the use of hyperslabs enables a significant reduction of the computational demand with only a minor loss in performance. Numerical evaluations with synthetic and real data are conducted, showing the efficacy of the proposed algorithm compared to state-of-the-art schemes. Comment: Double-column 15 pages, 10 figures, submitted to IEEE Trans. Signal Processing
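
    The following Python fragment sketches only the local multikernel adaptation stage described above, not the diffusion/consensus stage or the paper's exact algorithm: two Gaussian kernel widths over a fixed random dictionary, with a hyperslab-projection update on the coefficient vector that is skipped whenever the error already lies within the hyperslab. The dictionary construction, kernel widths and tolerance eps are assumptions made for the example.

```python
import numpy as np

def gaussian_kernel(x, centers, width):
    """Gaussian kernel evaluations between a single input x and all dictionary centers."""
    return np.exp(-np.sum((x - centers) ** 2, axis=-1) / (2.0 * width ** 2))

rng = np.random.default_rng(2)
widths = [0.1, 1.0]                              # two widths: high- and low-frequency behaviour
centers = rng.uniform(-3, 3, size=(30, 1))       # fixed random dictionary (illustrative)
alpha = np.zeros((len(widths), len(centers)))    # one coefficient vector per kernel

def predict(x):
    return sum(alpha[m] @ gaussian_kernel(x, centers, widths[m]) for m in range(len(widths)))

f = lambda x: np.sin(3.0 * x) + 0.3 * x          # unknown nonlinear function to be learned
eps = 0.05                                       # hyperslab half-width: small errors trigger no update

for _ in range(2000):
    x = rng.uniform(-3, 3, size=(1,))
    d = f(x[0]) + 0.01 * rng.standard_normal()
    err = d - predict(x)
    if abs(err) > eps:
        # hyperslab projection in coefficient space: step scaled by the squared feature norm
        k = np.stack([gaussian_kernel(x, centers, w) for w in widths])
        alpha += (err - np.sign(err) * eps) / np.sum(k ** 2) * k

for x0 in (-2.0, 0.0, 2.0):
    print(x0, float(predict(np.array([x0]))), float(f(x0)))
```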

    Robust Subspace Tracking Algorithms in Signal Processing: A Brief Survey

    Principal component analysis (PCA) and subspace estimation (SE) are popular data analysis tools used in a wide range of applications. The main interest in PCA/SE is for dimensionality reduction and low-rank approximation purposes. The emergence of big data streams has led to several essential issues for performing PCA/SE. Among them are (i) the size of such data streams increases over time, (ii) the underlying models may be time-dependent, and (iii) the problem of dealing with uncertainty and incompleteness in the data. A robust variant of PCA/SE for such data streams, namely robust online PCA or robust subspace tracking (RST), has been introduced as a good alternative. The main goal of this paper is to provide a brief survey of recent RST algorithms in signal processing. In particular, we begin this survey by introducing the basic ideas of the RST problem. Then, different aspects of RST are reviewed with respect to different kinds of non-Gaussian noise and sparse constraints. Our own contributions on this topic are also highlighted.

    Robust Subspace Learning: Robust PCA, Robust Subspace Tracking, and Robust Subspace Recovery

    PCA is one of the most widely used dimension reduction techniques. A related, easier problem is "subspace learning" or "subspace estimation". Given relatively clean data, both are easily solved via the singular value decomposition (SVD). The problem of subspace learning or PCA in the presence of outliers is called robust subspace learning or robust PCA (RPCA). For long data sequences, if one tries to use a single lower-dimensional subspace to represent the data, the required subspace dimension may end up being quite large. For such data, a better model is to assume that it lies in a low-dimensional subspace that can change over time, albeit gradually. The problem of tracking such data (and the subspaces) while being robust to outliers is called robust subspace tracking (RST). This article provides a magazine-style overview of the entire field of robust subspace learning and tracking. In particular, solutions for three problems are discussed in detail: RPCA via sparse+low-rank matrix decomposition (S+LR), RST via S+LR, and "robust subspace recovery" (RSR). RSR assumes that an entire data vector is either an outlier or an inlier. The S+LR formulation instead assumes that outliers occur on only a few data vector indices and hence are well modeled as sparse corruptions. Comment: To appear, IEEE Signal Processing Magazine, July 201
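
    As a hedged illustration of the S+LR idea (not any specific algorithm from the article), the sketch below runs exact block coordinate descent on the convex surrogate 0.5*||M - L - S||_F^2 + tau*||L||_* + lam*||S||_1, alternating singular value thresholding for the low-rank part and entrywise soft thresholding for the sparse outliers; the regularization weights and data sizes are arbitrary.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, lam):
    """Entrywise soft thresholding: proximal operator of lam * (l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

rng = np.random.default_rng(3)
n, r = 50, 3
L_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # low-rank component
S_true = np.zeros((n, n))
mask = rng.random((n, n)) < 0.05
S_true[mask] = 10.0 * rng.standard_normal(mask.sum())                 # sparse outliers
M = L_true + S_true

L, S = np.zeros((n, n)), np.zeros((n, n))
tau, lam = 1.0, 0.5                                                   # arbitrary weights
for _ in range(200):
    L = svt(M - S, tau)    # exact minimization over the low-rank block
    S = soft(M - L, lam)   # exact minimization over the sparse block

print(np.linalg.norm(L - L_true) / np.linalg.norm(L_true))
```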

    Adaptive Robust Distributed Learning in Diffusion Sensor Networks

    In this paper, the problem of adaptive distributed learning in diffusion networks is considered. The algorithms are developed within the convex set-theoretic framework. More specifically, they are based on computationally simple geometric projections onto closed convex sets. The paper suggests a novel combine-project-adapt protocol for cooperation among the nodes of the network; such a protocol fits naturally with the philosophy that underlies the projection-based rationale. Moreover, the possibility that some of the nodes may fail is also considered, and it is addressed by employing robust statistics loss functions. Such loss functions can easily be accommodated in the adopted algorithmic framework; all that is required of a loss function is convexity. Under some mild assumptions, the proposed algorithms enjoy monotonicity, asymptotic optimality, asymptotic consensus, strong convergence and linear complexity with respect to the number of unknown parameters. Finally, experiments in the context of the system-identification task verify the validity of the proposed algorithmic schemes, which are compared to other recent algorithms that have been developed for adaptive distributed learning.
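
    The sketch below illustrates how a robust statistics loss can be slotted into a diffusion scheme of this kind, though it is not the paper's combine-project-adapt algorithm: a Huber influence function clips the residual of a failing node before a normalized adaptation step, and the extra projection stage onto convex constraint sets is omitted. The topology, step size mu and clipping threshold delta are illustrative assumptions.

```python
import numpy as np

def huber_psi(e, delta):
    """Huber influence function: identity for small residuals, clipped for outliers."""
    return np.clip(e, -delta, delta)

rng = np.random.default_rng(4)
N, dim = 6, 5
w_true = rng.standard_normal(dim)
neighbors = {k: [(k - 1) % N, k, (k + 1) % N] for k in range(N)}   # ring topology
W = [np.zeros(dim) for _ in range(N)]
mu, delta = 0.5, 1.0
failed = {3}                                  # node 3 reports wildly corrupted measurements

for _ in range(500):
    # combine: average the estimates within each neighborhood
    W_comb = [np.mean([W[j] for j in neighbors[k]], axis=0) for k in range(N)]
    # adapt: normalized step on the Huber loss of the local residual
    # (the "project" stage onto convex constraint sets is omitted in this sketch)
    for k in range(N):
        x = rng.standard_normal(dim)
        noise = 50.0 * rng.standard_normal() if k in failed else 0.01 * rng.standard_normal()
        d = x @ w_true + noise
        e = d - x @ W_comb[k]
        W[k] = W_comb[k] + mu * huber_psi(e, delta) / (x @ x + 1e-8) * x

print(max(np.linalg.norm(W[k] - w_true) for k in range(N)))
```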

    Trading off Complexity With Communication Costs in Distributed Adaptive Learning via Krylov Subspaces for Dimensionality Reduction

    In this paper, the problem of dimensionality reduction in adaptive distributed learning is studied. We consider a network with an ad hoc topology, in which the nodes sense an amount of data and cooperate with each other by exchanging information in order to estimate an unknown, common, parameter vector. The algorithm to be presented here follows the set-theoretic estimation rationale; i.e., at each time instant and at each node of the network, a closed convex set is constructed based on the received measurements, and this defines the region in which the solution is searched for. In this paper, these closed convex sets, known as property sets, take the form of hyperslabs. Moreover, in order to reduce the number of transmitted coefficients, which is dictated by the dimension of the unknown vector, we seek possible solutions in a subspace of lower dimension; the technique is developed around the Krylov subspace rationale. Our goal is to find a point that belongs to the intersection of this infinite number of hyperslabs and the respective Krylov subspaces. This is achieved via a sequence of projections onto the property sets and the Krylov subspaces. The case of highly correlated inputs, which degrades the performance of the algorithm, is also considered; this is overcome via a transformation which whitens the input. The proposed schemes are brought into a decentralized form by adopting the combine-adapt cooperation strategy among the nodes. A full convergence analysis is carried out, and numerical tests verify the validity of the proposed schemes in different scenarios in the context of the adaptive distributed system identification task.
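
    To make the dimensionality-reduction aspect concrete, the following sketch (a simplified stand-in, not the proposed scheme itself) has each node of a small ring network exchange and adapt a low-dimensional coefficient vector expressed in an orthonormal Krylov basis built from sample statistics, with the hyperslab projection performed directly in the reduced space. The input here is white, so the whitening transformation mentioned in the abstract is not needed; the dimensions and thresholds are arbitrary.

```python
import numpy as np

def krylov_basis(R, p, D):
    """Orthonormal basis of span{p, Rp, ..., R^(D-1) p} via QR."""
    cols = [p]
    for _ in range(D - 1):
        cols.append(R @ cols[-1])
    Q, _ = np.linalg.qr(np.column_stack(cols))
    return Q

rng = np.random.default_rng(5)
dim, D, N = 40, 5, 4
w_true = rng.standard_normal(dim)
# Sample statistics assumed available to the nodes (in practice they are estimated).
X = rng.standard_normal((2000, dim))
R = X.T @ X / len(X)
p = R @ w_true
S = krylov_basis(R, p, D)              # nodes now exchange D = 5 numbers instead of 40

neighbors = {k: [(k - 1) % N, k, (k + 1) % N] for k in range(N)}
C = [np.zeros(D) for _ in range(N)]    # reduced-dimension coefficients per node
eps = 0.02

for _ in range(400):
    # combine: average the low-dimensional coefficients within each neighborhood
    C_comb = [np.mean([C[j] for j in neighbors[k]], axis=0) for k in range(N)]
    # adapt: hyperslab projection carried out directly in the reduced space
    for k in range(N):
        x = rng.standard_normal(dim)
        d = x @ w_true + 0.01 * rng.standard_normal()
        xr = S.T @ x                   # the input as seen through the Krylov subspace
        r = d - xr @ C_comb[k]
        if abs(r) > eps:
            C_comb[k] = C_comb[k] + (r - np.sign(r) * eps) / (xr @ xr + 1e-12) * xr
        C[k] = C_comb[k]

print(np.linalg.norm(S @ C[0] - w_true) / np.linalg.norm(w_true))
```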

    Contributions to anomaly detection and correction in co-evolving data streams via subspace learning

    For decades, estimation and detection tasks in many Signal Processing and Communications applications have been significantly improved by using subspace and component-based techniques. More recently, subspace methods have been adopted in many hot topics such as Machine Learning, Data Analytics or smart MIMO communications, in order to obtain a geometric interpretation of the problem. In this way, subspace-based algorithms often give rise to new approaches to already-explored problems, while offering the valuable advantage of lending interpretability to the procedures and solutions. On the other hand, in those recent hot topics, one may also find applications where the detection of unwanted or out-of-the-model artifacts and outliers is crucial. To this end, we had previously been working in the domain of GNSS PPP, detecting phase ambiguities, which motivated the development of novel solutions for this application. After considering the applications and advantages of subspace-based approaches, this work focuses on the exploration and extension of the ideas of subspace learning in the context of anomaly detection, where we show promising and original results in the areas of anomaly detection and subspace-based anomaly detection, in the form of two new algorithms: the Dual Ascent for Sparse Anomaly Detection and the Subspace-based Dual Ascent for Anomaly Detection and Tracking.