
    Distributed Recursive Least-Squares: Stability and Performance Analysis

    The recursive least-squares (RLS) algorithm has well-documented merits for reducing complexity and storage requirements in the online estimation of stationary signals, as well as for tracking slowly-varying nonstationary processes. In this paper, a distributed recursive least-squares (D-RLS) algorithm is developed for cooperative estimation using ad hoc wireless sensor networks. Distributed iterations are obtained by minimizing a separable reformulation of the exponentially-weighted least-squares cost using the alternating-minimization algorithm. Sensors carry out reduced-complexity tasks locally and exchange messages with one-hop neighbors to consent on the network-wide estimates adaptively. A steady-state mean-square error (MSE) performance analysis of D-RLS is conducted by studying a stochastically-driven `averaged' system that approximates the D-RLS dynamics asymptotically in time. For sensor observations that are linearly related to the time-invariant parameter vector sought, simplifying independence assumptions facilitate the derivation of accurate closed-form expressions for the steady-state MSE values. The problems of mean- and MSE-sense stability of D-RLS are also investigated, and easily-checkable sufficient conditions are derived under which a steady state is attained. Without resorting to diminishing step-sizes, which compromise the tracking ability of D-RLS, stability ensures that per-sensor estimates hover inside a ball of finite radius centered at the true parameter vector, with high probability, even when inter-sensor communication links are noisy. Interestingly, computer simulations demonstrate that the theoretical findings remain accurate in the pragmatic setting where sensors acquire temporally-correlated data.
    Comment: 30 pages, 4 figures, submitted to IEEE Transactions on Signal Processing
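    For orientation, here is a minimal sketch of the centralized exponentially-weighted RLS recursion that D-RLS decomposes across sensors. It is not the paper's distributed algorithm; the function name, forgetting factor, initialization scale, and toy data are illustrative assumptions.

```python
import numpy as np

def ewrls(X, y, lam=0.99, delta=1e2):
    """Exponentially-weighted RLS: online estimate of w in y_t = x_t^T w + noise.

    lam   : forgetting factor (0 < lam <= 1); smaller values track faster
    delta : scale of the initial inverse-covariance matrix (assumed value)
    """
    p = X.shape[1]
    w = np.zeros(p)            # current parameter estimate
    P = delta * np.eye(p)      # inverse of the exponentially-weighted covariance
    for x_t, y_t in zip(X, y):
        Px = P @ x_t
        g = Px / (lam + x_t @ Px)          # gain vector
        e = y_t - x_t @ w                  # a priori estimation error
        w = w + g * e                      # update the estimate
        P = (P - np.outer(g, Px)) / lam    # rank-one update of P
    return w

# toy usage: recover a fixed parameter vector from noisy linear measurements
rng = np.random.default_rng(0)
w_true = rng.standard_normal(4)
X = rng.standard_normal((500, 4))
y = X @ w_true + 0.1 * rng.standard_normal(500)
print(np.round(ewrls(X, y) - w_true, 3))   # residual error, close to zero
```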

    A Stochastic Majorize-Minimize Subspace Algorithm for Online Penalized Least Squares Estimation

    Stochastic approximation techniques play an important role in solving many problems encountered in machine learning and adaptive signal processing. In these contexts, the statistics of the data are often unknown a priori, or their direct computation is too intensive, so they must be estimated online from the observed signals. For the batch optimization of an objective function that is the sum of a data fidelity term and a penalization (e.g. a sparsity-promoting function), Majorize-Minimize (MM) methods have recently attracted much interest since they are fast, highly flexible, and effective in ensuring convergence. The goal of this paper is to show how these methods can be successfully extended to the case where the data fidelity term corresponds to a least-squares criterion and the cost function is replaced by a sequence of stochastic approximations of it. In this context, we propose an online version of an MM subspace algorithm and study its convergence using suitable probabilistic tools. Simulation results illustrate the good practical performance of the proposed algorithm, associated with a memory gradient subspace, when applied to both non-adaptive and adaptive filter identification problems.
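    To make the setting concrete, the sketch below runs an MM iteration on a running stochastic approximation of a penalized least-squares cost. It is not the paper's memory-gradient subspace algorithm: the function name, the smooth l1 surrogate phi(u) = sqrt(u^2 + delta), the forgetting factor, and the full majorant solve (in place of a subspace step) are all illustrative assumptions.

```python
import numpy as np

def online_mm_pls(X, y, lam=0.1, rho=0.99, delta=1e-3):
    """Online penalized least squares via a stochastic MM scheme (sketch).

    The cost at time t is the running approximation
        F_t(w) = 1/2 w^T R_t w - r_t^T w + lam * sum_i phi(w_i),
    with phi(u) = sqrt(u^2 + delta), a smooth l1 surrogate. Each step
    minimizes a quadratic majorant of phi (half-quadratic / IRLS trick).
    """
    p = X.shape[1]
    R = 1e-3 * np.eye(p)     # running second-order statistic of the regressors
    r = np.zeros(p)          # running cross-correlation with the observations
    w = np.zeros(p)
    for x_t, y_t in zip(X, y):
        R = rho * R + np.outer(x_t, x_t)
        r = rho * r + y_t * x_t
        # quadratic majorant of the penalty at the current w, then minimize it
        d = lam / np.sqrt(w ** 2 + delta)       # majorant curvatures
        w = np.linalg.solve(R + np.diag(d), r)  # exact minimizer of the majorant
    return w

# toy usage: sparse parameter vector from streaming noisy linear measurements
rng = np.random.default_rng(1)
w_true = np.array([1.5, 0.0, 0.0, -2.0, 0.0])
X = rng.standard_normal((400, 5))
y = X @ w_true + 0.1 * rng.standard_normal(400)
print(np.round(online_mm_pls(X, y), 2))   # small entries shrunk toward zero
```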

    Distributed Diffusion-Based LMS for Node-Specific Adaptive Parameter Estimation

    A distributed adaptive algorithm is proposed to solve a node-specific parameter estimation problem where nodes are interested in estimating parameters of local interest, parameters of common interest to a subset of nodes, and parameters of global interest to the whole network. To address these different node-specific estimation problems, the novel algorithm relies on a diffusion-based implementation of different Least Mean Squares (LMS) algorithms, each associated with the estimation of a specific set of local, common, or global parameters. Coupled through the estimation of the shared sets of parameters, each LMS algorithm is implemented only by the nodes of the network interested in its specific set of local, common, or global parameters. The study of convergence in the mean sense reveals that the proposed algorithm is asymptotically unbiased. Moreover, a spatial-temporal energy conservation relation is provided to evaluate the steady-state performance at each node in the mean-square sense. Finally, the theoretical results and the effectiveness of the proposed technique are validated through computer simulations in the context of cooperative spectrum sensing in Cognitive Radio networks.
    Comment: 13 pages, 6 figures
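    As a reference point, here is a minimal sketch of the standard adapt-then-combine (ATC) diffusion LMS recursion for a single network-wide parameter vector; the paper's node-specific variant runs coupled recursions of this type for the local, common, and global parameter sets. The function name, array shapes, combination matrix, and step-size below are illustrative assumptions.

```python
import numpy as np

def diffusion_lms(X, Y, A, mu=0.01):
    """Adapt-then-combine (ATC) diffusion LMS for a common parameter vector.

    X : (T, N, p) regressors for N nodes over T time steps
    Y : (T, N)    scalar observations
    A : (N, N)    combination matrix with columns summing to 1;
                  A[l, k] weighs neighbor l's intermediate estimate at node k
    """
    T, N, p = X.shape
    W = np.zeros((N, p))                      # one estimate per node
    for t in range(T):
        # adapt: local LMS step at every node
        err = Y[t] - np.einsum('np,np->n', X[t], W)
        Psi = W + mu * err[:, None] * X[t]
        # combine: weighted average of intermediate estimates over neighbors
        W = A.T @ Psi
    return W

# toy usage: fully connected 3-node network with uniform combination weights
N, p, T = 3, 4, 2000
A = np.full((N, N), 1.0 / N)
rng = np.random.default_rng(2)
w_true = rng.standard_normal(p)
X = rng.standard_normal((T, N, p))
Y = np.einsum('tnp,p->tn', X, w_true) + 0.1 * rng.standard_normal((T, N))
print(np.round(diffusion_lms(X, Y, A) - w_true, 2))  # per-node residual errors
```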

    Mixed Regression via Approximate Message Passing

    We study the problem of regression in a generalized linear model (GLM) with multiple signals and latent variables. This model, which we call a matrix GLM, covers many widely studied problems in statistical learning, including mixed linear regression, max-affine regression, and mixture-of-experts. In mixed linear regression, each observation comes from one of L signal vectors (regressors), but we do not know which one; in max-affine regression, each observation comes from the maximum of L affine functions, each defined via a different signal vector. The goal in all these problems is to estimate the signals, and possibly some of the latent variables, from the observations. We propose a novel approximate message passing (AMP) algorithm for estimation in a matrix GLM and rigorously characterize its performance in the high-dimensional limit. This characterization is in terms of a state evolution recursion, which allows us to precisely compute performance measures such as the asymptotic mean-squared error. The state evolution characterization can be used to tailor the AMP algorithm to take advantage of any structural information known about the signals. Using state evolution, we derive an optimal choice of AMP `denoising' functions that minimizes the estimation error in each iteration. The theoretical results are validated by numerical simulations for mixed linear regression, max-affine regression, and mixture-of-experts. For max-affine regression, we propose an algorithm that combines AMP with expectation-maximization to estimate the intercepts of the model along with the signals. The numerical results show that AMP significantly outperforms other estimators for mixed linear regression and max-affine regression in most parameter regimes.
    Comment: 44 pages. A shorter version of this paper will appear in the proceedings of AISTATS 202
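    To illustrate the iteration structure, the sketch below implements the classical AMP recursion for the standard linear model with a soft-threshold denoiser, including the Onsager correction that underlies state evolution; it is a simplification, not the paper's matrix-GLM AMP. The function names, threshold rule, and Gaussian design normalization are illustrative assumptions.

```python
import numpy as np

def soft(u, th):
    """Soft-threshold denoiser eta(u) = sign(u) * max(|u| - th, 0)."""
    return np.sign(u) * np.maximum(np.abs(u) - th, 0.0)

def amp_linear(A, y, alpha=1.5, iters=30):
    """Basic AMP for y = A x + noise with a soft-threshold denoiser.

    The Onsager term (z * mean(eta') / delta) is what distinguishes AMP
    from plain iterative thresholding and is key to the exactness of the
    state evolution characterization in the high-dimensional limit.
    """
    n, p = A.shape
    delta = n / p
    x = np.zeros(p)
    z = y.copy()
    for _ in range(iters):
        tau = np.linalg.norm(z) / np.sqrt(n)   # effective noise level estimate
        u = x + A.T @ z                        # pseudo-data: x plus ~Gaussian noise
        x_new = soft(u, alpha * tau)           # denoise
        onsager = z * np.mean(np.abs(x_new) > 0) / delta  # mean of eta'
        z = y - A @ x_new + onsager            # residual with Onsager correction
        x = x_new
    return x

# toy usage: sparse recovery with a Gaussian design, entries ~ N(0, 1/n)
rng = np.random.default_rng(3)
n, p = 250, 500
A = rng.standard_normal((n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[:25] = 3.0 * rng.standard_normal(25)
y = A @ x_true + 0.05 * rng.standard_normal(n)
x_hat = amp_linear(A, y)
print(np.round(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true), 3))
```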