749 research outputs found

    A Stochastic Majorize-Minimize Subspace Algorithm for Online Penalized Least Squares Estimation

    Stochastic approximation techniques play an important role in solving many problems encountered in machine learning and adaptive signal processing. In these contexts, the statistics of the data are often unknown a priori, or their direct computation is too intensive, so they have to be estimated online from the observed signals. For batch optimization of an objective function given by the sum of a data fidelity term and a penalization (e.g. a sparsity-promoting function), Majorize-Minimize (MM) methods have recently attracted much interest since they are fast, highly flexible, and effective in ensuring convergence. The goal of this paper is to show how these methods can be successfully extended to the case when the data fidelity term corresponds to a least squares criterion and the cost function is replaced by a sequence of stochastic approximations of it. In this context, we propose an online version of an MM subspace algorithm and study its convergence using suitable probabilistic tools. Simulation results illustrate the good practical performance of the proposed algorithm associated with a memory gradient subspace, when applied to both non-adaptive and adaptive filter identification problems.
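    As a concrete illustration, here is a minimal sketch of what one online MM iteration over a memory-gradient subspace might look like for a least squares term plus a smooth sparsity-promoting penalty. The hyperbolic penalty lam*sum(sqrt(w_i^2 + delta^2)), the running-mean statistics, and all names and parameter values are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def online_mm_memory_gradient(stream, dim, lam=0.1, delta=1e-3):
    """One possible online MM subspace scheme for penalized least squares.

    Running cost: 0.5*w'Rw - r'w + lam*sum(sqrt(w_i**2 + delta**2)), where
    R, r are online estimates of the second-order statistics of the data.
    """
    w = np.zeros(dim)
    w_prev = np.zeros(dim)
    R = np.zeros((dim, dim))          # running autocorrelation estimate
    r = np.zeros(dim)                 # running cross-correlation estimate
    for n, (x, y) in enumerate(stream, start=1):
        beta = 1.0 / n                # running mean: statistics unknown a priori
        R += beta * (np.outer(x, x) - R)
        r += beta * (y * x - r)
        # gradient of the stochastic cost at the current iterate
        grad = R @ w - r + lam * w / np.sqrt(w**2 + delta**2)
        # quadratic majorant curvature for the hyperbolic penalty
        A = R + lam * np.diag(1.0 / np.sqrt(w**2 + delta**2))
        # memory-gradient subspace: steepest descent + previous displacement
        D = np.column_stack([-grad, w - w_prev])
        G = D.T @ A @ D + 1e-12 * np.eye(2)   # tiny ridge: D is rank-1 at n = 1
        u = np.linalg.solve(G, -D.T @ grad)   # exact majorant minimizer in span(D)
        w_prev, w = w, w + D @ u
    return w
```

    Because the subspace has only two directions, the majorant is minimized exactly at each sample by solving a 2x2 system, which is what makes the subspace acceleration cheap.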

    A New State-Regularized QRRLS Algorithm with Variable Forgetting Factor


    Sparse Nonlinear MIMO Filtering and Identification

    In this chapter, system identification algorithms for sparse nonlinear multi-input multi-output (MIMO) systems are developed. These algorithms are potentially useful in a variety of application areas including digital transmission systems incorporating power amplifier(s) along with multiple antennas, cognitive processing, adaptive control of nonlinear multivariable systems, and multivariable biological systems. Sparsity is a key constraint imposed on the model. The presence of sparsity is often dictated by physical considerations, as in wireless fading channel estimation. In other cases it appears as a pragmatic modelling approach that seeks to cope with the curse of dimensionality, which is particularly acute in nonlinear systems such as Volterra-type series. Three identification approaches are discussed: conventional identification based on both input and output samples, semi-blind identification placing emphasis on minimal input resources, and blind identification whereby only output samples are available plus a priori information on input characteristics. Based on this taxonomy, a variety of algorithms, existing and new, are studied and evaluated by simulation.
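    To make the dimensionality issue concrete, the sketch below builds a second-order Volterra regression matrix from a multichannel input; the column count grows quadratically with memory times channel count, which is exactly why sparsity constraints become attractive. Function and parameter names are illustrative, and any off-the-shelf sparse solver (e.g. a Lasso) could then be fit to the returned matrix.

```python
import numpy as np
from itertools import combinations_with_replacement

def volterra_features(x, memory=3, order=2):
    """Build a (truncated) second-order Volterra regression matrix.

    x: (n_samples, n_inputs). Output columns are a constant, delayed samples
    of every channel, and (for order 2) all their pairwise products.
    """
    n, m = x.shape
    lagged = np.column_stack([np.roll(x[:, j], k)
                              for j in range(m) for k in range(memory)])
    lagged[:memory] = 0.0                     # zero wrapped-around edge samples
    cols = [np.ones(n)] + [lagged[:, i] for i in range(lagged.shape[1])]
    if order >= 2:
        for i, j in combinations_with_replacement(range(lagged.shape[1]), 2):
            cols.append(lagged[:, i] * lagged[:, j])  # second-order kernel terms
    return np.column_stack(cols)
```

    Even a modest case, memory 5 with 4 inputs, already yields 1 + 20 + 210 = 231 columns, so the parameter vector quickly outgrows the available samples unless sparsity is exploited.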

    Reweighted lp Constraint LMS-Based Adaptive Sparse Channel Estimation for Cooperative Communication System

    This paper studies the issue of sparsity-adaptive channel reconstruction in time-varying cooperative communication networks using the amplify-and-forward transmission scheme. A new sparsity-adaptive system identification method is proposed, namely the reweighted lp-norm (0 < p < 1) penalized least mean square (LMS) algorithm. The main idea of the algorithm is to add an lp-norm sparsity penalty to the cost function of the LMS algorithm. By doing so, the weight factor becomes a balance parameter of the associated lp-norm adaptive sparse system identification. Subsequently, the steady state of the coefficient misalignment vector is derived theoretically, and performance upper bounds are provided which serve as a sufficient condition for precise reweighted lp-norm LMS channel estimation. With these upper bounds, we prove that the lp-norm (0 < p < 1) sparsity-inducing cost function is superior to the reweighted l1-norm. An optimal selection of p for the lp-norm problem is studied to recover various sparse channel vectors. Several experiments verify that the simulation results agree well with the theoretical analysis, demonstrating that the proposed algorithm has a faster convergence speed and better steady-state behavior than other LMS algorithms.
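    A minimal sketch of an lp-norm penalized LMS update of this family is given below: a standard LMS step plus a zero-attracting term derived from the subgradient of the lp penalty. The eps-smoothed attractor, p = 0.5, and the step-size values are illustrative assumptions; the paper's reweighting scheme and its theoretical bounds are not reproduced here.

```python
import numpy as np

def lp_lms(x, d, num_taps, mu=0.01, rho=5e-4, p=0.5, eps=0.05):
    """lp-norm (0 < p < 1) penalized LMS: LMS step + bounded zero attractor."""
    w = np.zeros(num_taps)
    for n in range(num_taps, len(x)):
        u = x[n - num_taps:n][::-1]               # regressor, most recent first
        e = d[n] - w @ u                          # a priori estimation error
        # subgradient of ||w||_p^p, smoothed by eps so it stays bounded near 0
        attractor = p * np.sign(w) / (eps + np.abs(w)) ** (1.0 - p)
        w = w + mu * e * u - rho * attractor      # LMS step + sparsity attraction
    return w
```

    The attractor pulls small coefficients toward zero much harder than an l1 penalty would, which is the intuition behind the claimed advantage of 0 < p < 1 over the reweighted l1-norm.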

    Zero attracting recursive least squares algorithms

    The l1-norm sparsity constraint is a widely used technique for constructing sparse models. In this contribution, two zero-attracting recursive least squares algorithms, referred to as ZA-RLS-I and ZA-RLS-II, are derived by employing an l1-norm constraint on the parameter vector to promote model sparsity. In order to achieve a closed-form solution, the l1-norm of the parameter vector is approximated by an adaptively weighted l2-norm, in which the weighting factors are set as the inverse of the l1-norm of the associated parameter estimates, which are readily available in the adaptive learning environment. ZA-RLS-II is computationally more efficient than ZA-RLS-I, exploiting known results from linear algebra as well as the sparsity of the system. The proposed algorithms are proven to converge, and adaptive sparse channel estimation is used to demonstrate the effectiveness of the proposed approach.
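    The weighted-l2 surrogate admits a simple closed-form update, sketched below under illustrative assumptions: exponential forgetting, a direct linear solve instead of the papers' more efficient recursions, and hypothetical parameter values.

```python
import numpy as np

def za_rls(x, d, num_taps, forget=0.99, gamma=0.01, eps=1e-3):
    """Zero-attracting RLS via an adaptively weighted l2 surrogate of ||w||_1.

    At each step the penalty gamma*||w||_1 is replaced by
    gamma * sum_i w_i**2 / (|w_i| + eps), with |w_i| frozen at the previous
    estimate, so the penalized normal equations stay linear in w.
    """
    w = np.zeros(num_taps)
    R = 1e-3 * np.eye(num_taps)        # exponentially weighted autocorrelation
    r = np.zeros(num_taps)
    for n in range(num_taps, len(x)):
        u = x[n - num_taps:n][::-1]
        R = forget * R + np.outer(u, u)
        r = forget * r + d[n] * u
        W = np.diag(gamma / (np.abs(w) + eps))   # l1 -> weighted l2 approximation
        w = np.linalg.solve(R + W, r)            # closed-form penalized solution
    return w
```

    Note how small coefficients receive large diagonal weights, shrinking them further toward zero at the next solve, which is the zero-attracting mechanism.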

    Sparse Volterra and Polynomial Regression Models: Recoverability and Estimation

    Volterra and polynomial regression models play a major role in nonlinear system identification and inference tasks. Exciting applications ranging from neuroscience to genome-wide association analysis build on these models with the additional requirement of parsimony. This requirement has high interpretative value, but unfortunately cannot be met by least-squares-based or kernel regression methods. To this end, compressed sampling (CS) approaches, already successful in linear regression settings, can offer a viable alternative. The viability of CS for sparse Volterra and polynomial models is the core theme of this work. A common sparse regression task is initially posed for the two models. Building on (weighted) Lasso-based schemes, an adaptive RLS-type algorithm is developed for sparse polynomial regressions. The identifiability of polynomial models is critically challenged by dimensionality. However, following the CS principle, when these models are sparse, they can be recovered from far fewer measurements. To quantify the sufficient number of measurements for a given level of sparsity, restricted isometry properties (RIP) are investigated in commonly met polynomial regression settings, generalizing known results for their linear counterparts. The merits of the novel (weighted) adaptive CS algorithms for sparse polynomial modeling are verified through synthetic as well as real data tests for genotype-phenotype analysis.
    Comment: 20 pages, to appear in IEEE Trans. on Signal Processing
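    As a rough illustration of the weighted-Lasso idea for sparse polynomial regression (not the paper's RLS-type algorithm), the sketch below solves a few reweighted-Lasso passes by column rescaling; names, the scikit-learn solver, and all parameter values are assumptions.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Lasso

def sparse_polynomial_fit(X, y, degree=2, alpha=0.05, reweight_iters=3, eps=1e-3):
    """Recover a sparse polynomial model with a few reweighted-Lasso passes.

    The weighted Lasso min ||y - Phi c||^2 + alpha * sum_j v_j |c_j| is solved
    via the substitution c_j = b_j / v_j (column rescaling), and the weights
    v_j are refreshed from the previous coefficients, so small entries are
    penalized harder on the next pass.
    """
    Phi = PolynomialFeatures(degree=degree, include_bias=True).fit_transform(X)
    v = np.ones(Phi.shape[1])
    coef = np.zeros(Phi.shape[1])
    for _ in range(reweight_iters):
        model = Lasso(alpha=alpha, fit_intercept=False,
                      max_iter=10000).fit(Phi / v, y)
        coef = model.coef_ / v                   # undo the column rescaling
        v = 1.0 / (np.abs(coef) + eps)           # reweight toward sparser support
    return coef
```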

    A new transform-domain regularized recursive least M-estimate algorithm for a robust linear estimation

    This brief proposes a new transform-domain (TD) regularized M-estimation (TD-R-ME) algorithm for robust linear estimation in an impulsive noise environment and develops an efficient QR-decomposition-based algorithm for its recursive implementation. By formulating the robust regularized linear estimation in terms of transformed regression coefficients, the proposed TD-R-ME algorithm is found to offer better estimation accuracy than direct application of regularization techniques to the system coefficients when they are correlated. Furthermore, a QR-based algorithm and an effective adaptive method for selecting the regularization parameters are developed for recursive implementation of the TD-R-ME algorithm. Simulation results show that the proposed TD regularized QR recursive least M-estimate (TD-R-QRRLM) algorithm offers improved performance over its least squares counterpart in an impulsive noise environment. Moreover, a TD smoothly clipped absolute deviation (SCAD) R-QRRLM variant is found to give a better steady-state excess mean square error than other QRRLM-related methods when the regression coefficients are correlated. © 2006 IEEE.
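    A loose sketch of the underlying idea follows: decorrelate each regressor with a fixed transform (a DCT here), downweight impulsive errors with a Huber-type score, and keep a diagonal regularizer in the normal equations. This is not the brief's QR recursion or its adaptive regularization selection; the transform choice, Huber threshold, and direct solve are all illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct

def td_robust_rls(x, d, num_taps, forget=0.98, huber=1.0, reg=1e-2):
    """Transform-domain regularized recursive M-estimation (illustrative)."""
    w = np.zeros(num_taps)
    R = reg * np.eye(num_taps)
    r = np.zeros(num_taps)
    for n in range(num_taps, len(x)):
        u = dct(x[n - num_taps:n][::-1], norm='ortho')  # TD (decorrelated) regressor
        e = d[n] - w @ u
        q = 1.0 if abs(e) <= huber else huber / abs(e)  # Huber weight vs. impulses
        R = forget * R + q * np.outer(u, u)             # impulses barely update stats
        r = forget * r + q * d[n] * u
        w = np.linalg.solve(R + reg * np.eye(num_taps), r)
    return w
```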

    A New Variable Regularized Transform Domain NLMS Adaptive Filtering Algorithm-Acoustic Applications and Performance Analysis


    High Dimensional Classification with combined Adaptive Sparse PLS and Logistic Regression

    Motivation: The high dimensionality of genomic data calls for the development of specific classification methodologies, especially to prevent over-optimistic predictions. This challenge can be tackled by compression and variable selection, which, combined, constitute a powerful framework for classification as well as data visualization and interpretation. However, previously proposed combinations lead to unstable and non-convergent methods due to inappropriate computational frameworks. We hereby propose a stable and convergent approach for classification in high dimension based on sparse Partial Least Squares (sparse PLS). Results: We start by proposing a new solution to the sparse PLS problem, based on proximal operators, for the case of univariate responses. Then we develop an adaptive version of sparse PLS for classification, which combines iterative optimization of logistic regression and sparse PLS to ensure convergence and stability. Our results are confirmed on synthetic and experimental data. In particular, we show how crucial convergence and stability can be when cross-validation is involved for calibration purposes. Using gene expression data, we explore the prediction of breast cancer relapse. We also propose a multicategorical version of our method for the prediction of cell types based on single-cell expression data. Availability: Our approach is implemented in the plsgenomics R package.
    Comment: 9 pages, 3 figures, 4 tables + Supplementary Materials: 8 pages, 3 figures, 10 tables
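    The proximal-operator core of sparse PLS for a univariate response can be sketched very compactly: the ordinary PLS weight is proportional to the covariance X'y, and sparsity comes from soft-thresholding it before normalization. The snippet below shows only this core, with hypothetical names; the paper's exact penalized criterion, its adaptive variant, and the coupling with logistic regression are not reproduced.

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_pls_component(X, y, lam):
    """One sparse PLS component for a univariate response y.

    Soft-thresholding the covariance X'y zeroes out weakly covarying
    variables, giving a sparse loading w and its latent score t = Xw.
    """
    w = soft_threshold(X.T @ y, lam)
    norm = np.linalg.norm(w)
    if norm == 0.0:
        raise ValueError("lam too large: every weight was thresholded to zero")
    w /= norm
    return w, X @ w
```

    Subsequent components would be extracted the same way after deflating X by the score t, exactly as in ordinary PLS.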