
    Performance Analysis of l_0 Norm Constraint Least Mean Square Algorithm

    Full text link
    As one of the recently proposed algorithms for sparse system identification, the l_0 norm constraint Least Mean Square (l_0-LMS) algorithm modifies the cost function of the traditional method with a penalty on tap-weight sparsity. The performance of l_0-LMS is quite attractive compared with that of its various precursors. However, there has been no detailed study of its performance. This paper presents a comprehensive theoretical performance analysis of l_0-LMS for white Gaussian input data, based on some reasonable assumptions. Expressions for the steady-state mean square deviation (MSD) are derived and discussed with respect to the algorithm parameters and system sparsity. A parameter selection rule is established for achieving the best performance. Approximating with a Taylor series, the instantaneous behavior is also derived. In addition, the relationship between l_0-LMS and some prior art, and sufficient conditions for l_0-LMS to accelerate convergence, are established. Finally, all of the theoretical results are compared with simulations and are shown to agree well over a wide range of parameter settings. Comment: 31 pages, 8 figures
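    The l_0-LMS update described above can be sketched as follows. This is a minimal illustration, assuming the common piecewise-linear zero-attractor approximation of the l_0 norm penalty; the parameter names (mu, kappa, beta) are illustrative.

    ```python
    import numpy as np

    def l0_lms_identify(x, d, num_taps, mu=0.01, kappa=1e-4, beta=10.0):
        """Sketch of the l0-LMS update for sparse system identification.

        The l0 norm penalty sum(1 - exp(-beta*|w_i|)) is approximated per tap
        by the zero attractor g(w) = beta^2*w - beta*sign(w) for |w| <= 1/beta
        and 0 otherwise, which pulls small taps toward zero.
        """
        w = np.zeros(num_taps)
        for n in range(num_taps - 1, len(x)):
            u = x[n - num_taps + 1:n + 1][::-1]       # regressor, most recent first
            e = d[n] - w @ u                          # a priori estimation error
            g = np.where(np.abs(w) <= 1.0 / beta,
                         beta**2 * w - beta * np.sign(w), 0.0)
            w += mu * e * u + kappa * g               # LMS step plus zero attraction
        return w
    ```

    With a sparse unknown system, the zero-attraction term suppresses the inactive taps without noticeably biasing the large active ones (which lie outside the attraction region |w| <= 1/beta).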

    On data-selective learning

    Get PDF
    Adaptive filters are applied in several electronic and communication devices, such as smartphones, advanced headphones, DSP chips, smart antennas, and teleconference systems. They also have applications in many areas, such as system identification, channel equalization, noise reduction, echo cancellation, interference cancellation, signal prediction, and stock market prediction. Therefore, reducing the energy consumption of adaptive filtering algorithms is of great importance, particularly in green technologies and battery-powered devices. In this thesis, data-selective adaptive filters, in particular the set-membership (SM) adaptive filters, are the tools used to reach this goal. There are well-known SM adaptive filters in the literature. This work introduces new algorithms, based on the classical ones, in order to improve their performance and, at the same time, reduce the number of required arithmetic operations. First, we analyze the robustness of the classical SM adaptive filtering algorithms. Second, we extend the SM technique to trinion and quaternion systems. Third, by combining SM filtering and partial updating, we introduce a new improved set-membership affine projection algorithm with a constrained step size to improve its stability behavior. Fourth, we propose some new least-mean-square (LMS) based and recursive least-squares based adaptive filtering algorithms with low computational complexity for sparse systems. Finally, we derive some feature LMS algorithms to exploit the hidden sparsity in the parameters.
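    As a concrete illustration of the data-selective idea, here is a minimal sketch of the classical SM-NLMS update, which skips the coefficient update whenever the a priori error is already within an assumed bound gamma_bar (parameter names are illustrative):

    ```python
    import numpy as np

    def sm_nlms(x, d, num_taps, gamma_bar=0.05, eps=1e-8):
        """Sketch of the set-membership NLMS (SM-NLMS) filter.

        The coefficients are updated only when the a priori error magnitude
        exceeds the bound gamma_bar, saving arithmetic on all remaining
        iterations; eps regularizes the normalization.
        """
        w = np.zeros(num_taps)
        updates = 0
        for n in range(num_taps - 1, len(x)):
            u = x[n - num_taps + 1:n + 1][::-1]
            e = d[n] - w @ u
            if abs(e) > gamma_bar:                    # data-selective check
                step = 1.0 - gamma_bar / abs(e)       # variable step size
                w += step * e * u / (eps + u @ u)
                updates += 1
        return w, updates
    ```

    After the initial convergence phase, most iterations fail the check and cost only one inner product, which is the source of the energy savings discussed above.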

    ZA-APA with Adaptive Zero Attractor Controller for Variable Sparsity Environment

    Get PDF
    The zero attraction affine projection algorithm (ZA-APA) achieves better performance, in terms of convergence rate and steady-state error, than the standard APA when the system is sparse. It uses an l_1 norm penalty to exploit the sparsity of the channel. The performance of ZA-APA depends on the value of the zero attractor controller. Moreover, a fixed attractor controller is not suitable for a varying sparsity environment. This paper proposes an optimal adaptive zero attractor controller, based on the mean square deviation (MSD) error, to work in variable sparsity environments. Experiments were conducted to prove the suitability of the proposed algorithm for the identification of unknown variable sparse systems.
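    The basic zero-attraction update can be sketched as follows. This is a minimal illustration with a fixed attractor controller rho (the quantity the paper makes adaptive); the projection order, mu, and delta are illustrative choices:

    ```python
    import numpy as np

    def za_apa(x, d, num_taps, proj_order=2, mu=0.5, rho=1e-4, delta=1e-4):
        """Sketch of the zero-attraction affine projection algorithm (ZA-APA).

        The l1 norm penalty yields the sign(w) zero-attraction term; rho is
        the zero attractor controller, held fixed here for simplicity.
        """
        w = np.zeros(num_taps)
        for n in range(num_taps + proj_order - 2, len(x)):
            # stack the last proj_order regressors as columns of X
            X = np.column_stack([x[n - k - num_taps + 1:n - k + 1][::-1]
                                 for k in range(proj_order)])
            e = d[n - np.arange(proj_order)] - X.T @ w   # a priori error vector
            w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(proj_order), e)
            w -= rho * np.sign(w)                        # l1 zero attraction
        return w
    ```

    A larger rho shrinks inactive taps faster but biases the active ones, which is why a fixed controller struggles when the system's sparsity varies over time.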

    Transform Domain LMS/F Algorithms, Performance Analysis and Applications

    Get PDF

    Regularized Estimation of High-dimensional Covariance Matrices.

    Full text link
    Many signal processing methods are fundamentally related to the estimation of covariance matrices. In cases where there are a large number of covariates, the dimension of the covariance matrix is much larger than the number of available data samples. This is especially true in applications where data acquisition is constrained by limited resources such as time, energy, storage, and bandwidth. This dissertation develops necessary components for covariance estimation in the high-dimensional setting. It makes contributions in two main areas of covariance estimation: (1) high-dimensional shrinkage regularized covariance estimation and (2) recursive online complexity-regularized estimation, with applications to anomaly detection, graph tracking, and compressive sensing. New shrinkage covariance estimation methods are proposed that significantly outperform previous approaches in terms of mean squared error. Two multivariate data scenarios are considered: (1) independently Gaussian distributed data; and (2) heavy-tailed elliptically contoured data. For the former scenario we improve on the Ledoit-Wolf (LW) shrinkage estimator using the principle of Rao-Blackwell conditioning and iterative approximation of the clairvoyant estimator. In the latter scenario, we apply a variance-normalizing transformation and propose an iterative robust LW shrinkage estimator that is distribution-free within the elliptical family. The proposed robustified estimator is implemented via fixed-point iterations with provable convergence and a unique limit. A recursive online covariance estimator is proposed for tracking changes in an underlying time-varying graphical model. Covariance estimation is decomposed into multiple decoupled adaptive regression problems. A recursive group lasso is derived using a homotopy approach that generalizes online lasso methods to group sparse system identification. By reducing the memory of the objective function, this leads to a group lasso regularized LMS that provably dominates standard LMS. Finally, we introduce a state-of-the-art sampling system, the Modulated Wideband Converter (MWC), which is based on recently developed analog compressive sensing theory. By inferring the block-sparse structure of the high-dimensional covariance matrix from a set of random projections, the MWC is capable of achieving sub-Nyquist sampling of multiband signals with arbitrary carrier frequencies over a wide bandwidth.
    Ph.D. Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/86396/1/yilun_1.pd
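    The basic Ledoit-Wolf shrinkage step that the Rao-Blackwellized estimator above improves on can be sketched as follows, assuming zero-mean samples and a scaled-identity target; the plug-in intensity follows the standard LW recipe:

    ```python
    import numpy as np

    def lw_shrinkage(X):
        """Sketch of Ledoit-Wolf shrinkage toward a scaled identity target.

        X holds n zero-mean samples as rows. Returns (1 - rho)*S + rho*F,
        where S is the sample covariance, F the scaled-identity target, and
        rho the standard LW plug-in shrinkage intensity, clipped to [0, 1].
        """
        n, p = X.shape
        S = X.T @ X / n                               # sample covariance
        F = (np.trace(S) / p) * np.eye(p)             # shrinkage target
        # plug-in estimate of the shrinkage intensity
        num = sum(np.linalg.norm(np.outer(xi, xi) - S, 'fro')**2
                  for xi in X) / n**2
        den = np.linalg.norm(S - F, 'fro')**2
        rho = min(1.0, num / den)
        return (1 - rho) * S + rho * F, rho
    ```

    In the undersampled regime (n much smaller than p) the intensity rho is large, and the convex combination trades the high variance of S for the small bias of the target F.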

    Robustness Analysis of the Data-Selective Volterra NLMS Algorithm

    Get PDF
    Recently, data-selective adaptive Volterra filters have been proposed; however, up to now, there have been no theoretical analyses of their behavior, only numerical simulations. Therefore, in this paper, we analyze the robustness (in the sense of l_2-stability) of the data-selective Volterra normalized least-mean-square (DS-VNLMS) algorithm. First, we study the local robustness of this algorithm at any iteration; then we propose a global bound for the error/discrepancy in the coefficient vector. We also demonstrate that the DS-VNLMS algorithm improves the parameter estimation for the majority of the iterations in which an update is implemented. Moreover, we prove that if the noise bound is known, the DS-VNLMS can be set so that it never degrades the estimate. The simulation results corroborate the validity of the analysis and demonstrate that the DS-VNLMS algorithm is robust against noise, no matter how its parameters are chosen.
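    A minimal sketch of a second-order data-selective Volterra NLMS update, assuming the usual set-membership error bound gamma_bar (the memory length and all names are illustrative):

    ```python
    import numpy as np

    def ds_vnlms(x, d, mem=3, gamma_bar=0.1, eps=1e-8):
        """Sketch of a data-selective (set-membership) Volterra NLMS filter.

        The regressor stacks the linear taps with all second-order products
        (a truncated second-order Volterra series); the normalized update
        runs only when |e| exceeds the error bound gamma_bar.
        """
        def volterra_regressor(u):
            quad = [u[i] * u[j] for i in range(len(u)) for j in range(i, len(u))]
            return np.concatenate([u, quad])

        dim = mem + mem * (mem + 1) // 2              # linear + quadratic taps
        w = np.zeros(dim)
        updates = 0
        for n in range(mem - 1, len(x)):
            phi = volterra_regressor(x[n - mem + 1:n + 1][::-1])
            e = d[n] - w @ phi
            if abs(e) > gamma_bar:                    # update only when needed
                w += (1.0 - gamma_bar / abs(e)) * e * phi / (eps + phi @ phi)
                updates += 1
        return w, updates
    ```

    The local robustness result discussed above concerns exactly these updating iterations: each one moves the coefficient vector no farther from the true kernel than it was before, provided the noise stays within the bound.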