
    Study of Set-Membership Kernel Adaptive Algorithms and Applications

    Adaptive algorithms based on kernel structures have been a topic of significant research over the past few years. Their main advantage is that they form a family of universal approximators, offering an elegant solution to problems with nonlinearities. Nevertheless, these methods deal with kernel expansions, creating a growing structure known as a dictionary, whose size depends on the number of new inputs. In this paper we derive the set-membership kernel-based normalized least-mean-square (SM-NKLMS) algorithm, which is capable of limiting the size of the dictionary created in stationary environments. We also derive, as an extension, the set-membership kernelized affine projection (SM-KAP) algorithm. Finally, several experiments are presented that compare the proposed SM-NKLMS and SM-KAP algorithms to existing methods.
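
    As a rough illustration of the mechanism, the following minimal Python sketch (our own reading, not the authors' code; all names and parameter values are assumptions) updates the model, and grows the dictionary, only when the error magnitude exceeds the error bound gamma:

        import numpy as np

        def gaussian_kernel(x, c, sigma=1.0):
            return np.exp(-np.linalg.norm(x - c) ** 2 / (2 * sigma ** 2))

        def sm_nklms(X, d, gamma=0.1, sigma=1.0, eps=1e-6):
            centers, alphas, errors = [], [], []
            for x, dn in zip(X, d):
                y = sum(a * gaussian_kernel(x, c, sigma) for a, c in zip(alphas, centers))
                e = dn - y
                errors.append(e)
                if abs(e) > gamma:                    # data-selective check: update only on innovation
                    mu = 1.0 - gamma / abs(e)         # set-membership adaptive step size
                    centers.append(np.asarray(x))
                    alphas.append(mu * e / (gaussian_kernel(x, x, sigma) + eps))
                # otherwise: no update, so the dictionary stops growing in steady state
            return centers, alphas, errors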

    Kernel Least Mean Square with Adaptive Kernel Size

    Kernel adaptive filters (KAF) are a class of powerful nonlinear filters developed in Reproducing Kernel Hilbert Space (RKHS). The Gaussian kernel is usually the default kernel in KAF algorithms, but selecting the proper kernel size (bandwidth) remains an important open issue, especially for learning with small sample sizes. In previous research, the kernel size was set manually or estimated in advance by Silverman's rule based on the sample distribution. This study aims to develop an online technique for optimizing the kernel size of the kernel least mean square (KLMS) algorithm. A sequential optimization strategy is proposed, and a new algorithm is developed in which the filter weights and the kernel size are both sequentially updated by stochastic gradient algorithms that minimize the mean square error (MSE). Theoretical results on convergence are also presented. The excellent performance of the new algorithm is confirmed by simulations on static function estimation and short-term chaotic time series prediction.
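
    A minimal sketch of the idea, assuming a Gaussian kernel and absorbing constant factors into the step sizes (the names and values below are ours, not the paper's):

        import numpy as np

        def klms_adaptive_sigma(X, d, mu_w=0.5, mu_s=0.05, sigma0=1.0):
            centers, alphas = [], []
            sigma = sigma0
            for x, dn in zip(X, d):
                dists = [np.linalg.norm(x - c) ** 2 for c in centers]
                ks = [np.exp(-dd / (2.0 * sigma ** 2)) for dd in dists]
                e = dn - sum(a * k for a, k in zip(alphas, ks))
                # stochastic gradient of the output w.r.t. the kernel size
                dy_dsigma = sum(a * k * dd / sigma ** 3
                                for a, k, dd in zip(alphas, ks, dists))
                sigma += mu_s * e * dy_dsigma         # descend on the MSE; factor 2 absorbed in mu_s
                centers.append(np.asarray(x))         # plain KLMS: every input joins the expansion
                alphas.append(mu_w * e)
            return centers, alphas, sigma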

    Speech Enhancement using Kernel and Normalized Kernel Affine Projection Algorithm

    The goal of this paper is to investigate speech signal enhancement using the Kernel Affine Projection Algorithm (KAPA) and Normalized KAPA. The removal of background noise is very important in many applications, such as speech recognition, telephone conversations, hearing aids, and forensics. Kernel adaptive filters have shown good performance for noise removal. If the background noise evolves more slowly than the speech, i.e., the noise signal is more stationary than the speech, we can easily estimate the noise during the pauses in speech. Otherwise, it is more difficult to estimate the noise, which results in degradation of the speech. In order to improve the quality and intelligibility of speech, instead of the time and frequency domains, we can process the signal in a new domain, the Reproducing Kernel Hilbert Space (RKHS), whose high dimensionality yields more powerful nonlinear extensions. For the experiments, we used the noisy speech corpus NOIZEUS. From the results, we observed that noise removal in RKHS achieves strong signal-to-noise ratio values in comparison with conventional adaptive filters.
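
    For intuition, here is a minimal sketch of a normalized kernel affine projection update reusing the error over a sliding window of the last P samples; this is our simplified reading, not the paper's implementation, and all parameter values are assumptions:

        import numpy as np

        def gaussian_kernel(x, c, sigma=1.0):
            return np.exp(-np.linalg.norm(x - c) ** 2 / (2 * sigma ** 2))

        def kapa(X, d, mu=0.2, P=5, sigma=1.0, eps=1e-3):
            centers, alphas = [], []
            for n, x in enumerate(X):
                centers.append(np.asarray(x))
                alphas.append(0.0)
                lo = max(0, n - P + 1)                # window of the last P inputs
                window = centers[lo:n + 1]
                preds = [sum(a * gaussian_kernel(u, c, sigma)
                             for a, c in zip(alphas, centers)) for u in window]
                e = np.asarray(d[lo:n + 1]) - np.asarray(preds)
                G = np.array([[gaussian_kernel(u, v, sigma) for v in window] for u in window])
                corr = mu * np.linalg.solve(G + eps * np.eye(len(window)), e)
                for i, j in enumerate(range(lo, n + 1)):
                    alphas[j] += corr[i]              # normalized correction over the whole window
            return centers, alphas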

    Finite Dictionary Variants of the Diffusion KLMS Algorithm

    Diffusion-based distributed learning approaches have been found to be a viable solution for learning over linearly separable datasets across a network. However, approaches to date are suitable only for linearly separable datasets and need to be extended to scenarios in which a nonlinearity must be learned. In such scenarios, the recently proposed diffusion kernel least mean squares (KLMS) algorithm has been found to perform better than diffusion least mean squares (LMS). The drawback of diffusion KLMS is that it requires unbounded storage for observations (also called the dictionary). This paper formulates diffusion KLMS in a fixed-budget setting such that the storage requirement is curtailed while maintaining appreciable convergence performance. Simulations have been carried out to validate the two newly proposed algorithms, quantised diffusion KLMS (QDKLMS) and fixed budget diffusion KLMS (FBDKLMS), against KLMS; they indicate that both proposed algorithms deliver better performance than KLMS while reducing the dictionary storage requirement.
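
    A single-node sketch of the two dictionary-limiting devices (the diffusion/combination step across the network is omitted; the admission rule and all values below are our assumptions):

        import numpy as np

        def gaussian_kernel(x, c, sigma=1.0):
            return np.exp(-np.linalg.norm(x - c) ** 2 / (2 * sigma ** 2))

        def fixed_budget_qklms(X, d, mu=0.5, eps_q=0.5, budget=50, sigma=1.0):
            centers, alphas = [], []
            for x, dn in zip(X, d):
                e = dn - sum(a * gaussian_kernel(x, c, sigma)
                             for a, c in zip(alphas, centers))
                if centers:
                    dists = [np.linalg.norm(x - c) for c in centers]
                    j = int(np.argmin(dists))
                if centers and dists[j] <= eps_q:
                    alphas[j] += mu * e               # quantisation: merge into the nearest center
                else:
                    centers.append(np.asarray(x))
                    alphas.append(mu * e)
                    if len(centers) > budget:         # fixed budget: drop the least significant center
                        k = int(np.argmin(np.abs(alphas)))
                        centers.pop(k)
                        alphas.pop(k)
            return centers, alphas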

    KLMAT: A Kernel Least Mean Absolute Third Algorithm

    In this paper, a kernel least mean absolute third (KLMAT) algorithm is developed for adaptive prediction. Combining the benefits of the kernel method and the least mean absolute third (LMAT) algorithm, the proposed KLMAT algorithm performs robustly against noise with different probability densities. To further enhance the convergence rate of the KLMAT algorithm, a variable step-size version (the VSS-KLMAT algorithm) is proposed based on a Lorentzian function. Moreover, the stability and convergence properties of the proposed algorithms are analyzed. Simulation results in the context of time series prediction demonstrate the effectiveness of the proposed algorithms.
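
    A minimal sketch of the cost function at work: minimizing E|e|^3 yields a gradient proportional to e^2 sign(e) = e|e|, which is what distinguishes KLMAT from KLMS. The Lorentzian-style variable step below is an assumed form, not the paper's exact rule:

        import numpy as np

        def gaussian_kernel(x, c, sigma=1.0):
            return np.exp(-np.linalg.norm(x - c) ** 2 / (2 * sigma ** 2))

        def klmat(X, d, mu=0.2, sigma=1.0, variable_step=False, delta=1.0):
            centers, alphas = [], []
            for x, dn in zip(X, d):
                e = dn - sum(a * gaussian_kernel(x, c, sigma)
                             for a, c in zip(alphas, centers))
                # Lorentzian-style variable step size (assumed form); else a fixed step
                step = mu * np.log1p(e ** 2 / delta) if variable_step else mu
                centers.append(np.asarray(x))
                alphas.append(step * e * abs(e))      # gradient of |e|^3: proportional to e*|e|
            return centers, alphas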

    Random Euler Complex-Valued Nonlinear Filters

    Over the last decade, both neural networks and kernel adaptive filters have successfully been used for nonlinear signal processing. However, they suffer from high computational cost caused by their complex or growing network structures. In this paper, we propose two random Euler filters for the complex-valued nonlinear filtering problem, namely the linear random Euler complex-valued filter (LRECF) and its widely-linear version (WLRECF), which possess a simple and fixed network structure. The transient and steady-state performances are studied in a non-stationary environment. The analytical minimum mean square error (MSE) and optimum step-size are derived. Finally, numerical simulations on complex-valued nonlinear system identification and nonlinear channel equalization are presented to show the effectiveness of the proposed methods.
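
    To illustrate the fixed-structure idea, here is a sketch of a fixed random complex-exponential (Euler) feature map followed by a standard complex LMS update; this is our guess at the overall shape, not the paper's exact filter, and a widely-linear variant would additionally filter the conjugate features:

        import numpy as np

        def random_euler_lms(X, d, D=64, mu=0.1, scale=1.0, seed=0):
            rng = np.random.default_rng(seed)
            W = scale * rng.standard_normal((D, X.shape[1]))  # fixed random projection (assumed Gaussian)
            h = np.zeros(D, dtype=complex)                    # fixed-size weights: no growing dictionary
            errors = []
            for x, dn in zip(X, d):
                phi = np.exp(1j * (W @ x))                    # Euler / complex-exponential features
                e = dn - np.vdot(h, phi)                      # output y = h^H phi
                h += mu * np.conj(e) * phi                    # standard complex LMS step
                errors.append(e)
            return h, errors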

    Generalized Gaussian Kernel Adaptive Filtering

    The present paper proposes generalized Gaussian kernel adaptive filtering, where the kernel parameters are adaptive and data-driven. The Gaussian kernel is parametrized by a center vector and a symmetric positive definite (SPD) precision matrix, which is regarded as a generalization of the scalar width parameter. These parameters are adaptively updated on the basis of a proposed least-square-type rule that minimizes the estimation error. The main contribution of this paper is to establish update rules for the precision matrices on the SPD manifold in order to preserve their symmetric positive-definiteness. Unlike conventional kernel adaptive filters, the proposed regressor is a superposition of Gaussian kernels with individually different parameters, which makes the regressor more flexible. The kernel adaptive filtering algorithm is established together with an l1-regularized least squares formulation to avoid overfitting and growth of the dictionary's dimensionality. Experimental results confirm the validity of the proposed method.
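
    The paper performs the update on the SPD manifold itself; the sketch below instead uses a simpler Cholesky parametrization, Lambda = L L^T, which also preserves positive-definiteness, just to show the shape of one stochastic-gradient step on a single kernel's precision (all names and values are ours):

        import numpy as np

        def gen_gauss_kernel(x, c, L):
            u = L.T @ (x - c)
            return np.exp(-u @ u)                     # exp(-(x-c)^T L L^T (x-c)), Lambda = L L^T is SPD

        def precision_step(x, c, L, alpha, e, mu=0.01):
            # gradient of the squared error e^2 through kappa, w.r.t. the factor L:
            # d(kappa)/dL = -2 * kappa * (x-c)(x-c)^T L, chained with dJ/dkappa = -2 * e * alpha
            dvec = x - c
            k = gen_gauss_kernel(x, c, L)
            grad = 4.0 * e * alpha * k * (np.outer(dvec, dvec) @ L)
            return L - mu * grad                      # descent step; L L^T stays SPD by construction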

    Online dictionary learning for kernel LMS. Analysis and forward-backward splitting algorithm

    Adaptive filtering algorithms operating in reproducing kernel Hilbert spaces have demonstrated superiority over their linear counterparts for nonlinear system identification. Unfortunately, an undesirable characteristic of these methods is that the order of the filters grows linearly with the number of input samples. This dramatically increases the computational burden and memory requirement. A variety of strategies based on dictionary learning have been proposed to overcome this severe drawback. Few, if any, of these works analyze the problem of updating the dictionary in a time-varying environment. In this paper, we present an analytical study of the convergence behavior of the Gaussian least-mean-square algorithm in the case where the statistics of the dictionary elements only partially match the statistics of the input data. This allows us to emphasize the need for updating the dictionary in an online way, by discarding obsolete elements and adding appropriate ones. We introduce a kernel least-mean-square algorithm with L1-norm regularization to automatically perform this task. The stability in the mean of this method is analyzed, and its performance is tested with experiments.
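
    A compact sketch of the forward-backward splitting idea: a gradient (forward) step on all coefficients, then a soft-thresholding (backward/proximal) step for the L1 penalty that zeroes, and discards, obsolete dictionary elements. The coherence-style admission rule and every value below are our assumptions:

        import numpy as np

        def gaussian_kernel(x, c, sigma=1.0):
            return np.exp(-np.linalg.norm(x - c) ** 2 / (2 * sigma ** 2))

        def klms_fobos(X, d, mu=0.5, lam=1e-3, sigma=1.0, coh=0.9):
            centers, alphas = [], []
            for x, dn in zip(X, d):
                ks = [gaussian_kernel(x, c, sigma) for c in centers]
                e = dn - sum(a * k for a, k in zip(alphas, ks))
                alphas = [a + mu * e * k for a, k in zip(alphas, ks)]  # forward: gradient step
                if not centers or max(ks) < coh:      # coherence-style admission (assumed rule)
                    centers.append(np.asarray(x))
                    alphas.append(mu * e)
                # backward: proximal step (soft threshold), then prune zeroed elements
                alphas = [np.sign(a) * max(abs(a) - mu * lam, 0.0) for a in alphas]
                keep = [i for i, a in enumerate(alphas) if a != 0.0]
                centers = [centers[i] for i in keep]
                alphas = [alphas[i] for i in keep]
            return centers, alphas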

    Adaptive Learning in Cartesian Product of Reproducing Kernel Hilbert Spaces

    We propose a novel adaptive learning algorithm based on iterative orthogonal projections in the Cartesian product of multiple reproducing kernel Hilbert spaces (RKHSs). The task is estimating/tracking nonlinear functions that are supposed to contain multiple components, such as (i) linear and nonlinear components or (ii) high- and low-frequency components. In this case, the use of multiple RKHSs permits a compact representation of multicomponent functions. The proposed algorithm is where two earlier methods of the author meet: multikernel adaptive filtering and the algorithm of hyperplane projection along affine subspace (HYPASS). In a certain particular case, the sum space of the RKHSs is isomorphic to the product space, and hence the proposed algorithm can also be regarded as an iterative projection method in the sum space. The efficacy of the proposed algorithm is shown by numerical examples.
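
    The paper's algorithm is projection-based (HYPASS); purely to illustrate the product-space representation, here is a plain multikernel LMS stand-in in which each RKHS carries its own coefficient set and the estimate is the sum of the components (names and values are ours):

        import numpy as np

        def gaussian_kernel(x, c, sigma=1.0):
            return np.exp(-np.linalg.norm(x - c) ** 2 / (2 * sigma ** 2))

        def multikernel_lms(X, d, sigmas=(0.5, 2.0), mu=0.3):
            # one (centers, alphas) pair per RKHS; the estimate sums the components
            models = [([], []) for _ in sigmas]
            for x, dn in zip(X, d):
                y = sum(a * gaussian_kernel(x, c, s)
                        for (cs, al), s in zip(models, sigmas)
                        for c, a in zip(cs, al))
                e = dn - y
                for (cs, al), s in zip(models, sigmas):
                    cs.append(np.asarray(x))          # LMS-style step in each component space
                    al.append(mu * e)
            return models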

    Study of Set-Membership Adaptive Kernel Algorithms

    In the last decade, a considerable research effort has been devoted to developing adaptive algorithms based on kernel functions. One of the main features of these algorithms is that they form a family of universal approximation techniques, solving problems with nonlinearities elegantly. In this paper, we present data-selective adaptive kernel normalized least-mean square (KNLMS) algorithms that can increase their learning rate and reduce their computational complexity. In fact, these methods deal with kernel expansions, creating a growing structure also known as the dictionary, whose size depends on the number of observations and their innovation. The algorithms described herein use an adaptive step-size to accelerate the learning and can offer an excellent tradeoff between convergence speed and steady-state performance, which allows them to solve nonlinear filtering and estimation problems with a large number of parameters without requiring a large computational cost. The data-selective update scheme also limits the number of operations performed and the size of the dictionary created by the kernel expansion, saving computational resources and dealing with one of the major problems of kernel adaptive algorithms. A statistical analysis is carried out along with a computational complexity analysis of the proposed algorithms. Simulations show that the proposed KNLMS algorithms outperform existing algorithms in examples of nonlinear system identification and prediction of a time series originating from a nonlinear difference equation.
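
    The data-selective behaviour is easy to observe with the sm_nklms sketch given under the first abstract above: as the error bound gamma grows, fewer samples trigger updates and the dictionary stays smaller. A toy check on synthetic data (our own, assumed values):

        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.standard_normal((500, 2))
        d = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
        for gamma in (0.05, 0.2, 0.5):
            centers, _, _ = sm_nklms(X, d, gamma=gamma)   # defined in the first sketch above
            print(f"gamma={gamma}: {len(centers)} dictionary entries for 500 samples")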