
    Random Euler Complex-Valued Nonlinear Filters

    Over the last decade, both neural networks and kernel adaptive filters have successfully been used for nonlinear signal processing. However, they suffer from the high computational cost caused by their complex or growing network structures. In this paper, we propose two random Euler filters for the complex-valued nonlinear filtering problem, namely the linear random Euler complex-valued filter (LRECF) and its widely-linear version (WLRECF), both of which possess a simple and fixed network structure. Their transient and steady-state performance is studied in a non-stationary environment, and the analytical minimum mean square error (MSE) and optimum step size are derived. Finally, numerical simulations on complex-valued nonlinear system identification and nonlinear channel equalization demonstrate the effectiveness of the proposed methods.
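    As a rough illustration of the random-feature idea behind such filters (not the paper's exact LRECF/WLRECF recursions; the Gaussian projection matrix W, the dimensions, and the step size mu below are all hypothetical), one can map inputs through fixed random Euler features exp(j w^T x) and run a standard complex LMS update on the resulting fixed-size feature vector:

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_features(x, W):
    """Random Euler features: elementwise exp(j * W @ x), a fixed-size map."""
    return np.exp(1j * (W @ x))

# Hypothetical dimensions and parameters, for illustration only.
d, D, mu = 4, 64, 0.05           # input dim, feature dim, LMS step size
W = rng.standard_normal((D, d))  # random projection directions (one common choice)
h = np.zeros(D, dtype=complex)   # filter weights in feature space

def lrecf_step(x, desired):
    """One complex LMS update on the random Euler feature vector."""
    global h
    z = euler_features(x, W)
    e = desired - h.conj() @ z    # a priori error for y = h^H z
    h = h + mu * z * np.conj(e)   # complex LMS weight update
    # a widely-linear variant would additionally use np.conj(z) features
    return e
```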

    Study of Set-Membership Kernel Adaptive Algorithms and Applications

    Adaptive algorithms based on kernel structures have been a topic of significant research over the past few years. Their main advantage is that they form a family of universal approximators, offering an elegant solution to problems with nonlinearities. Nevertheless, these methods deal with kernel expansions, creating a growing structure known as the dictionary, whose size depends on the number of new inputs. In this paper we derive the set-membership kernel-based normalized least-mean-square (SM-NKLMS) algorithm, which is capable of limiting the size of the dictionary created in stationary environments. As an extension, we also derive the set-membership kernelized affine projection (SM-KAP) algorithm. Finally, several experiments are presented to compare the proposed SM-NKLMS and SM-KAP algorithms with existing methods. Comment: 4 figures, 6 pages.
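    A minimal sketch of the set-membership principle the SM-NKLMS builds on (assuming a Gaussian kernel and a hypothetical error bound gamma; the paper's normalization and dictionary rules are more elaborate): adapt, and grow the dictionary, only when the a priori error exceeds the bound:

```python
import numpy as np

def gauss_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2) / (2 * sigma ** 2))

gamma = 0.1               # hypothetical error bound
dictionary, alphas = [], []

def sm_nklms_step(x, desired):
    """Update (and grow the dictionary) only when the error exceeds gamma."""
    y = sum(a * gauss_kernel(c, x) for a, c in zip(alphas, dictionary))
    e = desired - y
    if abs(e) > gamma:                 # set-membership test: skip small errors
        mu = 1.0 - gamma / abs(e)      # data-dependent step size
        dictionary.append(x)
        alphas.append(mu * e / gauss_kernel(x, x))
    return e
```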

    Kernel Least Mean Square with Adaptive Kernel Size

    Kernel adaptive filters (KAF) are a class of powerful nonlinear filters developed in reproducing kernel Hilbert spaces (RKHS). The Gaussian kernel is usually the default kernel in KAF algorithms, but selecting the proper kernel size (bandwidth) remains an important open issue, especially for learning with small sample sizes. In previous research, the kernel size was set manually or estimated in advance by Silverman's rule based on the sample distribution. This study develops an online technique for optimizing the kernel size of the kernel least-mean-square (KLMS) algorithm. A sequential optimization strategy is proposed, and a new algorithm is developed in which the filter weights and the kernel size are both sequentially updated by stochastic gradient algorithms that minimize the mean square error (MSE). Theoretical results on convergence are also presented. The excellent performance of the new algorithm is confirmed by simulations on static function estimation and short-term chaotic time series prediction. Comment: 25 pages, 9 figures, 4 tables.
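    A sketch of the joint update idea, assuming a Gaussian kernel and hypothetical step sizes mu_w and mu_sigma (the paper's exact recursions may differ): the KLMS coefficients and the bandwidth sigma both follow stochastic gradients of the squared error, using d(kappa)/d(sigma) = kappa * ||x - c||^2 / sigma^3:

```python
import numpy as np

def kappa(x, c, sigma):
    return np.exp(-np.sum((x - c) ** 2) / (2 * sigma ** 2))

mu_w, mu_sigma = 0.5, 0.01   # hypothetical step sizes for weights and bandwidth
sigma = 1.0
centers, coeffs = [], []

def klms_adaptive_sigma_step(x, desired):
    """Jointly update the KLMS expansion and the Gaussian bandwidth by SGD."""
    global sigma
    ks = np.array([kappa(x, c, sigma) for c in centers])
    y = float(np.dot(coeffs, ks)) if centers else 0.0
    e = desired - y
    if centers:
        dists = np.array([np.sum((x - c) ** 2) for c in centers])
        dy_dsigma = float(np.dot(coeffs, ks * dists / sigma ** 3))
        sigma += mu_sigma * e * dy_dsigma    # gradient descent on the MSE
    centers.append(x)
    coeffs.append(mu_w * e)                  # standard KLMS coefficient
    return e
```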

    Parameterizing Region Covariance: An Efficient Way To Apply Sparse Codes On Second Order Statistics

    Sparse representations have been successfully applied to signal processing, computer vision, and machine learning. Currently there is a trend to learn sparse models directly on structured data, such as region covariance. However, such methods, when combined with region covariance, often require complex computation. We present an approach that transforms a structured sparse model learning problem into a traditional vectorized sparse modeling problem by constructing a Euclidean space representation for region covariance matrices. Our new representation has multiple advantages, and experiments on several vision tasks demonstrate performance competitive with state-of-the-art methods.
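    One common way to build such a Euclidean representation for SPD matrices is the log-Euclidean map (shown here purely as an assumption; the paper's exact construction may differ): take the matrix logarithm of the covariance and vectorize its upper triangle, scaling off-diagonal entries by sqrt(2) so Euclidean inner products of the vectors match Frobenius inner products of the log matrices:

```python
import numpy as np
from scipy.linalg import logm

def spd_to_vector(C):
    """Log-Euclidean embedding of an SPD matrix into a Euclidean vector."""
    L = logm(C).real                 # matrix logarithm (real for SPD input)
    idx = np.triu_indices_from(L)
    v = L[idx]                       # upper triangle, including the diagonal
    v[idx[0] != idx[1]] *= np.sqrt(2.0)  # preserve the Frobenius inner product
    return v
```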

    Data-driven approximations of dynamical systems operators for control

    The Koopman and Perron-Frobenius transport operators are fundamentally changing how we approach dynamical systems, providing linear representations of even strongly nonlinear dynamics. Although such linear representations hold tremendous potential benefit for estimation and control, transport operators are infinite-dimensional, making them difficult to work with numerically. Obtaining low-dimensional matrix approximations of these operators is paramount for applications, and the dynamic mode decomposition (DMD) has quickly become a standard numerical algorithm for approximating the Koopman operator. Related methods have seen rapid development, owing to a combination of an increasing abundance of data and the extensibility of DMD based on its simple framing in terms of linear algebra. In this chapter, we review key innovations in the data-driven characterization of transport operators for control, providing a high-level and unified perspective. We emphasize important recent developments around sparsity and control, and discuss emerging methods in big data and machine learning. Comment: 37 pages, 4 figures.
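    For reference, the standard exact DMD computation mentioned above fits a rank-r linear operator A with X' ≈ A X from snapshot pairs (a textbook sketch, not specific to this chapter's extensions):

```python
import numpy as np

def dmd(X, Xprime, r):
    """Rank-r dynamic mode decomposition of snapshot matrices X, Xprime.

    Returns the eigenvalues and DMD modes of the best-fit linear operator
    A such that Xprime ≈ A X, computed via a truncated SVD of X.
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    Atilde = U.conj().T @ Xprime @ Vh.conj().T @ np.diag(1.0 / s)  # reduced operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = Xprime @ Vh.conj().T @ np.diag(1.0 / s) @ W            # exact DMD modes
    return eigvals, modes
```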

    Online dictionary learning for kernel LMS: Analysis and forward-backward splitting algorithm

    Adaptive filtering algorithms operating in reproducing kernel Hilbert spaces have demonstrated superiority over their linear counterparts for nonlinear system identification. Unfortunately, an undesirable characteristic of these methods is that the order of the filter grows linearly with the number of input data. This dramatically increases the computational burden and memory requirements. A variety of strategies based on dictionary learning have been proposed to overcome this severe drawback. Few, if any, of these works analyze the problem of updating the dictionary in a time-varying environment. In this paper, we present an analytical study of the convergence behavior of the Gaussian least-mean-square algorithm in the case where the statistics of the dictionary elements only partially match the statistics of the input data. This allows us to emphasize the need to update the dictionary online, by discarding obsolete elements and adding appropriate ones. We introduce a kernel least-mean-square algorithm with L1-norm regularization to perform this task automatically. The stability in the mean of this method is analyzed, and its performance is tested in experiments.
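    A minimal sketch of the forward-backward splitting idea (Gaussian kernel; the step size mu and regularization weight lam are hypothetical): a KLMS gradient step followed by an L1 proximal step (soft thresholding) that drives obsolete coefficients to zero, at which point their dictionary atoms are discarded:

```python
import numpy as np

def gaussian_k(x, c, sigma=0.5):
    return np.exp(-np.sum((x - c) ** 2) / (2 * sigma ** 2))

mu, lam = 0.2, 1e-3      # hypothetical step size and L1 weight
centers, alphas = [], []

def klms_l1_step(x, desired):
    """Forward step: KLMS gradient update. Backward step: soft-threshold the
    coefficients and prune dictionary elements whose coefficient hits zero."""
    global centers, alphas
    ks = [gaussian_k(x, c) for c in centers]
    y = float(np.dot(alphas, ks)) if centers else 0.0
    e = desired - y
    centers.append(x)
    alphas.append(mu * e)
    alphas = [np.sign(a) * max(abs(a) - mu * lam, 0.0) for a in alphas]
    keep = [i for i, a in enumerate(alphas) if a != 0.0]
    centers = [centers[i] for i in keep]
    alphas = [alphas[i] for i in keep]
    return e
```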

    Gaussian Processes for Nonlinear Signal Processing

    Gaussian processes (GPs) are versatile tools that have been successfully employed to solve nonlinear estimation problems in machine learning, but they are rarely used in signal processing. In this tutorial, we present GPs for regression as a natural nonlinear extension of optimal Wiener filtering. After establishing the basic formulation, we discuss several important aspects and extensions, including recursive and adaptive algorithms for dealing with non-stationarity, low-complexity solutions, non-Gaussian noise models, and classification scenarios. Furthermore, we provide a selection of relevant applications to wireless digital communications.
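    For reference, the standard GP regression posterior that underlies this view (a textbook sketch with an RBF kernel; the noise level sigma_n and length scale ell are placeholders; Xtrain and Xtest are (n, d) arrays):

```python
import numpy as np

def gp_posterior(Xtrain, ytrain, Xtest, sigma_n=0.1, ell=1.0):
    """GP regression posterior mean and variance with an RBF kernel."""
    def K(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * ell**2))
    Kxx = K(Xtrain, Xtrain) + sigma_n**2 * np.eye(len(Xtrain))
    Ks = K(Xtest, Xtrain)
    L = np.linalg.cholesky(Kxx)                           # stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytrain))
    mean = Ks @ alpha                                     # posterior mean
    v = np.linalg.solve(L, Ks.T)
    var = K(Xtest, Xtest).diagonal() - np.sum(v**2, axis=0)  # posterior variance
    return mean, var
```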

    Quantized Minimum Error Entropy Criterion

    Compared with traditional learning criteria such as the mean square error (MSE), the minimum error entropy (MEE) criterion is superior for nonlinear and non-Gaussian signal processing and machine learning. The argument of the logarithm in Rényi's entropy estimator, called the information potential (IP), is a popular MEE cost in information theoretic learning (ITL). The computational complexity of the IP is, however, quadratic in the number of samples due to a double summation, which creates computational bottlenecks, especially for large-scale datasets. To address this problem, we propose an efficient quantization approach that reduces the computational burden of the IP, decreasing the complexity from O(N^2) to O(MN) with M << N. The new learning criterion is called the quantized MEE (QMEE). Some basic properties of QMEE are presented, and illustrative examples are provided to verify its excellent performance.
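    A sketch of the quantization idea (the quantization radius eps and Gaussian kernel width sigma are hypothetical; the paper's codebook rules may differ): replace one of the two sums over N samples with a sum over M codewords weighted by their multiplicities, so the double summation costs O(MN):

```python
import numpy as np

def quantize(errors, eps):
    """Online quantization: map each sample to the nearest codeword within
    eps, otherwise open a new codeword; track multiplicities."""
    codebook, counts = [], []
    for e in errors:
        if codebook:
            d = np.abs(np.array(codebook) - e)
            i = int(np.argmin(d))
            if d[i] <= eps:
                counts[i] += 1
                continue
        codebook.append(e)
        counts.append(1)
    return np.array(codebook), np.array(counts)

def quantized_ip(errors, eps=0.1, sigma=1.0):
    """Quantized information potential: O(MN) instead of O(N^2)."""
    errors = np.asarray(errors, dtype=float)
    c, m = quantize(errors, eps)
    K = np.exp(-(errors[:, None] - c[None, :]) ** 2 / (2 * sigma ** 2))
    return float((K @ m).sum()) / len(errors) ** 2
```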

    Kernel methods on spike train space for neuroscience: a tutorial

    Over the last decade, several positive definite kernels have been proposed to treat spike trains as objects in a Hilbert space. However, for the most part, such attempts remain a mere curiosity for both computational neuroscientists and signal processing experts. This tutorial illustrates why kernel methods can, and have already started to, change the way spike trains are analyzed and processed. The presentation incorporates simple mathematical analogies and convincing practical examples in an attempt to show the as-yet-unexplored potential of positive definite functions to quantify point processes. It also provides a detailed overview of the current state of the art and future challenges, with the hope of engaging readers in active participation. Comment: 12 pages, 8 figures; accepted in IEEE Signal Processing Magazine.
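    As one concrete example of such a kernel (the memoryless cross-intensity kernel, used here only as an illustration; the tutorial surveys several others, and the time constant tau below is a placeholder), a positive definite spike train kernel can be a sum of Laplacian evaluations over all pairs of spike times:

```python
import numpy as np

def mci_kernel(s, t, tau=0.05):
    """Memoryless cross-intensity kernel between two spike trains: a sum of
    Laplacian kernel evaluations over all spike-time pairs."""
    s, t = np.asarray(s, dtype=float), np.asarray(t, dtype=float)
    return float(np.exp(-np.abs(s[:, None] - t[None, :]) / tau).sum())

# A Gram matrix over spike trains can then feed any kernel method.
trains = [[0.01, 0.35, 0.80], [0.02, 0.40], [0.33, 0.78, 0.95]]
G = np.array([[mci_kernel(a, b) for b in trains] for a in trains])
```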

    Study of Set-Membership Adaptive Kernel Algorithms

    In the last decade, considerable research effort has been devoted to developing adaptive algorithms based on kernel functions. One of the main features of these algorithms is that they form a family of universal approximation techniques, solving problems with nonlinearities elegantly. In this paper, we present data-selective adaptive kernel normalized least-mean-square (KNLMS) algorithms that can increase the learning rate and reduce the computational complexity. These methods deal with kernel expansions, creating a growing structure known as the dictionary, whose size depends on the number of observations and their innovation. The algorithms described herein use an adaptive step size to accelerate learning and can offer an excellent tradeoff between convergence speed and steady-state performance, which allows them to solve nonlinear filtering and estimation problems with a large number of parameters without requiring a large computational cost. The data-selective update scheme also limits the number of operations performed and the size of the dictionary created by the kernel expansion, saving computational resources and addressing one of the major problems of kernel adaptive algorithms. A statistical analysis is carried out, along with a computational complexity analysis of the proposed algorithms. Simulations show that the proposed KNLMS algorithms outperform existing algorithms in examples of nonlinear system identification and prediction of a time series originating from a nonlinear difference equation. Comment: 34 pages, 10 figures.
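    A rough sketch combining the ingredients described above (Gaussian kernel; the error bound gamma and coherence threshold delta are hypothetical, and the paper's exact tests differ in detail): the error bound gates each update and sets the adaptive step size, while a coherence test limits dictionary growth:

```python
import numpy as np

def k_gauss(x, c, sigma=1.0):
    return np.exp(-np.sum((np.asarray(x) - np.asarray(c)) ** 2) / (2 * sigma ** 2))

gamma, delta = 0.1, 0.9   # hypothetical error bound and coherence threshold
centers, alphas = [], []

def sm_knlms_step(x, desired):
    """Data-selective KNLMS: adapt only if |e| > gamma; grow the dictionary
    only if x is sufficiently incoherent with the existing centers."""
    ks = [k_gauss(x, c) for c in centers]
    y = float(np.dot(alphas, ks)) if centers else 0.0
    e = desired - y
    if abs(e) > gamma:
        mu = 1.0 - gamma / abs(e)            # adaptive step size from the bound
        if not centers or max(ks) <= delta:  # coherence test limits growth
            centers.append(x)
            alphas.append(0.0)
            ks.append(k_gauss(x, x))
        kv = np.asarray(ks)
        new = np.asarray(alphas) + mu * e * kv / (1e-8 + kv @ kv)  # normalized update
        alphas[:] = new.tolist()
    return e
```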