
    A new kernel-based approach for overparameterized Hammerstein system identification

    In this paper we propose a new identification scheme for Hammerstein systems, which are dynamic systems consisting of a static nonlinearity and a linear time-invariant dynamic system in cascade. We assume that the nonlinear function can be described as a linear combination of p basis functions. We reconstruct the p coefficients of the nonlinearity together with the first n samples of the impulse response of the linear system by estimating an np-dimensional overparameterized vector, which contains all the combinations of the unknown variables. To avoid high variance in these estimates, we adopt a regularized kernel-based approach and, in particular, we introduce a new kernel tailored for Hammerstein system identification. We show that the resulting scheme provides an estimate of the overparameterized vector that can be uniquely decomposed as the combination of an impulse response and p coefficients of the static nonlinearity. We also show, through several numerical experiments, that the proposed method compares very favorably with two standard methods for Hammerstein system identification. Comment: 17 pages, submitted to IEEE Conference on Decision and Control 201
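    To make the overparameterization idea concrete, the following is a minimal sketch in NumPy. It assumes a monomial basis phi_k(u) = u^k and plain ridge regularization in place of the paper's tailored kernel; the impulse response and nonlinearity coefficients are then recovered from the rank-1 structure of the reshaped estimate via an SVD. All signal lengths and hyperparameter values are illustrative.

```python
import numpy as np

# Sketch of overparameterized Hammerstein identification (illustrative only):
# a monomial basis and plain ridge regularization stand in for the paper's
# tailored kernel; sizes and hyperparameters are arbitrary choices.

rng = np.random.default_rng(0)
n, p, N = 20, 3, 400                        # impulse-response length, basis size, data length

g_true = 0.8 ** np.arange(n)                # "true" impulse response (synthetic)
c_true = np.array([1.0, 0.5, -0.2])         # "true" nonlinearity coefficients (synthetic)

u = rng.standard_normal(N)
f_u = sum(c_true[k] * u ** (k + 1) for k in range(p))    # static nonlinearity output
y = np.convolve(f_u, g_true)[:N] + 0.05 * rng.standard_normal(N)

# Regressor Phi(t) with entries phi_k(u(t-i)), stacked so that y(t) ~ Phi(t) @ vec(g c^T).
rows = []
for t in range(n - 1, N):
    lagged = u[t - np.arange(n)]                          # u(t), u(t-1), ..., u(t-n+1)
    rows.append(np.column_stack([lagged ** (k + 1) for k in range(p)]).ravel())
Phi = np.asarray(rows)
Y = y[n - 1:N]

lam = 1.0                                                 # ridge parameter (illustrative)
theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n * p), Phi.T @ Y)

# Rank-1 factorization of the n x p estimate into impulse response and coefficients.
U, s, Vt = np.linalg.svd(theta.reshape(n, p))
g_hat = U[:, 0] * np.sqrt(s[0])
c_hat = Vt[0] * np.sqrt(s[0])
```

    As is inherent to the Hammerstein parameterization, g_hat and c_hat are only determined up to a common scale (and sign) factor.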

    Kernel-based methods for Volterra series identification

    Volterra series approximate a broad range of nonlinear systems. Their identification is challenging due to the curse of dimensionality: the number of model parameters grows exponentially with the complexity of the input-output response. This fact limits the applicability of such models and has recently stimulated much research on regularized solutions. Along this line, we propose two new strategies that use kernel-based methods. First, we introduce the multiplicative polynomial kernel (MPK). Compared to the standard polynomial kernel, the MPK is equipped with a richer set of hyperparameters, increasing the flexibility in selecting the monomials that really influence the system output. Second, we introduce the smooth exponentially decaying multiplicative polynomial kernel (SEDMPK), a regularized version of the MPK that requires fewer hyperparameters and can therefore also handle high-order Volterra series. Numerical results show the effectiveness of the two approaches.
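    The sketch below shows one way such a kernel can be built: a kernel ridge regressor whose kernel is a product of scaled linear factors, i.e. a polynomial-type kernel with per-factor hyperparameters. Whether this matches the exact MPK/SEDMPK parameterization of the paper is an assumption; the scalings a, the offsets s2, and the toy data are illustrative.

```python
import numpy as np

# Hedged sketch: a product-of-linear-kernels construction for Volterra-style
# regression. The per-factor scalings `a` and offsets `s2` play the role of
# the extra hyperparameters; the values here are illustrative, not tuned.

def multiplicative_poly_kernel(X1, X2, a, s2):
    """k(x, x') = prod_j (x^T diag(a_j) x' + s2_j); degree = len(a)."""
    K = np.ones((X1.shape[0], X2.shape[0]))
    for a_j, s2_j in zip(a, s2):
        K *= (X1 * a_j) @ X2.T + s2_j
    return K

# Toy Volterra-type data: the output depends on products of lagged inputs.
rng = np.random.default_rng(1)
m, N = 5, 300                               # memory length, number of samples
u = rng.standard_normal(N + m)
X = np.stack([u[t:t + m] for t in range(N)])            # lagged input vectors
y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] ** 2 + 0.05 * rng.standard_normal(N)

a = [np.ones(m), np.ones(m)]                # per-factor diagonal scalings (degree 2)
s2 = [1.0, 1.0]                             # per-factor offsets
lam = 1e-2                                  # ridge parameter

K = multiplicative_poly_kernel(X, X, a, s2)
alpha = np.linalg.solve(K + lam * np.eye(N), y)         # kernel ridge regression weights

def predict(Xq):
    return multiplicative_poly_kernel(Xq, X, a, s2) @ alpha

print(predict(X[:3]), y[:3])                # sanity check on a few training points
```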

    Regularized System Identification

    This open access book provides a comprehensive treatment of recent developments in kernel-based identification that are of interest to anyone engaged in learning dynamic systems from data. The reader is led step by step into an understanding of a novel paradigm that leverages the power of machine learning without losing sight of the system-theoretical principles of black-box identification. The authors’ reformulation of the identification problem in the light of regularization theory not only offers new insight into classical questions but also paves the way to new and powerful algorithms for a variety of linear and nonlinear problems. Regression methods such as regularization networks and support vector machines are the basis of techniques that extend the function-estimation problem to the estimation of dynamic models. Many examples, including real-world applications, illustrate the comparative advantages of the new nonparametric approach with respect to classic parametric prediction error methods. The challenges it addresses lie at the intersection of several disciplines, so Regularized System Identification will be of interest to a variety of researchers and practitioners in the areas of control systems, machine learning, statistics, and data science.
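    As a small, self-contained example of the kernel-based paradigm the book describes, the sketch below estimates a FIR impulse response under a first-order stable-spline/TC-type prior K[i, j] = alpha^max(i, j). The hyperparameters are fixed by hand here rather than tuned by marginal likelihood, and the data are synthetic.

```python
import numpy as np

# Illustrative kernel-regularized FIR estimation with a TC / stable-spline
# type prior on the impulse response. Hyperparameter values are assumptions,
# not the result of marginal-likelihood tuning.

rng = np.random.default_rng(2)
n, N = 50, 200                                   # FIR length, data length
g_true = 0.9 ** np.arange(n) * np.sin(0.3 * np.arange(n))

u = rng.standard_normal(N)
y = np.convolve(u, g_true)[:N] + 0.1 * rng.standard_normal(N)

# Regression matrix: y(t) = sum_i g(i) u(t - i) + noise.
Phi = np.array([[u[t - i] if t - i >= 0 else 0.0 for i in range(n)] for t in range(N)])

alpha, sigma2 = 0.9, 0.01                        # kernel decay and noise variance (assumed)
idx = np.arange(n)
K = alpha ** np.maximum.outer(idx, idx)          # TC kernel: exponentially decaying, correlated

# Regularized least-squares / Gaussian-process estimate of the impulse response.
g_hat = K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + sigma2 * np.eye(N), y)
```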

    Nonlinear process fault detection and identification using kernel PCA and kernel density estimation

    Kernel principal component analysis (KPCA) is an effective and efficient technique for monitoring nonlinear processes. However, associating it with upper control limits (UCLs) based on the Gaussian distribution can deteriorate its performance. In this paper, the kernel density estimation (KDE) technique was used to estimate UCLs for KPCA-based nonlinear process monitoring. The monitoring performance of the resulting KPCA-KDE approach was then compared with that of KPCA whose UCLs were based on the Gaussian distribution. Tests on the Tennessee Eastman process show that KPCA-KDE is more robust and provides better overall performance than KPCA with Gaussian assumption-based UCLs, in terms of both sensitivity and detection time. An efficient KPCA-KDE-based fault identification approach using complex step differentiation is also proposed.
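    The sketch below is a minimal rendition of the KPCA-plus-KDE idea: a Hotelling-type T^2 statistic is computed from a hand-rolled KPCA on normal operating data, and the UCL is set to the 99% quantile of a kernel density estimate of that statistic rather than derived from a Gaussian assumption. The RBF width, the number of retained components, and the toy data are illustrative assumptions, not the settings of the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

# KPCA monitoring statistic with a KDE-based upper control limit (sketch).

rng = np.random.default_rng(3)
Z = rng.standard_normal((500, 4))                      # toy normal operating data
X = np.column_stack([Z[:, 0], Z[:, 0] ** 2 + 0.1 * Z[:, 1], Z[:, 2], Z[:, 3]])

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

N = X.shape[0]
K = rbf_kernel(X, X)
H = np.eye(N) - np.ones((N, N)) / N
Kc = H @ K @ H                                         # centered kernel matrix

eigval, eigvec = np.linalg.eigh(Kc)
order = np.argsort(eigval)[::-1]
k = 3                                                  # retained components (assumed)
lam = eigval[order[:k]] / N                            # score variances
A = eigvec[:, order[:k]] / np.sqrt(eigval[order[:k]])  # normalized dual coefficients

scores = Kc @ A                                        # KPCA scores of the training data
T2 = (scores ** 2 / lam).sum(axis=1)                   # Hotelling-type statistic

# KDE-based UCL: smallest grid point whose estimated CDF reaches 99%.
kde = gaussian_kde(T2)
grid = np.linspace(0.0, T2.max() * 3, 2000)
cdf = np.array([kde.integrate_box_1d(-np.inf, g) for g in grid])
ucl = grid[np.searchsorted(cdf, 0.99)]
```

    New observations would then be projected onto the same components (after consistent centering) and flagged whenever their T^2 value exceeds ucl.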

    Stochastic Behavior Analysis of the Gaussian Kernel Least-Mean-Square Algorithm

    The kernel least-mean-square (KLMS) algorithm is popular in nonlinear adaptive filtering due to its simplicity and robustness. In kernel adaptive filters, the statistics of the input to the linear filter depend on the parameters of the kernel employed. Moreover, practical implementations require a finite nonlinearity model order. A Gaussian KLMS has two design parameters: the step size and the Gaussian kernel bandwidth. Thus, its design requires analytical models for the algorithm behavior as a function of these two parameters. This paper studies the steady-state and transient behavior of the Gaussian KLMS algorithm for Gaussian inputs and a finite-order nonlinearity model. In particular, we derive recursive expressions for the mean-weight-error vector and the mean-square error. The model predictions show excellent agreement with Monte Carlo simulations in both the transient phase and steady state. This allows the explicit analytical determination of stability limits and makes it possible to choose the algorithm parameters a priori in order to achieve a prescribed convergence speed and quality of the estimate. Design examples are presented that validate the theoretical analysis and illustrate its application.
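    The core Gaussian KLMS recursion is short enough to state in code: the prediction is a kernel expansion over stored centers, and each new error appends a center weighted by the step size times the error. The sketch below keeps a finite model order with a simple fixed-budget rule (drop the oldest center), which is an illustrative choice rather than the setup analyzed in the paper; the step size and bandwidth values are likewise illustrative.

```python
import numpy as np

# Gaussian kernel least-mean-square (KLMS) sketch with a fixed dictionary
# budget as one simple way to enforce a finite model order.

def gaussian_kernel(x, centers, bandwidth):
    return np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * bandwidth ** 2))

def klms(X, d, step_size=0.2, bandwidth=1.0, budget=200):
    centers, coeffs, errors = [], [], []
    for x, target in zip(X, d):
        y_hat = float(np.dot(coeffs, gaussian_kernel(x, np.asarray(centers), bandwidth))) if centers else 0.0
        e = target - y_hat                     # a priori error drives the update
        errors.append(e)
        centers.append(x)
        coeffs.append(step_size * e)
        if len(centers) > budget:              # finite order: drop the oldest center
            centers.pop(0)
            coeffs.pop(0)
    return np.asarray(errors)

# Toy online identification of d(t) = sin(u(t)) + noise.
rng = np.random.default_rng(4)
u = rng.uniform(-2.0, 2.0, size=(1000, 1))
d = np.sin(u[:, 0]) + 0.05 * rng.standard_normal(1000)
err = klms(u, d)
print("MSE over the last 100 samples:", np.mean(err[-100:] ** 2))
```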