    Sparse Nonlinear MIMO Filtering and Identification

    In this chapter, system identification algorithms for sparse nonlinear multi-input multi-output (MIMO) systems are developed. These algorithms are potentially useful in a variety of application areas, including digital transmission systems incorporating power amplifier(s) along with multiple antennas, cognitive processing, adaptive control of nonlinear multivariable systems, and multivariable biological systems. Sparsity is a key constraint imposed on the model. The presence of sparsity is often dictated by physical considerations, as in wireless fading channel estimation. In other cases it appears as a pragmatic modelling approach that seeks to cope with the curse of dimensionality, which is particularly acute in nonlinear systems such as Volterra-type series. Three identification approaches are discussed: conventional identification based on both input and output samples; semi-blind identification, placing emphasis on minimal input resources; and blind identification, whereby only output samples are available along with a priori information on the input characteristics. Based on this taxonomy, a variety of algorithms, existing and new, are studied and evaluated by simulation.
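
    As a concrete illustration of the sparsity-constrained identification discussed above, the following sketch fits a truncated second-order Volterra model with a LASSO penalty solved by ISTA. It is not one of the chapter's algorithms; the regressor construction, the toy system, and all parameter values are illustrative assumptions.

```python
import numpy as np

def volterra_features(u):
    """Constant, linear, and unique second-order Volterra regressors
    built from the stacked recent input samples u."""
    quad = np.outer(u, u)[np.triu_indices(len(u))]
    return np.concatenate(([1.0], u, quad))

def ista_identify(U, d, lam=0.05, n_iter=500):
    """Sparse least-squares (LASSO) fit of Volterra coefficients via ISTA."""
    L = np.linalg.norm(U, 2) ** 2                  # Lipschitz constant of the gradient
    theta = np.zeros(U.shape[1])
    for _ in range(n_iter):
        z = theta - U.T @ (U @ theta - d) / L      # gradient step
        theta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return theta

# toy 2-input, 1-output system whose Volterra expansion is sparse
rng = np.random.default_rng(0)
N, memory = 400, 3
x = rng.standard_normal((N, 2))
rows, d = [], []
for n in range(memory, N):
    phi = volterra_features(x[n - memory:n].ravel())
    rows.append(phi)
    d.append(2.0 * phi[1] - 1.5 * phi[4] + 0.7 * phi[10] + 0.01 * rng.standard_normal())
theta_hat = ista_identify(np.array(rows), np.array(d))
print("coefficients kept:", np.flatnonzero(np.abs(theta_hat) > 0.05))
```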

    Stochastic mean-square performance analysis of an adaptive Hammerstein filter

    This paper presents an almost sure mean-square performance analysis of an adaptive Hammerstein filter for the case when the measurement noise in the desired response signal is a martingale difference sequence. The system model consists of a series connection of a memoryless nonlinearity followed by a recursive linear filter. A bound for the long-term time average of the squared a posteriori estimation error of the adaptive filter is derived using a basic set of assumptions on the operating environment. This bound consists of two terms: one is proportional to a parameter that depends on the step-size sequences of the algorithm, and the other is inversely proportional to the maximum value of the increment process associated with the coefficients of the underlying system. One consequence of this result is that the long-term time average of the squared a posteriori estimation error can be made arbitrarily close to its minimum possible value when the underlying system is time-invariant.
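
    The quantity being bounded can be written schematically as follows; the symbols below (the a posteriori error e(n) and the two bound terms) are notation assumed for illustration and are not copied from the paper.

```latex
% Schematic statement of the type of result described above.
% e(n): a posteriori estimation error, i.e. the error formed with the
% coefficient estimates obtained after the update at time n.
\[
  \bar{E}_N \;=\; \frac{1}{N}\sum_{n=1}^{N} e^{2}(n),
  \qquad
  \limsup_{N\to\infty} \bar{E}_N \;\le\; B_{\mathrm{step}} \;+\; B_{\mathrm{incr}},
\]
% B_step depends on the step-size sequences of the algorithm and B_incr on the
% increment process of the time-varying system coefficients; for a
% time-invariant system the bound can be driven toward the minimum value.
```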

    A stable adaptive Hammerstein filter employing partial orthogonalization of the input signals

    This paper presents an algorithm that adapts the parameters of a Hammerstein system model. Hammerstein systems are nonlinear systems that consist of a static nonlinearity cascaded with a linear system. In this paper, the static nonlinearity is modeled by a polynomial, and the linear filter that follows the nonlinearity is an infinite impulse response (IIR) system. The adaptation of the nonlinear components is improved by orthogonalizing the inputs to the coefficients of the polynomial. The step sizes associated with the recursive components are constrained so as to guarantee bounded-input bounded-output (BIBO) stability of the overall system. The paper also presents experimental results showing that the algorithm performs well in a variety of operating environments, exhibiting stability and global convergence.
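
    A heavily simplified sketch of the structure described above follows: a polynomial nonlinearity whose regressors are decorrelated ("partially orthogonalized") with an estimated whitening transform, followed by an adaptive linear filter. For brevity the linear part is FIR and both parts use plain LMS with an instantaneous-gradient approximation, whereas the paper adapts an IIR filter with step sizes constrained for BIBO stability; all names and parameter values are illustrative.

```python
import numpy as np

def hammerstein_lms(x, d, poly_order=3, fir_len=8, mu_p=0.005, mu_w=0.01, n_est=500):
    """Simplified adaptive Hammerstein filter: polynomial nonlinearity -> FIR filter.
    The polynomial regressors are decorrelated with a whitening matrix estimated
    from the first n_est samples before adaptation starts."""
    P = np.vstack([x ** k for k in range(1, poly_order + 1)]).T   # raw regressors
    mean = P[:n_est].mean(axis=0)
    C = np.cov(P[:n_est].T) + 1e-6 * np.eye(poly_order)
    T = np.linalg.cholesky(np.linalg.inv(C))                      # whitening transform
    Pw = (P - mean) @ T                                           # orthogonalized regressors
    p = np.zeros(poly_order); p[0] = 1.0                          # nonlinearity coefficients
    w = np.zeros(fir_len)                                         # FIR coefficients
    N = len(x); s = np.zeros(N); e = np.zeros(N)
    for n in range(N):
        s[n] = Pw[n] @ p                                          # nonlinearity output
        u = np.zeros(fir_len)
        m = min(n + 1, fir_len)
        u[:m] = s[n::-1][:m]                                      # recent nonlinearity outputs
        e[n] = d[n] - w @ u
        w += mu_w * e[n] * u                                      # LMS update, linear part
        # instantaneous-gradient update of the nonlinearity
        # (ignores the dependence of past s values on p)
        p += mu_p * e[n] * w[0] * Pw[n]
    return p, w, e
```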

    Stochastic Behavior Analysis of the Gaussian Kernel Least-Mean-Square Algorithm

    The kernel least-mean-square (KLMS) algorithm is a popular algorithm in nonlinear adaptive filtering because of its simplicity and robustness. In kernel adaptive filters, the statistics of the input to the linear filter depend on the parameters of the kernel employed. Moreover, practical implementations require a finite nonlinearity model order. A Gaussian KLMS filter has two design parameters: the step size and the Gaussian kernel bandwidth. Its design therefore requires analytical models of the algorithm behavior as a function of these two parameters. This paper studies the transient and steady-state behavior of the Gaussian KLMS algorithm for Gaussian inputs and a finite-order nonlinearity model. In particular, we derive recursive expressions for the mean weight-error vector and the mean-square error. The model predictions show excellent agreement with Monte Carlo simulations in both the transient phase and steady state. This allows the explicit analytical determination of stability limits and makes it possible to choose the algorithm parameters a priori in order to achieve a prescribed convergence speed and quality of the estimate. Design examples are presented which validate the theoretical analysis and illustrate its application.
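
    The algorithm analyzed above is, in its basic form, straightforward to implement; the sketch below is a standard Gaussian KLMS with a growing dictionary and no sparsification, included only to make the role of the two design parameters (step size and kernel bandwidth) concrete. The toy data and parameter values are illustrative.

```python
import numpy as np

def gaussian_klms(X, d, step=0.2, bandwidth=1.0):
    """Kernel LMS with a Gaussian kernel (growing dictionary, no sparsification).
    X: (N, m) input vectors, d: desired response. Returns predictions and errors."""
    gamma = 1.0 / (2.0 * bandwidth ** 2)
    centers, alphas = [], []              # dictionary of past inputs and their weights
    y_hat = np.zeros(len(d)); err = np.zeros(len(d))
    for n in range(len(d)):
        x = X[n]
        if centers:
            k = np.exp(-gamma * np.sum((np.array(centers) - x) ** 2, axis=1))
            y_hat[n] = np.dot(alphas, k)
        err[n] = d[n] - y_hat[n]
        centers.append(x)
        alphas.append(step * err[n])      # KLMS weight update: alpha_n = mu * e(n)
    return y_hat, err

# toy usage: identify a static nonlinearity from noisy samples
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 2))
d = np.tanh(X[:, 0] - 0.5 * X[:, 1]) + 0.05 * rng.standard_normal(1000)
_, err = gaussian_klms(X, d, step=0.2, bandwidth=1.0)
print("MSE over last 200 samples:", np.mean(err[-200:] ** 2))
```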

    Enhanced Nonlinear System Identification by Interpolating Low-Rank Tensors

    Function approximation from input and output data is one of the most investigated problems in signal processing. This problem has been tackled with various signal processing and machine learning methods. Although tensors have a rich history across numerous disciplines, tensor-based estimation has recently become of particular interest in system identification. In this paper we focus on the problem of adaptive nonlinear system identification solved with interpolated tensor methods. We introduce three novel approaches in which we combine existing tensor-based estimation techniques with multidimensional linear interpolation. To keep the complexity low, the algorithms employ a Wiener or Hammerstein structure, and the tensors are combined with the well-known LMS algorithm. The update of the tensor is based on a stochastic gradient descent concept. Moreover, an appropriate step-size normalization for the updates of the tensors and the LMS filter supports convergence. Finally, in several experiments we show that the proposed algorithms almost always clearly outperform the state-of-the-art methods at lower or comparable complexity.
    Comment: 12 pages, 4 figures, 3 tables
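
    The sketch below is a heavily simplified one-dimensional analogue of the idea: a linearly interpolated lookup table (the 1-D counterpart of an interpolated tensor) models the memoryless nonlinearity in a Hammerstein structure, and both the table and an FIR filter are adapted with normalized LMS. The grid, step sizes, and the use of an FIR linear part are illustrative assumptions, not the paper's algorithms.

```python
import numpy as np

def interp_hammerstein_nlms(x, d, grid=np.linspace(-3, 3, 17), fir_len=8,
                            mu_t=0.5, mu_w=0.5, eps=1e-6):
    """Simplified 1-D analogue of interpolated-tensor identification: a linearly
    interpolated lookup table models the memoryless nonlinearity of a Hammerstein
    system; the table and an FIR filter are adapted by normalized LMS."""
    t = grid.copy()                      # table values, initialized to an identity map
    w = np.zeros(fir_len); w[0] = 1.0
    N = len(x); e = np.zeros(N); s = np.zeros(N)
    for n in range(N):
        xi = np.clip(x[n], grid[0], grid[-1])
        i = max(min(np.searchsorted(grid, xi) - 1, len(grid) - 2), 0)
        lam = (xi - grid[i]) / (grid[i + 1] - grid[i])        # interpolation weight
        s[n] = (1 - lam) * t[i] + lam * t[i + 1]
        u = np.zeros(fir_len)
        m = min(n + 1, fir_len)
        u[:m] = s[n::-1][:m]                                  # recent nonlinearity outputs
        e[n] = d[n] - w @ u
        w += mu_w * e[n] * u / (u @ u + eps)                  # NLMS, linear part
        g = np.zeros_like(t)                                  # instantaneous gradient routed to
        g[i], g[i + 1] = (1 - lam) * w[0], lam * w[0]         # the two neighbouring grid points
        t += mu_t * e[n] * g / (g @ g + eps)                  # NLMS, table entries
    return t, w, e
```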

    On evolutionary system identification with applications to nonlinear benchmarks

    This paper presents a record of the participation of the authors in a workshop on nonlinear system identification held in 2016. It provides a summary of a keynote lecture by one of the authors and also gives an account of how the authors developed identification strategies and methods for a number of benchmark nonlinear systems presented as challenges, before and during the workshop. It is argued here that more general frameworks are now emerging for nonlinear system identification, which are capable of addressing substantial ranges of problems. One of these frameworks is based on evolutionary optimisation (EO); it is a framework developed by the authors in previous papers and extended here. As one might expect from the ‘no-free-lunch’ theorem for optimisation, the methodology is not particularly sensitive to the particular EO algorithm used, and a number of different variants, some used for the first time in system identification problems, are presented in this paper and show equal capability. In fact, the EO approach advocated in this paper succeeded in finding the best solutions to two of the three benchmark problems that motivated the workshop. The paper provides considerable discussion of the approaches used and makes a number of suggestions regarding best practice; one of the major new opportunities identified here concerns the application of grey-box models, which combine the insight of any prior physical-law-based models (white box) with the power of machine learners with universal approximation properties (black box).
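
    As a toy illustration of EO-based identification (not the authors' framework or benchmarks), the sketch below uses a basic differential evolution loop to fit the parameters of a simple nonlinear ARX-type model by minimizing the simulation error; the model, bounds, and DE settings are all illustrative assumptions.

```python
import numpy as np

def simulate(theta, u):
    """Simulated response of a simple nonlinear ARX-type model (illustrative only):
    y(n) = a*y(n-1) + b*u(n) + c*y(n-1)**3."""
    a, b, c = theta
    y = np.zeros(len(u))
    for n in range(1, len(u)):
        y[n] = a * y[n - 1] + b * u[n] + c * y[n - 1] ** 3
    return y

def differential_evolution(cost, bounds, pop=30, gens=150, F=0.7, CR=0.9, seed=0):
    """Basic DE/rand/1/bin minimizer of `cost` over the box `bounds`."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    P = lo + (hi - lo) * rng.random((pop, len(lo)))
    f = np.array([cost(p) for p in P])
    for _ in range(gens):
        for i in range(pop):
            r1, r2, r3 = rng.choice(np.delete(np.arange(pop), i), 3, replace=False)
            v = np.clip(P[r1] + F * (P[r2] - P[r3]), lo, hi)          # mutation
            mask = rng.random(len(lo)) < CR
            mask[rng.integers(len(lo))] = True                        # crossover
            trial = np.where(mask, v, P[i])
            fc = cost(trial)
            if fc < f[i]:                                             # greedy selection
                P[i], f[i] = trial, fc
    return P[np.argmin(f)], f.min()

rng = np.random.default_rng(2)
u = rng.standard_normal(500)
y_meas = simulate([0.8, 0.5, -0.05], u) + 0.01 * rng.standard_normal(500)
bounds = np.array([[-1, 1], [-1, 1], [-0.2, 0.2]], dtype=float)
theta_hat, mse = differential_evolution(lambda th: np.mean((simulate(th, u) - y_meas) ** 2), bounds)
print("estimated parameters:", np.round(theta_hat, 3))
```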

    Structured Hammerstein-Wiener Model Learning for Model Predictive Control

    This paper aims to improve the reliability of optimal control using models constructed by machine learning methods. Optimal control problems based on such models are generally non-convex and difficult to solve online. In this paper, we propose a model that combines the Hammerstein-Wiener model with input convex neural networks, which have recently been proposed in the field of machine learning. An important feature of the proposed model is that the resulting optimal control problems can be solved effectively by exploiting their convexity and partial linearity, while the model retains flexible modeling ability. The practical usefulness of the method is examined through its application to the modeling and control of an engine airpath system.
    Comment: 6 pages, 3 figures
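
    The sketch below shows a minimal (fully) input convex neural network of the kind referenced above, in plain numpy: the output is convex in the input because the hidden-to-hidden weights are kept non-negative and the activation is convex and non-decreasing. The layer sizes and initialization are illustrative; the Hammerstein-Wiener embedding, training loop, and MPC formulation from the paper are omitted.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)          # convex and non-decreasing activation

class ICNN:
    """Minimal fully input convex neural network: the output is convex in the
    input x because the z-to-z weights are non-negative and the activation is
    convex and non-decreasing."""
    def __init__(self, dims, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = [rng.standard_normal((d, dims[0])) * 0.3 for d in dims[1:]]
        self.Wz = [np.abs(rng.standard_normal((dims[k + 1], dims[k])) * 0.3)
                   for k in range(1, len(dims) - 1)]
        self.b = [np.zeros(d) for d in dims[1:]]

    def project(self):
        """Clip z-to-z weights to restore convexity after a gradient step."""
        self.Wz = [np.maximum(W, 0.0) for W in self.Wz]

    def forward(self, x):
        z = relu(self.Wx[0] @ x + self.b[0])
        for k, Wz in enumerate(self.Wz[:-1]):
            z = relu(Wz @ z + self.Wx[k + 1] @ x + self.b[k + 1])
        return self.Wz[-1] @ z + self.Wx[-1] @ x + self.b[-1]   # linear output layer

# scalar convex output over a 2-D input
net = ICNN(dims=[2, 8, 8, 1])
print(net.forward(np.array([0.3, -1.2])))
```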