3,999 research outputs found

    Least Squares Based and Two-Stage Least Squares Based Iterative Estimation Algorithms for H-FIR-MA Systems

    This paper studies the identification of Hammerstein finite impulse response moving average (H-FIR-MA for short) systems. A new two-stage least squares iterative algorithm is developed to identify the parameters of the H-FIR-MA systems. The simulation examples indicate the effectiveness of the proposed algorithms.
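    The over-parameterization idea behind such Hammerstein identification schemes can be sketched in a few lines: the products of the linear-block and nonlinearity coefficients enter the model linearly, so ordinary least squares applies. A minimal numpy sketch, not the paper's two-stage algorithm; the system orders and coefficients here are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical Hammerstein FIR system: static nonlinearity
    # x(t) = c1*u(t) + c2*u(t)^2, linear block y(t) = b0*x(t) + b1*x(t-1) + noise.
    # The products b_i*c_j form an over-parameterized, linear-in-parameters model.
    n = 500
    u = rng.normal(size=n)
    x = 0.8 * u + 0.3 * u**2            # c1 = 0.8, c2 = 0.3
    y = np.empty(n)
    y[0] = x[0]
    y[1:] = x[1:] + 0.5 * x[:-1]        # b0 = 1.0, b1 = 0.5
    y += 0.01 * rng.normal(size=n)

    # Regressors u(t), u(t)^2, u(t-1), u(t-1)^2 for t >= 1.
    Phi = np.column_stack([u[1:], u[1:]**2, u[:-1], u[:-1]**2])
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    # theta estimates the products [b0*c1, b0*c2, b1*c1, b1*c2]
    ```

    The iterative algorithms in the paper refine this basic step; the sketch only shows why the problem is linear after over-parameterization.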

    Least squares-based iterative identification methods for linear-in-parameters systems using the decomposition technique

    By extending the least squares-based iterative (LSI) method, this paper presents a decomposition-based LSI (D-LSI) algorithm for identifying linear-in-parameters systems and an interval-varying D-LSI algorithm for handling the identification problems of missing-data systems. The basic idea is to apply the hierarchical identification principle to decompose the original system into two fictitious sub-systems and then to derive new iterative algorithms to estimate the parameters of each sub-system. Compared with the LSI algorithm and the interval-varying LSI algorithm, the decomposition-based iterative algorithms have a lower computational load. The numerical simulation results demonstrate that the proposed algorithms work quite well.
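    The decomposition idea can be illustrated on a toy linear-in-parameters model split into two fictitious subsystems: each parameter sub-vector is estimated by least squares while the other is held at its last iterate. A hedged sketch, with regressors and dimensions made up for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical model y = Phi_a @ a + Phi_b @ b + noise, decomposed into
    # two fictitious subsystems with parameter vectors a and b.
    n = 200
    Phi_a = rng.normal(size=(n, 2))
    Phi_b = rng.normal(size=(n, 3))
    a_true = np.array([1.0, -0.5])
    b_true = np.array([0.3, 0.7, -0.2])
    y = Phi_a @ a_true + Phi_b @ b_true + 0.01 * rng.normal(size=n)

    a, b = np.zeros(2), np.zeros(3)
    for _ in range(20):
        # Subsystem 1: estimate a with b fixed at its latest iterate.
        a, *_ = np.linalg.lstsq(Phi_a, y - Phi_b @ b, rcond=None)
        # Subsystem 2: estimate b with a fixed.
        b, *_ = np.linalg.lstsq(Phi_b, y - Phi_a @ a, rcond=None)
    ```

    Each sub-problem inverts a smaller normal matrix than the joint problem, which is where the lower computational load comes from.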

    Combined state and parameter estimation for Hammerstein systems with time-delay using the Kalman filtering

    This paper discusses the state and parameter estimation problem for a class of Hammerstein state space systems with time-delay. Both the process noise and the measurement noise are considered in the system. Based on the observable canonical state space form and the key term separation, a pseudo-linear regressive identification model is obtained. For the unknown states in the information vector, the Kalman filter is used to search for the optimal state estimates. A Kalman filter-based least squares iterative algorithm and a Kalman filter-based recursive least squares algorithm are proposed. By extending the information vector to include the latest information terms that are missing due to the time-delay, the Kalman filter-based recursive extended least squares algorithm is derived to obtain estimates of the unknown time-delay, parameters and states. Numerical simulation results are given to illustrate the effectiveness of the proposed algorithms.
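    For reference, the Kalman filter recursion that such combined estimators interleave with a parameter update can be sketched as follows, here for a known-parameter linear state space model. The matrices and noise levels are illustrative, and the parameter-estimation half of the paper's algorithms is omitted:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Illustrative model: x(t+1) = A x(t) + B u(t) + w(t), y(t) = C x(t) + v(t).
    A = np.array([[0.9, 0.2], [0.0, 0.7]])
    B = np.array([[0.5], [1.0]])
    C = np.array([[1.0, 0.0]])
    Q, R = 0.01 * np.eye(2), np.array([[0.04]])

    n = 300
    u = rng.normal(size=(n, 1))
    x = np.zeros(2)
    xs, ys = [], []
    for t in range(n):                      # simulate the true system
        xs.append(x)
        ys.append(C @ x + 0.2 * rng.normal(size=1))
        x = A @ x + B @ u[t] + 0.1 * rng.normal(size=2)

    xh, P = np.zeros(2), np.eye(2)          # state estimate and covariance
    err = []
    for t in range(n):
        # Measurement update: correct with the innovation.
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
        xh = xh + K @ (ys[t] - C @ xh)
        P = (np.eye(2) - K @ C) @ P
        err.append(np.linalg.norm(xh - xs[t]))
        # Time update: predict one step ahead.
        xh = A @ xh + B @ u[t]
        P = A @ P @ A.T + Q
    ```

    In the combined algorithms, the filtered state estimates replace the unknown states in the information vector at each recursion.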

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. Comment: 232 pages.
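    The tensor train format mentioned above can be computed, in its simplest form, by the TT-SVD procedure: repeatedly unfold the tensor and take an SVD, keeping the left factor as a 3-way core. A small numpy sketch on a random order-4 tensor; here the decomposition is exact, whereas in practice a truncation tolerance would control the TT ranks:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    T = rng.normal(size=(4, 5, 6, 7))   # a small order-4 tensor

    dims = T.shape
    cores, r, M = [], 1, T
    for d in dims[:-1]:
        M = M.reshape(r * d, -1)                    # unfold: (rank*mode, rest)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = int((s > 1e-10 * s[0]).sum())        # numerical rank (no truncation here)
        cores.append(U[:, :rank].reshape(r, d, rank))
        M = s[:rank, None] * Vt[:rank]              # carry the remainder forward
        r = rank
    cores.append(M.reshape(r, dims[-1], 1))         # final core

    # Contract the train back together to verify the factorization.
    R = cores[0]
    for G in cores[1:]:
        R = np.tensordot(R, G, axes=(R.ndim - 1, 0))
    R = R.reshape(dims)
    ```

    Storage drops from the product of all mode sizes to a sum of small core sizes whenever the TT ranks are low, which is the source of the "super-compression" the abstract refers to.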

    Data filtering-based least squares iterative algorithm for Hammerstein nonlinear systems by using the model decomposition

    This paper focuses on the iterative identification problems for a class of Hammerstein nonlinear systems. By decomposing the system into two fictitious subsystems, a decomposition-based least squares iterative algorithm is presented for estimating the parameter vector in each subsystem. Moreover, a data filtering-based decomposition least squares iterative algorithm is proposed. The simulation results indicate that the data filtering-based least squares iterative algorithm can generate more accurate parameter estimates than the least squares iterative algorithm.
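    The benefit of data filtering can be seen in miniature: filtering the input-output data with the noise model whitens a colored disturbance, so least squares on the filtered data is more accurate. A sketch under the simplifying assumption that the AR noise coefficient is known; the cited algorithms estimate it jointly with the system parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Toy system y(t) = b u(t-1) + w(t) with AR(1) noise w(t) = c w(t-1) + e(t).
    b, c, n = 0.5, 0.9, 3000
    u = rng.normal(size=n)
    e = 0.1 * rng.normal(size=n)
    w = np.zeros(n)
    for t in range(1, n):
        w[t] = c * w[t - 1] + e[t]
    y = np.zeros(n)
    y[1:] = b * u[:-1] + w[1:]

    # Unfiltered least squares: consistent here, but sees the colored noise w.
    b_raw = (u[:-1] @ y[1:]) / (u[:-1] @ u[:-1])

    # Filter both signals with the noise polynomial (1 - c q^-1):
    # yf(t) = b uf(t) + e(t), i.e. the disturbance becomes white.
    yf = y[2:] - c * y[1:-1]
    uf = u[1:-1] - c * u[:-2]
    b_filt = (uf @ yf) / (uf @ uf)
    ```

    Because the filtered regression has white noise of much smaller variance, `b_filt` typically has a noticeably smaller error than `b_raw`, mirroring the comparison reported in the abstract.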

    Adaptive filtering-based multi-innovation gradient algorithm for input nonlinear systems with autoregressive noise

    In this paper, by means of the adaptive filtering technique and the multi-innovation identification theory, an adaptive filtering-based multi-innovation stochastic gradient identification algorithm is derived for Hammerstein nonlinear systems with colored noise. The new adaptive filtering configuration consists of a noise whitening filter and a parameter estimator. The simulation results show that the proposed algorithm achieves higher parameter estimation accuracy and a faster convergence rate than the multi-innovation stochastic gradient algorithm for the same innovation length. As the innovation length increases, the filtering-based multi-innovation stochastic gradient algorithm gives smaller parameter estimation errors than the recursive least squares algorithm.
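    A multi-innovation stochastic gradient update reuses a window of the latest p innovations in each step instead of only the newest one. A minimal sketch on a plain linear regression, not the paper's filtered Hammerstein setting; the dimensions, innovation length, and initial gain are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Linear regression y(t) = phi(t)' theta + v(t); p = 1 recovers plain SG.
    n, p = 2000, 5
    theta_true = np.array([0.6, -0.4, 0.25])
    Phi = rng.normal(size=(n, 3))
    y = Phi @ theta_true + 0.05 * rng.normal(size=n)

    theta = np.zeros(3)
    r = 100.0                       # conservative initial gain normalizer
    for t in range(p, n):
        P = Phi[t - p + 1 : t + 1]              # stacked regressors, shape (p, 3)
        E = y[t - p + 1 : t + 1] - P @ theta    # stacked (multi-)innovations
        r += np.linalg.norm(Phi[t]) ** 2        # SG-type decreasing step size
        theta = theta + (P.T @ E) / r
    ```

    Enlarging the window uses more of the recent data per step, which is what speeds up convergence relative to the single-innovation gradient algorithm.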

    Hierarchical gradient- and least squares-based iterative algorithms for input nonlinear output-error systems using the key term separation

    This paper considers the parameter identification problems of input nonlinear output-error (IN-OE) systems, that is, Hammerstein output-error systems. To overcome the excessive computational cost of the over-parameterization method for IN-OE systems, we apply the hierarchical identification principle, decompose the IN-OE system into three subsystems with smaller numbers of parameters, and present the key term separation auxiliary model hierarchical gradient-based iterative algorithm and the key term separation auxiliary model hierarchical least squares-based iterative algorithm, which are called the key term separation auxiliary model three-stage gradient-based iterative algorithm and the key term separation auxiliary model three-stage least squares-based iterative algorithm. A comparison of the computational cost and the simulation analysis indicate that the proposed algorithms are effective. (c) 2021 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.

    Parameter and State Estimator for State Space Models

    This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables; the resulting equation contains only the system inputs and outputs and is used to derive a least squares parameter identification algorithm. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
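    The elimination idea is easy to see in a first-order example: substituting the state equation into the output equation yields a regression in inputs and outputs only, after which the states are replayed from the estimated parameters. A hedged numpy sketch with a scalar state and small measurement noise (so the correlation between regressor and noise is negligible); the paper's setting is the general canonical form:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # x(t+1) = a x(t) + b u(t),  y(t) = x(t) + v(t).
    # Eliminating x gives y(t+1) = a y(t) + b u(t) + noise: linear in (a, b).
    a, b, n = 0.7, 1.0, 2000
    u = rng.normal(size=n)
    x = np.zeros(n)
    for t in range(n - 1):
        x[t + 1] = a * x[t] + b * u[t]
    y = x + 0.01 * rng.normal(size=n)

    # Least squares on the input-output equation.
    Phi = np.column_stack([y[:-1], u[:-1]])
    (a_hat, b_hat), *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)

    # Recover the state sequence from the estimated parameters and the inputs.
    x_hat = np.zeros(n)
    for t in range(n - 1):
        x_hat[t + 1] = a_hat * x_hat[t] + b_hat * u[t]
    ```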

    Two Identification Methods for Dual-Rate Sampled-Data Nonlinear Output-Error Systems

    This paper presents two identification methods for dual-rate sampled-data nonlinear output-error systems. One is a missing output estimation based stochastic gradient identification algorithm, and the other is an auxiliary model based stochastic gradient identification algorithm. Unlike polynomial transformation based identification methods, the two methods in this paper can estimate the unknown parameters directly. A numerical example is provided to confirm the effectiveness of the proposed methods.
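    The auxiliary model idea behind the second method can be sketched for a simple single-rate output-error model: the unmeasurable noise-free output needed in the regressor is replaced by the output of the estimated model itself. A minimal stochastic gradient sketch; the coefficients, noise level, and run length are illustrative assumptions, and the dual-rate bookkeeping is omitted:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Output-error model: x(t) = a x(t-1) + b u(t-1),  y(t) = x(t) + v(t).
    a, b, n = 0.6, 0.8, 20000
    u = rng.normal(size=n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + b * u[t - 1]
    y = x + 0.05 * rng.normal(size=n)

    theta = np.zeros(2)     # estimates of (a, b)
    xa_prev, r = 0.0, 1.0   # auxiliary model output and gain normalizer
    for t in range(1, n):
        # The true x(t-1) is unmeasurable; use the auxiliary model output.
        phi = np.array([xa_prev, u[t - 1]])
        r += phi @ phi
        theta = theta + phi * (y[t] - phi @ theta) / r
        xa_prev = phi @ theta           # update the auxiliary model output
    ```

    Stochastic gradient recursions of this kind converge slowly but cheaply; the abstracts above trade them off against least squares variants for exactly this reason.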