
    Initializing Wiener-Hammerstein Models Based on Partitioning of the Best Linear Approximation

    This paper describes a new algorithm for initializing and estimating Wiener-Hammerstein models. The algorithm makes use of the best linear model of the system, which is split in all possible ways into two linear sub-models. For each possible split, a Wiener-Hammerstein model is initialized by introducing a nonlinearity between the two sub-models. The linear parameters of this nonlinearity can be estimated using least squares. All initialized models can then be ranked with respect to their fit. Typically, one is only interested in the best one, for which all parameters are fitted using prediction error minimization. The paper explains the algorithm and states the consistency of the initialization. Computational aspects are investigated, showing that in most realistic cases the number of splits of the initial linear model remains low enough to make the algorithm useful. The algorithm is illustrated on an example where the initialization is shown to be a tool for avoiding many local minima.
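    The split-and-rank procedure can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the pole-based partitioning and the polynomial form of the nonlinearity are assumptions.

```python
# Illustrative sketch (not the authors' code): enumerate every split of the
# best-linear-approximation (BLA) poles into a front and a back LTI block,
# then fit a polynomial nonlinearity in between by linear least squares.
import itertools
import numpy as np

def enumerate_splits(poles):
    """Yield each partition of the BLA poles into (front, back) sub-models."""
    n = len(poles)
    for k in range(n + 1):
        for front_idx in itertools.combinations(range(n), k):
            front = [poles[i] for i in front_idx]
            back = [poles[i] for i in range(n) if i not in front_idx]
            yield front, back

def fit_nonlinearity(x, y, degree=3):
    """Least-squares polynomial fit for the intermediate static nonlinearity."""
    X = np.vander(x, degree + 1)  # columns x^degree, ..., x, 1
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs
```

    With n poles there are 2^n candidate splits, which is why the paper's computational analysis matters; each candidate's least-squares fit is cheap, so ranking all candidates stays tractable for moderate n.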

    From Nonlinear Identification to Linear Parameter Varying Models: Benchmark Examples

    Linear parameter-varying (LPV) models form a powerful model class for analyzing and controlling a (nonlinear) system of interest. Identifying an LPV model of a nonlinear system can be challenging due to the difficulty of selecting the scheduling variable(s) a priori, especially when a first-principles understanding of the system is unavailable. This paper presents a systematic LPV embedding approach starting from nonlinear fractional representation models. A nonlinear system is first identified using a nonlinear block-oriented linear fractional representation (LFR) model. This nonlinear LFR model class is then embedded into the LPV model class by factorization of the static nonlinear block present in the model. The factorization yields an LPV-LFR or an LPV state-space model with affine dependency. This approach facilitates the selection of the scheduling variable from a data-driven perspective. Furthermore, the estimation is not affected by measurement noise on the scheduling variables, an issue that is often left untreated by LPV model identification methods. The proposed approach is illustrated on two well-established nonlinear modeling benchmark examples.
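    The factorization step can be illustrated on a scalar static nonlinearity: writing f(x) = p(x)·x (assuming f(0) = 0) turns p(x) into a measurable scheduling signal. The helper below is a minimal sketch under that assumption, not the paper's method.

```python
# Minimal sketch of factorizing a static nonlinearity f(x) = p(x) * x,
# so that p(x) can serve as an LPV scheduling signal (assumes f(0) = 0).
import numpy as np

def scheduling_signal(f, x, eps=1e-8):
    """Return p(x) = f(x)/x, falling back to f'(0) near x = 0."""
    x = np.asarray(x, dtype=float)
    safe = np.where(np.abs(x) < eps, 1.0, x)
    p = f(x) / safe
    # near zero, use a symmetric difference quotient for f'(0)
    slope0 = (f(eps) - f(-eps)) / (2 * eps)
    return np.where(np.abs(x) < eps, slope0, p)
```

    For f(x) = x + 0.5 x³ this gives p(x) = 1 + 0.5 x², so the embedded LPV model depends affinely on the measured scheduling signal p.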

    Identification of Stochastic Wiener Systems using Indirect Inference

    We study identification of stochastic Wiener dynamic systems using so-called indirect inference. The main idea is to first fit an auxiliary model to the observed data and then, in a second step, often by simulation, fit a more structured model to the estimated auxiliary model. This two-step procedure can be used when the direct maximum-likelihood estimate is difficult or intractable to compute. One such example is the identification of stochastic Wiener systems, i.e. linear dynamic systems with process noise where the output is measured using a nonlinear sensor with additive measurement noise. It is in principle possible to evaluate the log-likelihood cost function using numerical integration, but the corresponding optimization problem can be quite intricate. This motivates studying consistent, but sub-optimal, identification methods for stochastic Wiener systems. We consider indirect inference using the best linear approximation as an auxiliary model. We show that the key to obtaining a reliable estimate is to use uncertainty weighting when fitting the stochastic Wiener model to the auxiliary model estimate. The main technical contribution of this paper is the corresponding asymptotic variance analysis. A numerical evaluation is presented based on a first-order finite impulse response system with a cubic nonlinearity, for which certain illustrative analytic properties are derived. Comment: The 17th IFAC Symposium on System Identification, SYSID 2015, Beijing, China, October 19-21, 2015.
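    The two-step idea can be sketched numerically. The FIR auxiliary model, the cubic Wiener structure, and the grid search below are all illustrative assumptions, with inverse-variance weights playing the role the paper assigns to uncertainty weighting.

```python
# Hedged sketch of indirect inference for a Wiener system (illustrative
# model choices, not the paper's exact setup):
#   step 1: fit an auxiliary linear FIR model to the observed data;
#   step 2: pick the structured Wiener parameters whose simulated data
#           yield the closest auxiliary estimate, under a weighted norm.
import numpy as np

def fir_fit(u, y, order=2):
    """Auxiliary model: least-squares FIR coefficients (circular shifts)."""
    U = np.column_stack([np.roll(u, k) for k in range(order)])
    b, *_ = np.linalg.lstsq(U, y, rcond=None)
    return b

def simulate_wiener(theta, u):
    """Structured model: 2-tap FIR filter followed by a cubic nonlinearity."""
    lin = theta[0] * u + theta[1] * np.roll(u, 1)
    return lin + theta[2] * lin ** 3

def indirect_inference(u, y, weights, grid):
    """Match the auxiliary estimate of simulated vs. observed data."""
    beta_obs = fir_fit(u, y)
    def cost(th):
        beta_sim = fir_fit(u, simulate_wiener(th, u))
        return np.sum(weights * (beta_sim - beta_obs) ** 2)
    return min(grid, key=cost)
```

    In the paper the auxiliary model is the best linear approximation and the weighting comes from the auxiliary estimate's covariance; here a unit weight stands in.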

    Fast identification of Wiener-Hammerstein systems using discrete optimisation

    A fast identification algorithm for Wiener-Hammerstein systems is proposed. The computational cost of separating the dynamics of the front and back linear time-invariant (LTI) blocks is significantly reduced by using discrete optimisation, implemented as a genetic algorithm. Numerical results confirm the efficiency and accuracy of the proposed approach.
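    A genetic search over the discrete split assignments might look as follows. The bit encoding (0 = pole/zero assigned to the front block, 1 = back block) and the GA operators are illustrative assumptions, not the authors' implementation; the fitness function is a user-supplied stand-in.

```python
# Illustrative sketch: search binary front/back assignments of the BLA
# poles/zeros with a small elitist genetic algorithm (one-point crossover,
# bit-flip mutation on children only, so the best score never decreases).
import numpy as np

def genetic_search(fitness, n_bits, pop=20, gens=30, p_mut=0.1, seed=1):
    """Maximise `fitness` over {0,1}^n_bits; return the best individual found."""
    rng = np.random.default_rng(seed)
    P = rng.integers(0, 2, size=(pop, n_bits))
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in P])
        parents = P[np.argsort(scores)[-(pop // 2):]]   # keep the best half
        cut = int(rng.integers(1, n_bits))              # one-point crossover
        kids = np.concatenate([parents[::-1, :cut], parents[:, cut:]], axis=1)
        flips = rng.random(kids.shape) < p_mut          # mutate children only,
        kids = np.where(flips, 1 - kids, kids)          # parents survive intact
        P = np.concatenate([parents, kids])
    scores = np.array([fitness(ind) for ind in P])
    return P[np.argmax(scores)]
```

    In the paper's setting the fitness would measure how well the candidate front/back split explains the data; because parents survive unchanged, the best score is non-decreasing over generations.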

    Consistency aspects of Wiener-Hammerstein model identification in presence of process noise

    The Wiener-Hammerstein model is a block-oriented model consisting of two linear blocks with a static nonlinearity in the middle. Several identification approaches for this model structure rely on the fact that the best linear approximation of the system is a consistent estimate of the two linear parts, under the hypothesis of Gaussian excitation. However, these approaches do not consider disturbance sources other than measurement noise. In this paper we consider a disturbance entering before the nonlinearity (process noise) and show that, also in this case, the best linear approximation is a consistent estimate of the underlying linear dynamics. Furthermore, we analyse the impact of the process noise on the nonlinearity estimation, showing that a standard prediction error method can lead to biased results.

    Inference techniques for stochastic nonlinear system identification with application to the Wiener-Hammerstein models

    Stochastic nonlinear systems are a class of nonlinear systems in which unknown disturbances affect the system's output through a nonlinear transformation. In general, identifying parametric models for such systems can be very challenging. A central statistical inference technique for parameter estimation is the Maximum Likelihood estimator. The key object of this technique is the likelihood function, i.e. a mathematical expression describing the probability of obtaining certain observations for given values of the parameters. For many stochastic nonlinear systems, however, the likelihood function is not available in closed form. Several methods have been developed to obtain approximate solutions to the Maximum Likelihood problem, mainly based on the Monte Carlo method. One of the main difficulties of these methods is that they can be computationally expensive, especially when combined with numerical optimization techniques for likelihood maximisation. This thesis can be divided into three parts. In the first part, a background on the main statistical techniques for parameter estimation is presented. In particular, two iterative methods for finding the Maximum Likelihood estimator are introduced: the gradient-based and the Expectation-Maximisation algorithms. In the second part, the main Monte Carlo methods for approximating the Maximum Likelihood problem are analysed, together with their combination with gradient-based and Expectation-Maximisation algorithms. To guarantee convergence, these algorithms require an enormous Monte Carlo effort, i.e. a very large number of random samples to build the Monte Carlo estimates. To reduce this effort and make the algorithms usable in practice, iterative solutions alternating local Monte Carlo approximations and maximisation steps are derived. In particular, a procedure implementing efficient sample simulation across the steps of a Newton's method is developed. The procedure is based on the sensitivity of the parameter search with respect to the Monte Carlo samples, and it results in an accurate and fast algorithm for solving the MLE problem. The considered Maximum Likelihood estimation methods proceed through local explorations of the parameter space; hence, they are guaranteed to converge only to a local optimizer of the likelihood function. In the third part of the thesis, this issue is addressed by deriving initialization algorithms whose purpose is to generate initial guesses that increase the chances of converging to the global maximum. In particular, initialization algorithms are derived for the Wiener-Hammerstein model, i.e. a nonlinear model where a static nonlinearity is sandwiched between two linear parts. For this type of model, it can be proved that the best linear approximation of the system provides a consistent estimate of the two linear parts. This estimate is then used to initialize a Maximum Likelihood estimation problem in all model parameters.
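    The Monte Carlo likelihood approximation at the heart of these methods can be sketched on a toy stochastic Wiener model; the scalar model, the cubic nonlinearity, and the sample sizes below are assumptions for illustration only.

```python
# Toy sketch of a Monte Carlo likelihood for a stochastic Wiener model
#   x_t = b * u_t + w_t,   w_t ~ N(0, q)   (process noise)
#   y_t = f(x_t) + e_t,    e_t ~ N(0, r)   (measurement noise)
# The intractable integral over w_t is replaced by a sample average.
import numpy as np

def gauss_logpdf(y, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (y - mean) ** 2 / var)

def mc_loglik(theta, u, y, n_mc=500, seed=0):
    """Average the measurement density over sampled process-noise draws."""
    b, q, r = theta
    f = lambda x: x + 0.1 * x ** 3                         # assumed nonlinearity
    rng = np.random.default_rng(seed)
    w = np.sqrt(q) * rng.standard_normal((n_mc, len(u)))   # noise draws
    dens = np.exp(gauss_logpdf(y, f(b * u + w), r))        # shape (n_mc, T)
    return np.sum(np.log(np.mean(dens, axis=0)))           # sum over time
```

    Plugging such an approximation into a gradient-based or Expectation-Maximisation optimiser gives the kind of simulation-based Maximum Likelihood procedure the thesis studies; fixing the random seed across evaluations (common random numbers) is one way to keep the approximate cost function smooth during optimisation.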

    Identification of Continuous-Time Model of Hammerstein System Using Modified Multi-Verse Optimizer

    This thesis implements a novel nature-inspired metaheuristic optimization algorithm, namely the modified Multi-Verse Optimizer (mMVO), to identify a continuous-time model of the Hammerstein system. The Multi-Verse Optimizer (MVO) is one of the most recent robust nature-inspired metaheuristic algorithms. It has been successfully applied to optimization problems in areas such as machine learning, engineering, networking, and parameter control. However, such metaheuristics have limitations, including entrapment in local optima, low search capability, and an imbalance between exploration and exploitation. To address these limitations, two modifications were made to the conventional MVO in the proposed mMVO algorithm. The first is an average design-parameter updating mechanism that tackles the local-optima issue of the traditional MVO: it helps any trapped design parameter jump out of the local-optima region and continue along a new search track. The second is the hybridization of MVO with the Sine Cosine Algorithm (SCA) to improve the low search capability of the conventional MVO. The hybridization combines the advantages of the MVO and SCA algorithms while minimizing their disadvantages; in particular, the search capability of the MVO algorithm is improved using the sine and cosine functions of SCA, which balance exploration and exploitation. The mMVO-based method is then used to identify the parameters of the linear and nonlinear subsystems of the Hammerstein model from given input and output data; the structure of both subsystems is assumed to be known. 
Moreover, a continuous-time linear subsystem is considered in this study, whereas few existing methods handle such models. Two numerical examples and one real-world application, the Twin Rotor System (TRS), are used to illustrate the efficiency of the mMVO-based method. Various nonlinear subsystems such as quadratic and hyperbolic (sine and tangent) functions are used in these experiments. The numerical and experimental results are analysed in terms of the convergence curve of the fitness function, the parameter deviation index, the frequency- and time-domain responses, and the Wilcoxon rank test. For the numerical identifications, three levels of white-noise variance (0.01, 0.25, and 1.0) were considered, and the mean of the parameter deviation index was used to quantify the improvement achieved by the proposed algorithm. For Example 1, the improvements are 29%, 33.15%, and 36.68% for noise variances of 0.01, 0.25, and 1.0, respectively; for Example 2, they are 39.36%, 39.61%, and 66.18%, respectively. For the real TRS application, the improvement is 7%. The numerical and experimental results show that both Hammerstein model subsystems are identified effectively by the mMVO-based method, particularly in terms of the quadratic output-estimation error and the parameter deviation index. The results further confirm that the proposed mMVO-based method provides better solutions than other optimization techniques such as PSO, GWO, ALO, MVO, and SCA.
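    The sine/cosine position update that mMVO borrows from SCA can be sketched in isolation. This is the standard SCA update (not the thesis's full mMVO algorithm), and the sphere objective, bounds, and parameter values are placeholders.

```python
# Standalone sketch of the SCA sine/cosine update used in the hybridisation.
# Each candidate moves toward (or around) the best solution found so far,
# alternating sine and cosine steps; r1 shrinks over time, shifting the
# search from exploration to exploitation.
import numpy as np

def sca_minimise(f, dim=2, pop=10, iters=100, a=2.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, size=(pop, dim))
    best = X[int(np.argmin([f(x) for x in X]))].copy()
    for t in range(iters):
        r1 = a - t * a / iters                 # linearly decreasing amplitude
        r2 = rng.uniform(0, 2 * np.pi, X.shape)
        r3 = rng.uniform(0, 2, X.shape)
        r4 = rng.random(X.shape)
        step = r1 * np.where(r4 < 0.5, np.sin(r2), np.cos(r2)) * np.abs(r3 * best - X)
        X = X + step
        cand = X[int(np.argmin([f(x) for x in X]))]
        if f(cand) < f(best):                  # elitist destination update
            best = cand.copy()
    return best
```

    In the hybrid algorithm this update replaces part of the MVO position update, which is how the sine and cosine terms rebalance exploration and exploitation.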