
    A new kernel-based approach for overparameterized Hammerstein system identification

    In this paper we propose a new identification scheme for Hammerstein systems, which are dynamic systems consisting of a static nonlinearity and a linear time-invariant dynamic system in cascade. We assume that the nonlinear function can be described as a linear combination of p basis functions. We reconstruct the p coefficients of the nonlinearity together with the first n samples of the impulse response of the linear system by estimating an np-dimensional overparameterized vector, which contains all the combinations of the unknown variables. To avoid high variance in these estimates, we adopt a regularized kernel-based approach and, in particular, we introduce a new kernel tailored for Hammerstein system identification. We show that the resulting scheme provides an estimate of the overparameterized vector that can be uniquely decomposed as the combination of an impulse response and p coefficients of the static nonlinearity. We also show, through several numerical experiments, that the proposed method compares very favorably with two standard methods for Hammerstein system identification.
    Comment: 17 pages, submitted to IEEE Conference on Decision and Control 201
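The overparameterization described above, in which the output is linear in all products of impulse-response samples and nonlinearity coefficients and each factor is recovered afterwards by a rank-1 decomposition, can be sketched as follows. This is a minimal illustration only: plain ridge regularization stands in for the paper's tailored kernel, and the system, basis functions, and all numerical values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Hammerstein system: static nonlinearity
# f(u) = c1*u + c2*u**2 followed by an FIR linear block g.
n, p, N = 8, 2, 400                      # FIR length, number of basis functions, samples
g_true = 0.8 ** np.arange(n)             # impulse response of the linear block
c_true = np.array([1.0, 0.5])            # coefficients of the nonlinearity
basis = [lambda u: u, lambda u: u**2]    # basis functions phi_j

u = rng.standard_normal(N)

# Build the np-dimensional overparameterized regressor:
# y(t) = sum_k sum_j g(k) c_j phi_j(u(t-k)) = theta^T r(t),
# where theta = vec(g c^T) stacks every product g(k) c_j.
Phi = np.column_stack([b(u) for b in basis])          # N x p
R = np.zeros((N, n * p))
for k in range(n):
    R[k:, k * p:(k + 1) * p] = Phi[:N - k]            # lagged basis outputs
theta_true = np.kron(g_true, c_true)
y = R @ theta_true + 0.05 * rng.standard_normal(N)

# Ridge-regularized estimate of the overparameterized vector
# (a plain ridge penalty stands in for the tailored kernel of the paper).
lam = 1e-2
theta_hat = np.linalg.solve(R.T @ R + lam * np.eye(n * p), R.T @ y)

# Unique decomposition: reshape to n x p and take the top rank-1 SVD pair,
# fixing the scaling ambiguity by normalizing the first nonlinearity coefficient.
Theta = theta_hat.reshape(n, p)
U, s, Vt = np.linalg.svd(Theta)
c_hat = Vt[0] / Vt[0, 0]
g_hat = U[:, 0] * s[0] * Vt[0, 0]
print("c_hat:", np.round(c_hat, 2))
print("g_hat:", np.round(g_hat, 2))
```

The rank-1 SVD step is what makes the decomposition of the estimated vector into an impulse response and nonlinearity coefficients unique up to the usual gain exchange between the two blocks.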

    Initializing Wiener-Hammerstein Models Based on Partitioning of the Best Linear Approximation

    This paper describes a new algorithm for initializing and estimating Wiener-Hammerstein models. The algorithm makes use of the best linear model of the system, which is split in all possible ways into two linear sub-models. For each possible split, a Wiener-Hammerstein model is initialized by introducing a nonlinearity between the two sub-models. The linear parameters of this nonlinearity can be estimated using least squares. All initialized models can then be ranked with respect to their fit; typically, one is only interested in the best one, for which all parameters are then fitted using prediction error minimization. The paper explains the algorithm and establishes the consistency of the initialization. Computational aspects are investigated, showing that in most realistic cases the number of splits of the initial linear model remains low enough to make the algorithm useful. The algorithm is illustrated on an example, showing that the initialization is a tool to avoid many local minima.
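One way to picture the enumeration is to assign each pole of the best linear approximation either to the front or to the back sub-model, estimate the nonlinearity coefficients by least squares for every assignment, and rank the splits by residual error. The sketch below does this for a small hypothetical system (an all-pole best linear approximation and a polynomial nonlinearity); it illustrates the idea rather than reproducing the paper's implementation, and every system parameter is invented for the example.

```python
import itertools
import numpy as np
from scipy.signal import lfilter, zpk2tf

rng = np.random.default_rng(1)

# Hypothetical best linear approximation: three real poles, zeros at the origin.
poles = np.array([0.5, -0.3, 0.7])
N = 500
u = rng.standard_normal(N)

# "True" system used to generate data: first pole in the front block,
# static nonlinearity x + 0.4*x**3 between the two linear sub-models.
b1, a1 = zpk2tf([], poles[:1], 1.0)
b2, a2 = zpk2tf([], poles[1:], 1.0)
x = lfilter(b1, a1, u)
y = lfilter(b2, a2, x + 0.4 * x**3) + 0.01 * rng.standard_normal(N)

# Enumerate every split of the poles between the two linear sub-models.
# For a fixed split, the polynomial coefficients of the nonlinearity enter
# the output linearly, so they can be estimated by least squares and the
# candidate splits ranked by their residual fit.
results = []
for k in range(len(poles) + 1):
    for front in itertools.combinations(range(len(poles)), k):
        back = [i for i in range(len(poles)) if i not in front]
        bf, af = zpk2tf([], poles[list(front)], 1.0)
        bb, ab = zpk2tf([], poles[back], 1.0)
        xf = lfilter(bf, af, u)                               # front-block output
        # Regressors: each monomial of xf filtered through the back block.
        R = np.column_stack([lfilter(bb, ab, xf**d) for d in (1, 3)])
        coef, *_ = np.linalg.lstsq(R, y, rcond=None)
        rms = np.sqrt(np.mean((y - R @ coef) ** 2))
        results.append((rms, front, coef))

best = min(results)
print("best split (front poles):", best[1], "coefficients:", np.round(best[2], 2))
```

With three poles there are only eight candidate splits, which matches the paper's observation that in most realistic cases the number of splits stays small enough for exhaustive enumeration; only the best-ranked split then needs the full prediction-error refinement.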