
    Kernel-based system identification from noisy and incomplete input-output data

    In this contribution, we propose a kernel-based method for the identification of linear systems from noisy and incomplete input-output datasets. We model the impulse response of the system as a Gaussian process whose covariance matrix is given by the recently introduced stable spline kernel. We adopt an empirical Bayes approach to estimate the posterior distribution of the impulse response given the data. The noiseless and missing data samples, together with the kernel hyperparameters, are estimated by maximizing the joint marginal likelihood of the input and output measurements. To compute the marginal-likelihood maximizer, we build a solution scheme based on the Expectation-Maximization method. Simulations on a benchmark dataset show the effectiveness of the method.
    Comment: 16 pages, submitted to IEEE Conference on Decision and Control 201
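The Gaussian-process prior with a stable spline kernel can be sketched as follows for the simpler complete-data case with fixed hyperparameters (the paper's EM scheme for missing samples and hyperparameter estimation is not implemented here; the first-order stable spline/TC kernel form and the values of alpha, lambda, and the noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# First-order stable spline (TC) kernel: K[i, j] = alpha**max(i, j), 0 < alpha < 1.
# It encodes impulse responses that are smooth and exponentially decaying.
def stable_spline(n, alpha):
    idx = np.arange(1, n + 1)
    return alpha ** np.maximum.outer(idx, idx)

# Toy setup: estimate an impulse response g from outputs y = U g + noise,
# where U is the convolution (Toeplitz) matrix built from a known input u.
n, N = 30, 200
g_true = 0.8 ** np.arange(n) * np.sin(0.5 * np.arange(n))
u = rng.standard_normal(N)
U = np.array([[u[t - k] if t - k >= 0 else 0.0 for k in range(n)]
              for t in range(N)])
y = U @ g_true + 0.05 * rng.standard_normal(N)

# Empirical-Bayes-style estimate: posterior mean of g under the GP prior
# g ~ N(0, lam * K) with Gaussian measurement noise of variance sigma2.
K = stable_spline(n, alpha=0.8)
lam, sigma2 = 1.0, 0.05 ** 2
P = lam * K
g_hat = P @ U.T @ np.linalg.solve(U @ P @ U.T + sigma2 * np.eye(N), y)
```

The posterior mean is the standard GP regression formula; in the paper the hyperparameters and missing samples would instead be obtained by maximizing the marginal likelihood via EM.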

    Punctuated vortex coalescence and discrete scale invariance in two-dimensional turbulence

    We present experimental evidence and theoretical arguments showing that the time evolution of freely decaying 2-d turbulence is governed by a discrete time scale invariance rather than a continuous one. Physically, this reflects the fact that the merging of vortices proceeds not smoothly but in punctuated steps, leading to a preferred scale factor and, as a consequence, to log-periodic oscillations. From a thorough analysis of freely decaying 2-d turbulence experiments, we show that the number of vortices, their radius and their separation display log-periodic oscillations as a function of time, with an average log-frequency of ~ 4-5 corresponding to a preferred scaling ratio of ~ 1.2-1.3.
    Comment: 22 pages and 38 figures. Submitted to Physica
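Discrete scale invariance implies observables of the form N(t) ~ t**(-alpha) * (1 + A*cos(2*pi*f*ln(t) + phi)), i.e. oscillations periodic in ln(t) with log-frequency f. A minimal sketch of how such a log-frequency could be extracted (the exponent, amplitude, and f are illustrative values chosen near the reported log-frequency of ~ 4-5, not the experimental data):

```python
import numpy as np

# Synthetic log-periodic decay: power law modulated by a cosine in ln(t).
f_true, alpha, A = 4.5, 1.0, 0.1
t = np.exp(np.linspace(0.0, 3.0, 1024))       # uniform grid in ln(t)
N = t ** (-alpha) * (1.0 + A * np.cos(2 * np.pi * f_true * np.log(t)))

# Remove the leading power law by a linear fit in log-log coordinates.
logt, logN = np.log(t), np.log(N)
slope, intercept = np.polyfit(logt, logN, 1)
resid = logN - (slope * logt + intercept)

# The residuals are periodic in ln(t); the peak of their spectrum on the
# uniform ln(t) grid gives the log-frequency.
freqs = np.fft.rfftfreq(len(logt), d=logt[1] - logt[0])
spectrum = np.abs(np.fft.rfft(resid - resid.mean()))
f_est = freqs[np.argmax(spectrum)]
```

The recovered log-frequency f corresponds to a preferred scaling ratio exp(1/f), matching the ~ 1.2-1.3 range quoted for f ~ 4-5.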

    System Identification Based on Errors-In-Variables System Models

    We study the identification problem for errors-in-variables (EIV) systems. Such an EIV model assumes that the measurement data at both the input and the output of the system are corrupted by noise. The least squares (LS) algorithm has been widely used in this area, but it yields biased estimates for EIV-based system identification. In contrast, the total least squares (TLS) algorithm is well known to be unbiased and has been effective for estimating the system parameters in EIV system identification. In this dissertation, we first show that the TLS algorithm computes an approximate maximum likelihood estimate (MLE) of the system parameters and that the approximation error converges to zero asymptotically as the number of measurements approaches infinity. We then propose a graph subspace approach (GSA) to the same EIV-based identification problem and derive a new estimation algorithm that is more general than the TLS algorithm. Several numerical examples illustrate the proposed estimation algorithm.
    We also study EIV system identification without assuming equal noise variances at the system input and output. First, we review the Frisch scheme, a well-known method for estimating the noise variances. We then propose a new method, GSA combined with the Frisch scheme (GSA-Frisch), which estimates the ratio of the noise variances and the system parameters iteratively. Finally, a new identification algorithm is proposed that estimates the system parameters from the subspace interpretation without estimating the noise variances or their ratio. This algorithm is unbiased, achieves consistency of the parameter estimates, and has low complexity. Its performance is examined on several numerical examples and compared to the N4SID algorithm available in the MATLAB toolboxes, as well as to the GSA-Frisch algorithm.
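The LS bias and the classical SVD-based TLS fix can be illustrated on a static regression with noise on both sides (a simplification of the dynamic EIV setting; the noise levels and parameter values are illustrative assumptions, and this is the textbook TLS, not the dissertation's GSA algorithms):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic EIV data: true relation b = A @ theta, with noise on BOTH A and b.
n, p = 2000, 2
theta_true = np.array([1.5, -0.7])
A_clean = rng.standard_normal((n, p))
b_clean = A_clean @ theta_true
A = A_clean + 0.5 * rng.standard_normal((n, p))   # noisy input
b = b_clean + 0.5 * rng.standard_normal(n)        # noisy output

# Ordinary least squares: attenuated (biased toward zero) under input noise.
theta_ls = np.linalg.lstsq(A, b, rcond=None)[0]

# Total least squares: the right singular vector of the augmented matrix
# [A | b] associated with the smallest singular value gives the estimate.
_, _, Vt = np.linalg.svd(np.column_stack([A, b]))
v = Vt[-1]
theta_tls = -v[:p] / v[p]
```

With equal noise variances on input and output, TLS is consistent while LS is attenuated by the input-noise-to-signal ratio, which is the bias the abstract refers to.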

    A Convex Approach to Frisch-Kalman Problem

    This paper proposes a convex approach to the Frisch-Kalman problem of identifying the linear relations among variables from noisy observations. The problem was posed by Ragnar Frisch in the 1930s and was promoted and further developed by Rudolf Kalman in the 1980s. It is essentially a rank minimization problem with convex constraints, for which analytical results and heuristic methods have been pursued for over half a century. The convex method proposed in this paper is shown to be accurate and is demonstrated to outperform several commonly adopted heuristics when the noise components are relatively small compared with the underlying data.
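In the Frisch-Kalman setting, the data covariance is Sigma = Sigma0 + D, with Sigma0 rank-deficient (encoding the linear relations) and D an unknown diagonal noise covariance; minimizing rank(Sigma - D) over feasible D is the hard part. A minimal sketch of the classical equal-noise-variance special case, where the relation falls out of an eigendecomposition (this is a standard heuristic for illustration only, not the paper's convex method):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two variables obeying an exact linear relation, observed with equal
# noise variance on each: Sigma = Sigma0 + d * I with Sigma0 rank 1.
n = 5000
x = rng.standard_normal(n)
z_clean = np.column_stack([x, 2.0 * x])           # exact relation: z2 = 2 * z1
z = z_clean + 0.3 * rng.standard_normal((n, 2))   # equal noise on both variables

Sigma = np.cov(z, rowvar=False)

# With D = d * I, subtracting the smallest eigenvalue of Sigma makes the
# residual singular: d_hat estimates the noise variance, and the associated
# eigenvector is the normal vector of the linear relation.
eigvals, eigvecs = np.linalg.eigh(Sigma)           # ascending eigenvalues
d_hat = eigvals[0]
a = eigvecs[:, 0]                                  # a @ z ~ 0 for clean data
slope = -a[0] / a[1]                               # recovered coefficient
```

With unequal or unknown noise variances the feasible set of D is no longer a single point, which is exactly the ambiguity the rank-minimization formulation and its convex relaxation address.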

    Reconstructing cosmological initial conditions from galaxy peculiar velocities. I. Reverse Zeldovich Approximation

    We propose a new method to recover the cosmological initial conditions of the presently observed galaxy distribution, which can serve to run constrained simulations of the Local Universe. Our method, the Reverse Zeldovich Approximation (RZA), can be applied to radial galaxy peculiar velocity data and extends the previously used Constrained Realizations (CR) method by adding a Lagrangian reconstruction step. The RZA method consists of applying the Zeldovich approximation in reverse to galaxy peculiar velocities in order to estimate the cosmic displacement field and the initial linear matter distribution from which the present-day Local Universe evolved. We test our method with a mock survey taken from a cosmological simulation and show that the halo peculiar velocities at z = 0 are close to the linear prediction of the Zeldovich approximation if a grouping is applied to the data to remove virial motions. We find that adding RZA to the CR method significantly improves the reconstruction of the initial conditions. The RZA recovers the correct initial positions of the velocity tracers with a median error of only 1.36 Mpc/h in our test simulation; for realistic sparse and noisy data, this median increases to 5 Mpc/h. This is a significant improvement over the previous approach of neglecting the displacement field, which introduces errors on a scale of 10 Mpc/h or higher. Applying the RZA method to the upcoming high-quality observational peculiar velocity catalogues will generate much more precise constrained simulations of the Local Universe.
    Comment: Accepted for MNRAS 2012 December 1
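The reverse Zeldovich step itself is a one-line inversion of the linear relation between displacement and peculiar velocity. A minimal sketch (full 3-d velocities are used here for simplicity, whereas the paper works with radial components; the value of Omega_m and the growth-rate approximation f = Omega_m**0.55 are illustrative assumptions):

```python
import numpy as np

# In the Zeldovich approximation, a tracer's present comoving position is
# x = q + psi, where q is its initial (Lagrangian) position and psi the
# cosmic displacement; its linear peculiar velocity at z = 0 is
# v = H0 * f * psi. RZA inverts this: psi = v / (H0 * f), so q = x - psi.

H0 = 100.0             # km/s per Mpc/h, so distances come out in Mpc/h
Omega_m = 0.3          # assumed matter density parameter
f = Omega_m ** 0.55    # common approximation to the linear growth rate

def reverse_zeldovich(x, v):
    """Estimate Lagrangian positions q from positions x and peculiar velocities v."""
    psi = v / (H0 * f)
    return x - psi

# Round trip on synthetic tracers that obey the linear relation exactly.
rng = np.random.default_rng(3)
q_true = 100.0 * rng.random((4, 3))        # Mpc/h
psi = 5.0 * rng.standard_normal((4, 3))    # Mpc/h
x = q_true + psi
v = H0 * f * psi                           # km/s
q_est = reverse_zeldovich(x, v)
```

On real data the inversion is only approximate, which is why grouping (to remove virial motions) and the CR machinery are needed on top of this step.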

    Reply to E.T. Jaynes' and A. Zellner's comments on my two articles

