626 research outputs found

    From Parameter Estimation to Dispersion of Nonstationary Gauss-Markov Processes

    This paper provides a precise error analysis for the maximum likelihood estimate â(u) of the parameter a given samples u = (u_1, …, u_n)^⊤ drawn from a nonstationary Gauss-Markov process U_i = aU_{i−1} + Z_i, i ≥ 1, where a > 1, U_0 = 0, and the Z_i's are independent Gaussian random variables with zero mean and variance σ². We show a tight nonasymptotic, exponentially decaying bound on the tail probability of the estimation error. Unlike previous works, our bound is tight already for a sample size of the order of hundreds. We apply the new estimation bound to find the dispersion for lossy compression of nonstationary Gauss-Markov sources. We show that the dispersion is given by the same integral formula derived in our previous work [1] for the (asymptotically) stationary Gauss-Markov sources, i.e., |a| < 1. New ideas in the nonstationary case include a deeper understanding of the scaling of the maximum eigenvalue of the covariance matrix of the source sequence, and new techniques in the derivation of our estimation error bound.
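    As a concrete illustration of the estimator analyzed above, the following sketch (illustrative names, not the authors' code) simulates the nonstationary process and computes the maximum likelihood estimate â, which for this model with known noise variance reduces to the least-squares ratio Σ u_i u_{i−1} / Σ u_{i−1}²:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_gauss_markov(a, sigma, n, rng):
    """Simulate U_i = a*U_{i-1} + Z_i for i = 1..n, with U_0 = 0."""
    u = np.zeros(n + 1)
    z = rng.normal(0.0, sigma, size=n)
    for i in range(1, n + 1):
        u[i] = a * u[i - 1] + z[i - 1]
    return u

def mle_a(u):
    """ML estimate of a (known noise variance): the least-squares ratio
    sum(u_i * u_{i-1}) / sum(u_{i-1}^2)."""
    return float(np.dot(u[1:], u[:-1]) / np.dot(u[:-1], u[:-1]))

u = simulate_gauss_markov(a=1.2, sigma=1.0, n=200, rng=rng)
a_hat = mle_a(u)
print(a_hat)  # very close to 1.2: for a > 1 the error decays exponentially in n
```

    Consistent with the abstract's claim, for a > 1 the sample paths grow geometrically, so the estimate is already extremely accurate at a few hundred samples.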

    The Dispersion of the Gauss-Markov Source

    The Gauss-Markov source produces U_i = aU_{i−1} + Z_i for i ≥ 1, where U_0 = 0, |a| < 1, and the Z_i's are i.i.d. Gaussian random variables with zero mean and variance σ². We consider any distortion level d > 0, and we show that the dispersion has a reverse waterfilling representation. This is the first finite blocklength result for lossy compression of sources with memory. We prove that the finite blocklength rate-distortion function R(n, d, ε) approaches the rate-distortion function R(d) as R(n, d, ε) = R(d) + √(V(d)/n) Q^{−1}(ε) + o(1/√n), where V(d) is the dispersion, ε ∈ (0, 1) is the excess-distortion probability, and Q^{−1} is the inverse of the Q-function. We give a reverse waterfilling integral representation for the dispersion V(d), which parallels that of the rate-distortion functions for Gaussian processes. Remarkably, for all 0 < d ≤ σ²/(1+|a|)², R(n, d, ε) of the Gauss-Markov source coincides with that of Z_i, the i.i.d. Gaussian noise driving the process, up to the second-order term. Among the novel technical tools developed in this paper are a sharp approximation of the eigenvalues of the covariance matrix of n samples of the Gauss-Markov source, and a construction of a typical set using the maximum likelihood estimate of the parameter a based on n observations.
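    To make the second-order expansion concrete, here is a hedged numeric sketch for the i.i.d. Gaussian case: R(d) = ½ ln(σ²/d) nats, and the dispersion of the memoryless Gaussian source is V(d) = 1/2 nats² (a known result, taken as an assumption here; per the abstract, the Gauss-Markov source shares this second-order behavior for small enough d). Function names are illustrative:

```python
from math import log, sqrt
from statistics import NormalDist

def rd_gaussian(sigma2, d):
    """Rate-distortion function of an i.i.d. N(0, sigma2) source, in nats:
    R(d) = (1/2) ln(sigma2 / d) for 0 < d < sigma2."""
    return 0.5 * log(sigma2 / d)

def rate_second_order(n, d, eps, sigma2=1.0, V=0.5):
    """Gaussian approximation R(n, d, eps) ~ R(d) + sqrt(V/n) * Q^{-1}(eps),
    with V = 1/2 nats^2 assumed for the memoryless Gaussian source."""
    q_inv = NormalDist().inv_cdf(1.0 - eps)  # Q^{-1}(eps)
    return rd_gaussian(sigma2, d) + sqrt(V / n) * q_inv

r_d = rd_gaussian(1.0, 0.25)             # 0.5 * ln(4), about 0.693 nats
r_n = rate_second_order(1000, 0.25, 0.1)
print(r_d, r_n)  # at n = 1000 the rate exceeds R(d) by the dispersion term
```

    The gap r_n − r_d shrinks like 1/√n, which is exactly the finite-blocklength penalty the dispersion quantifies.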

    Latent Gaussian modeling and INLA: A review with focus on space-time applications

    Bayesian hierarchical models with latent Gaussian layers have proven very flexible in capturing complex stochastic behavior and hierarchical structures in high-dimensional spatial and spatio-temporal data. Whereas simulation-based Bayesian inference through Markov chain Monte Carlo may be hampered by slow convergence and numerical instabilities, the inferential framework of Integrated Nested Laplace Approximation (INLA) is capable of providing accurate and relatively fast analytical approximations to posterior quantities of interest. It relies heavily on the use of Gauss-Markov dependence structures to avoid the numerical bottleneck of high-dimensional nonsparse matrix computations. With a view towards space-time applications, we here review the principal theoretical concepts, model classes and inference tools within the INLA framework. Important elements for constructing space-time models are certain spatial Matérn-like Gauss-Markov random fields, obtained as approximate solutions to a stochastic partial differential equation. Efficient implementation of statistical inference tools for a large variety of models is available through the INLA package of the R software. To showcase the practical use of R-INLA and to illustrate its principal commands and syntax, a comprehensive simulation experiment is presented using simulated non-Gaussian space-time count data with a first-order autoregressive dependence structure in time.
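    The computational point about Gauss-Markov dependence structures can be seen in a small sketch (illustrative code, not R-INLA; ar1_precision is a hypothetical helper): the precision matrix of a stationary AR(1) Gauss-Markov random field is tridiagonal, with only O(n) nonzeros, even though its inverse, the covariance matrix, is fully dense:

```python
import numpy as np

def ar1_precision(n, a, sigma2=1.0):
    """Precision (inverse covariance) matrix of a stationary AR(1) GMRF.
    Tridiagonal sparsity is what INLA-style inference exploits to avoid
    dense O(n^3) linear algebra."""
    Q = np.zeros((n, n))
    idx = np.arange(n)
    Q[idx, idx] = 1.0 + a**2        # interior diagonal entries
    Q[0, 0] = Q[-1, -1] = 1.0       # boundary corrections
    Q[idx[:-1], idx[1:]] = -a       # super-diagonal
    Q[idx[1:], idx[:-1]] = -a       # sub-diagonal
    return Q / sigma2

Q = ar1_precision(6, a=0.7)
Sigma = np.linalg.inv(Q)
# The stationary AR(1) covariance is sigma2/(1-a^2) * a^{|i-j|}, so:
print(Sigma[0, 0] * (1 - 0.7**2))   # recovers sigma2 = 1
print(np.count_nonzero(Q))          # 3n - 2 nonzeros, not n^2
```

    The dense covariance versus sparse precision contrast is the essence of why latent-Gaussian models are parameterized through Q rather than Σ.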

    Recursive Estimation in Econometrics

    An account is given of recursive regression and of Kalman filtering which gathers the important results and the ideas that lie behind them within a small compass. It emphasises the areas in which econometricians have made contributions, which include the methods for handling the initial-value problem associated with nonstationary processes and the algorithms of fixed-interval smoothing.
    Keywords: recursive regression, Kalman filtering, fixed-interval smoothing, the initial-value problem.
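    A minimal sketch of the recursive-regression idea (not from the paper; names are illustrative): each new observation updates the coefficient estimate through a Kalman-style gain, and a diffuse initial covariance is one common way of handling the initial-value problem mentioned above.

```python
import numpy as np

def recursive_least_squares(X, y, P0_scale=1e6):
    """Recursive (online) least squares: update beta one observation at a
    time. The large initial P is a diffuse prior, a standard device for
    the initial-value problem."""
    k = X.shape[1]
    beta = np.zeros(k)
    P = P0_scale * np.eye(k)                 # "covariance" of the estimate
    for x_t, y_t in zip(X, y):
        Px = P @ x_t
        gain = Px / (1.0 + x_t @ Px)         # Kalman-style gain vector
        err = y_t - x_t @ beta               # one-step prediction error
        beta = beta + gain * err             # coefficient update
        P = P - np.outer(gain, Px)           # covariance downdate
    return beta

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=200)
beta_hat = recursive_least_squares(X, y)
print(beta_hat)  # close to the true coefficients [2.0, -1.0]
```

    With a diffuse prior, the recursion converges to the ordinary least-squares solution; the same updates, with a state-transition step added, give the Kalman filter.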

    Empirical testing for bubbles during the inter-war European hyperinflations

    Get PDF
    In this thesis, I undertake an empirical search for the existence of price and exchange rate bubbles during the inter-war European hyperinflations of Germany, Hungary and Poland. Since the choice of an appropriate policy to control inflation depends upon the true nature of the underlying process generating the inflation, the existence or non-existence of inflationary bubbles has important policy implications. If bubbles do exist, positive action will be required to counter the public's self-fulfilling expectation of a price surge. Hyperinflationary episodes have been chosen as my case study because of the dominant role that such expectations play in price determination. In the literature, there are frequently expressed concerns about empirical research into bubbles. The existence of model misspecification and the nonlinear dynamics in the fundamentals under conditions of regime switching may lead to spurious conclusions concerning the existence of bubbles. Furthermore, some stochastic bubbles may display different collapsing properties and consequently appear to be linearly stationary. Thus, the evidence against the existence of bubbles may not be reliable. In my thesis, I attempt to tackle the above empirical problems of testing for the existence of bubbles using advances in testing procedures and methodologies. Since the number of bubble solutions is infinite in the rational expectations framework, I adopt indirect tests, rather than direct tests, for the empirical study. From the findings of my empirical research, the evidence for stationary specification errors and the nonlinearity of the data series cannot be rejected, but the evidence for the existence of price and exchange rate bubbles is rejected for all the countries under study. 
This leads to the conclusion that the control of the inter-war European hyperinflations was attributable to control of the fundamental processes, since the dynamics of prices and exchange rates in these countries may not have been driven by self-fulfilling expectations.
    • ā€¦
    corecore