    Bias Correction of ML and QML Estimators in the EGARCH(1,1) Model

    In this paper we derive the bias approximations of the Maximum Likelihood (ML) and Quasi-Maximum Likelihood (QML) estimators of the EGARCH(1,1) parameters and we check our theoretical results through simulations. With the approximate bias expressions up to O(1/T), we are then able to correct the bias of all estimators. To this end, a Monte Carlo exercise is conducted and the results are presented and discussed. We conclude that, for given sets of parameter values, the bias correction works satisfactorily for all parameters. The results for the bias expressions can be used to formulate the approximate Edgeworth distribution of the estimators.
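
    As an illustration of how an O(1/T) bias approximation translates into a corrected estimator, the sketch below applies the generic correction theta_c = theta_hat - b(theta_hat)/T. The bias function b used here is a placeholder; the actual EGARCH(1,1) expressions are derived in the paper and are not reproduced.

        import numpy as np

        def bias_corrected(theta_hat, bias_fn, T):
            # Generic O(1/T) correction: subtract the approximate bias,
            # evaluated at the uncorrected estimate, from the estimate itself.
            return theta_hat - bias_fn(theta_hat) / T

        # Hypothetical bias function standing in for the paper's analytical
        # EGARCH(1,1) expressions (placeholder values only).
        def b(theta):
            return np.array([0.5, -1.2, 0.8]) * theta

        theta_hat = np.array([0.10, 0.90, -0.05])   # uncorrected ML/QML estimates
        T = 1000                                    # sample size
        print(bias_corrected(theta_hat, b, T))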

    Edgeworth and Moment Approximations: The Case of MM and QML Estimators for the MA(1) Models

    Extending the results in Sargan (1976) and Tanaka (1984), we derive asymptotic expansions, of the Edgeworth and Nagar type, for the MM and QML estimators of the first-order autocorrelation and the MA parameter in the MA(1) model. It turns out that the asymptotic properties of the estimators depend on whether the mean of the process is known or estimated. A comparison of the Nagar expansions, in terms of either bias or MSE, reveals that neither estimator is uniformly superior when the mean of the process is estimated. This is also confirmed by simulations. In the zero-mean case, and on theoretical grounds, the QMLEs are superior to the MM estimators in both bias and MSE terms. The results presented here are important for choosing the estimation method, as well as for bias reduction and for increasing the efficiency of the estimators.
    Keywords: Edgeworth expansion, moving average process, method of moments, Quasi-Maximum Likelihood, autocorrelation, asymptotic properties.
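
    A minimal sketch of the MM estimator discussed above, under the assumption of an invertible MA(1): the first-order sample autocorrelation r1 is inverted through rho_1 = theta/(1 + theta^2), keeping the root with |theta| < 1. The sample size and parameter value are purely illustrative.

        import numpy as np

        def mm_ma1(y):
            # Method-of-moments estimate of the MA(1) parameter: invert
            # rho_1 = theta / (1 + theta^2), keeping the invertible root.
            y = y - y.mean()
            r1 = np.sum(y[1:] * y[:-1]) / np.sum(y**2)
            if r1 == 0.0:
                return 0.0
            if abs(r1) >= 0.5:                      # no real invertible root exists
                return np.sign(r1) * 1.0
            return (1.0 - np.sqrt(1.0 - 4.0 * r1**2)) / (2.0 * r1)

        rng = np.random.default_rng(0)
        theta, T = 0.5, 500
        e = rng.standard_normal(T + 1)
        y = e[1:] + theta * e[:-1]                  # simulate an MA(1) process
        print(mm_ma1(y))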

    Stochastic Expansions and Moment Approximations for Three Indirect Estimators

    This paper deals with the properties of three indirect estimators that are known to be (first-order) asymptotically equivalent. Specifically, we examine (a) the validity of the formal Edgeworth expansion of an arbitrary order, and (b) given (a), valid moment approximations, which we employ to characterize the second-order bias structure of the estimators. Our motivation resides in the fact that one of the three is reported in the relevant literature to be second-order unbiased; however, this result was derived without any establishment of validity. We provide this establishment, and we also substantially generalize the conditions under which this second-order property remains true. In this way, we essentially prove the higher-order inequivalence of the estimators. We further generalize indirect estimators by introducing recursive ones, emerging from multistep optimization procedures, and we establish higher-order unbiasedness for estimators of this sort.
    Keywords: Asymptotic Approximation, Second Order Bias Structure, Binding Function, Local Canonical Representation, Convex Variational Distance, Recursive Indirect Estimators, Higher Order Bias.
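
    The flavor of indirect estimation referred to above can be sketched as follows: an auxiliary statistic is computed on the observed data, the structural model is simulated for candidate parameter values, and the parameter is chosen so that the simulated auxiliary statistic matches the observed one (the binding function is matched numerically). The structural model and auxiliary statistic below are deliberately simple placeholders, not those of the paper.

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(1)

        def simulate_ma1(theta, T, rng):
            e = rng.standard_normal(T + 1)
            return e[1:] + theta * e[:-1]

        def aux_stat(y):
            # Auxiliary statistic: OLS coefficient of an AR(1) regression.
            return np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)

        T, H = 400, 10                        # sample size, number of simulated paths
        y_obs = simulate_ma1(0.6, T, rng)     # pretend these are the observed data
        beta_hat = aux_stat(y_obs)

        def objective(theta):
            sims = [aux_stat(simulate_ma1(theta, T, np.random.default_rng(s)))
                    for s in range(H)]        # fixed seeds: common random numbers
            return (beta_hat - np.mean(sims)) ** 2

        print(minimize_scalar(objective, bounds=(-0.95, 0.95), method="bounded").x)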

    The Autocorrelation Function of Conditionally Heteroskedastic in Mean Models

    This paper discusses statistical properties of conditionally heteroskedastic in mean models. We derive the autocovariance function of an observed series under the assumption that the conditional variance follows a flexible parameterization, which nests a variety of specifications, including models for the variance, the standard deviation, or their logarithm. Furthermore, the mean parameter can be time-varying. We also present the autocovariance function of the squared residuals. Our results can be used to compare the properties of the observed data with the theoretical properties of the models, thus facilitating model identification.
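
    One way such results are used in practice is to compare the sample autocovariances of the data with the model-implied ones. The sketch below computes only the sample side, for a simulated GARCH(1,1)-in-mean series (one special case of the flexible parameterization mentioned above); the theoretical counterparts come from the paper's formulas and are not reproduced here.

        import numpy as np

        def simulate_garch_m(omega, alpha, beta, lam, T, rng):
            # Simulate y_t = lam * h_t + sqrt(h_t) * z_t with GARCH(1,1) variance.
            h = np.empty(T); y = np.empty(T)
            h[0] = omega / (1.0 - alpha - beta)      # unconditional variance level
            for t in range(T):
                z = rng.standard_normal()
                y[t] = lam * h[t] + np.sqrt(h[t]) * z
                if t + 1 < T:
                    h[t + 1] = omega + alpha * (y[t] - lam * h[t]) ** 2 + beta * h[t]
            return y

        def sample_autocov(y, max_lag):
            y = y - y.mean()
            return np.array([np.mean(y[k:] * y[:len(y) - k]) for k in range(max_lag + 1)])

        rng = np.random.default_rng(2)
        y = simulate_garch_m(0.05, 0.08, 0.90, 0.5, 5000, rng)
        print(sample_autocov(y, 5))       # compare with the theoretical autocovariances
        print(sample_autocov(y**2, 5))    # and likewise for the squares of the series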

    Finite-Sample Theory and Bias Correction of Maximum Likelihood Estimators in the EGARCH Model

    We derive analytical expressions of bias approximations for maximum likelihood (ML) and quasi-maximum likelihood (QML) estimators of the EGARCH(1,1) parameters that enable us to correct the bias of all estimators. The bias-correction mechanism is constructed under two methods that are analytically described. We also evaluate the residual bootstrapped estimator as a measure of performance. Monte Carlo simulations indicate that, for given sets of parameter values, the bias corrections work satisfactorily for all parameters. The proposed full-step estimator performs better than the classical one and is also faster than the bootstrap. The results can also be used to formulate the approximate Edgeworth distribution of the estimators.
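
    For context, the bootstrap benchmark against which the analytical corrections are compared follows the usual bias-correction recipe: re-estimate on bootstrap samples and subtract the estimated bias, theta_bc = 2*theta_hat - mean(theta*). The sketch below uses a simple iid resample and a placeholder estimator with a known small-sample bias; the paper's residual bootstrap instead resamples fitted residuals and rebuilds the series from the estimated EGARCH model.

        import numpy as np

        def bootstrap_bias_corrected(y, estimator, B, rng):
            # Generic bootstrap bias correction: theta_bc = 2*theta_hat - mean(theta*).
            theta_hat = estimator(y)
            boot = np.array([estimator(rng.choice(y, size=len(y), replace=True))
                             for _ in range(B)])
            return 2.0 * theta_hat - boot.mean()

        # Placeholder estimator with a known small-sample bias: the variance
        # estimate that divides by n rather than n - 1.
        biased_var = lambda y: np.mean((y - y.mean()) ** 2)

        rng = np.random.default_rng(3)
        y = rng.standard_normal(30)
        print(biased_var(y), bootstrap_bias_corrected(y, biased_var, 999, rng))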

    Estimation and Properties of a Time-Varying GQARCH(1,1)-M Model

    Time-varying GARCH-M models are commonly used in econometrics and financial economics, yet the recursive nature of the conditional variance makes exact likelihood analysis of these models computationally infeasible. This paper outlines the issues and suggests employing a Markov chain Monte Carlo algorithm that allows the calculation of a classical estimator via the simulated EM algorithm, or of a simulated Bayesian solution, in only O(T) computational operations, where T is the sample size. Furthermore, the theoretical dynamic properties of a time-varying GQARCH(1,1)-M model are derived. We discuss them and apply the suggested Bayesian estimation to three major stock markets.
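
    To make the model concrete, the following sketch simulates a constant-parameter GQARCH(1,1)-M process of the form y_t = mu + lambda*h_t + e_t, e_t = sqrt(h_t)*z_t, h_t = omega + alpha*(e_{t-1} - gamma)^2 + beta*h_{t-1}. The time-varying-parameter extension and the MCMC/simulated-EM estimation of the paper are not reproduced, and the parameter values are illustrative.

        import numpy as np

        def simulate_gqarch_m(mu, lam, omega, alpha, gamma, beta, T, rng):
            # GQARCH(1,1)-M: the asymmetry parameter gamma shifts the lagged
            # shock inside the variance recursion.
            y = np.empty(T); h = np.empty(T)
            h[0] = (omega + alpha * gamma**2) / (1.0 - alpha - beta)  # unconditional level
            e_prev = 0.0
            for t in range(T):
                if t > 0:
                    h[t] = omega + alpha * (e_prev - gamma) ** 2 + beta * h[t - 1]
                e = np.sqrt(h[t]) * rng.standard_normal()
                y[t] = mu + lam * h[t] + e
                e_prev = e
            return y, h

        rng = np.random.default_rng(4)
        y, h = simulate_gqarch_m(0.0, 0.1, 0.05, 0.08, 0.2, 0.85, 2000, rng)
        print(y[:5], h[:5])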

    An EM Algorithm for Conditionally Heteroscedastic Factor Models
