    Bias Correction of ML and QML Estimators in the EGARCH(1,1) Model

    In this paper we derive bias approximations for the Maximum Likelihood (ML) and Quasi-Maximum Likelihood (QML) estimators of the EGARCH(1,1) parameters, and we check our theoretical results through simulations. With the approximate bias expressions up to O(1/T), we are then able to correct the bias of all estimators. To this end, a Monte Carlo exercise is conducted and the results are presented and discussed. We conclude that, for the given sets of parameter values, the bias correction works satisfactorily for all parameters. The bias expressions can also be used to formulate the approximate Edgeworth distribution of the estimators.
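    The abstract's scheme — subtract an estimated O(1/T) bias term from the estimator, then verify by Monte Carlo — can be sketched on a toy case where the bias is known in closed form. The EGARCH(1,1) bias expressions themselves are not reproduced here; the Gaussian variance MLE (bias −σ²/T) stands in as an illustrative example.

```python
import numpy as np

rng = np.random.default_rng(0)

def mle_variance(x):
    # ML estimator of the variance: divides by T, biased downward by sigma^2/T
    return np.mean((x - x.mean()) ** 2)

def bias_corrected_variance(x):
    # Analytical O(1/T) correction: theta_tilde = theta_hat - b(theta_hat)/T,
    # with b(theta) = -theta for the Gaussian variance MLE
    T = len(x)
    theta_hat = mle_variance(x)
    return theta_hat - (-theta_hat) / T   # = theta_hat * (1 + 1/T)

# Monte Carlo check, mirroring the paper's simulation exercise in spirit
T, reps, sigma2 = 50, 20000, 4.0
raw = np.empty(reps)
corr = np.empty(reps)
for r in range(reps):
    x = rng.normal(0.0, np.sqrt(sigma2), T)
    raw[r] = mle_variance(x)
    corr[r] = bias_corrected_variance(x)

print(abs(raw.mean() - sigma2))   # ~ sigma2/T = 0.08 before correction
print(abs(corr.mean() - sigma2))  # an order of magnitude smaller after correction
```

The residual bias of the corrected estimator is O(1/T²), which is why the correction does not remove the bias exactly but shrinks it sharply.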

    Higher Order Bias Correcting Moment Equation for M-Estimation and its Higher Order Efficiency

    This paper studies an alternative bias correction for the M-estimator, obtained by correcting the moment equation in the spirit of Firth (1993). In particular, the paper compares the stochastic expansions of the analytically bias-corrected estimator and the alternative estimator, and finds that the third-order stochastic expansions of the two are identical. This implies that, at least in terms of the third-order stochastic expansion, we cannot improve on the simple one-step bias correction by instead correcting the moment equation. Though the result is derived for a fixed number of parameters, the intuition may extend to the analytical bias correction of panel data models with individual-specific effects. Since M-estimation nests many kinds of estimators, including IV, 2SLS, MLE, GMM, and GEL, this is a rather strong result. Keywords: third-order stochastic expansion, bias correction, M-estimation.

    Bias correction in multivariate extremes

    The estimation of the extremal dependence structure is spoiled by bias, which increases with the number of observations used in the estimation. Already studied in the univariate setting, the bias correction procedure is extended in this paper to the multivariate framework. New families of estimators of the stable tail dependence function are obtained. They are asymptotically unbiased versions of the empirical estimator introduced by Huang [Statistics of bivariate extremes (1992) Erasmus Univ.]. Since the new estimators behave regularly with respect to the number of observations, aggregated versions can be deduced, so that the choice of the threshold is substantially simplified. An extensive simulation study is provided, as well as an application to real data. Published at http://dx.doi.org/10.1214/14-AOS1305 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
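    The baseline object here is the empirical (Huang-type) estimator of the stable tail dependence function, whose bias the paper corrects. A minimal sketch of one common rank-based variant of that baseline estimator (the paper's bias-corrected families are not reproduced):

```python
import numpy as np

def empirical_stdf(data, k, x):
    """Huang-type empirical stable tail dependence function.

    data : (n, d) array of observations
    k    : number of tail order statistics used (the threshold choice the
           paper's aggregated estimators simplify)
    x    : (d,) evaluation point
    Variant used: l_hat(x) = (1/k) * #{ i : R_ij > n + 1/2 - k*x_j for some j }
    """
    n, d = data.shape
    ranks = data.argsort(axis=0).argsort(axis=0) + 1  # componentwise ranks 1..n
    exceed = ranks > (n + 0.5 - k * np.asarray(x))    # (n, d) booleans
    return exceed.any(axis=1).mean() * n / k

# Sanity check: for independent components, l(1, 1) = 2 in the limit
rng = np.random.default_rng(2)
n, k = 20000, 500
indep = rng.standard_normal((n, 2))
val = empirical_stdf(indep, k, [1.0, 1.0])
print(val)  # close to 2 (finite-k estimate is near 2 - k/n)
```

For completely dependent components the same function would return a value near max(x₁, x₂) = 1, the other boundary of the stable tail dependence function.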

    Bias Reduction of Long Memory Parameter Estimators via the Pre-filtered Sieve Bootstrap

    This paper investigates bootstrap-based bias correction of semi-parametric estimators of the long memory parameter in fractionally integrated processes. The re-sampling method applies the sieve bootstrap to data pre-filtered by a preliminary semi-parametric estimate of the long memory parameter. Theoretical justification is provided for using these bootstrap techniques to bias-adjust log-periodogram and semi-parametric local Whittle estimators of the memory parameter. Simulation evidence comparing the performance of the bootstrap bias correction with analytical bias correction techniques is also presented. The bootstrap method is shown to produce notable bias reductions, in particular when applied to an estimator for which analytical adjustments have already been made. For a reasonably large sample size, the empirical coverage of confidence intervals based on the bias-adjusted estimators is very close to the nominal level, more so than for the comparable analytically adjusted estimators. The precision of inference (as measured by interval length) is also greater when the bootstrap, rather than an analytical adjustment, is used to correct the bias.
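    The paper's pre-filtered sieve bootstrap handles dependent, fractionally integrated data; the bootstrap bias-correction principle underneath it, though, can be sketched on i.i.d. data. The bootstrap estimates the bias as the mean of resampled estimates minus the original estimate, giving the corrected estimator θ_bc = 2·θ̂ − mean(θ*). The target below (exp of a sample mean, biased upward) is an illustrative stand-in, not the memory-parameter estimator.

```python
import numpy as np

rng = np.random.default_rng(3)
T, B, reps = 25, 200, 5000

raw = np.empty(reps)
bc = np.empty(reps)
for r in range(reps):
    x = rng.standard_normal(T)        # mu = 0, so the target exp(mu) = 1
    theta_hat = np.exp(x.mean())      # biased: E[theta_hat] = exp(1/(2T)) > 1
    # Bootstrap bias estimate: mean of resampled estimates minus theta_hat
    idx = rng.integers(0, T, size=(B, T))
    boot = np.exp(x[idx].mean(axis=1))
    bc[r] = 2.0 * theta_hat - boot.mean()  # theta_bc = theta_hat - bias_hat
    raw[r] = theta_hat

print(raw.mean() - 1.0)  # ~ +0.02 upward bias
print(bc.mean() - 1.0)   # close to zero after bootstrap correction
```

The sieve bootstrap replaces the i.i.d. resampling step with resampling of residuals from a long autoregressive approximation, which is what makes the scheme valid for the dependent processes studied in the paper.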

    The Risk and Return of Venture Capital

    This paper measures the mean, standard deviation, alpha and beta of venture capital investments, using a maximum likelihood estimate that corrects for selection bias. Since firms go public when they have achieved a good return, estimates that do not correct for selection bias are optimistic. The selection bias correction neatly accounts for log returns. Without a selection bias correction, I find a mean log return of about 100% and a log CAPM intercept of about 90%. With the selection bias correction, I find a mean log return of about 7% with a -2% intercept. However, returns are very volatile, with standard deviation near 100%. Therefore, arithmetic average returns and intercepts are much higher than geometric averages. The selection bias correction attenuates but does not eliminate high arithmetic average returns. Without a selection bias correction, I find an arithmetic average return of around 700% and a CAPM alpha of nearly 500%. With the selection bias correction, I find arithmetic average returns of about 53% and CAPM alpha of about 45%. Second, third, and fourth rounds of financing are less risky. They have progressively lower volatility, and therefore lower arithmetic average returns. The betas of successive rounds also decline dramatically from near 1 for the first round to near zero for fourth rounds. The maximum likelihood estimate matches many features of the data, in particular the pattern of IPO and exit as a function of project age, and the fact that return distributions are stable across horizons.
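    The core idea — returns are observed only when they are good, so the naive average is optimistic, and a likelihood that models the selection rule removes the bias — can be sketched in a deliberately simplified, hypothetical setup: log returns are N(μ, 1) but observed only above a threshold c (the paper's actual likelihood models IPO/exit dynamics as a function of project age and is much richer).

```python
import numpy as np
from math import erf, sqrt, log, pi

rng = np.random.default_rng(4)

# Hypothetical selection rule: a return is observed only if it exceeds c
mu_true, c = 0.0, 0.5
draws = rng.normal(mu_true, 1.0, 10000)
r = draws[draws > c]                      # the selected sample only

naive = r.mean()                          # optimistic, as the abstract notes

def trunc_loglik(mu):
    # Log-likelihood of N(mu, 1) truncated below at c: each observed return
    # contributes log phi(r - mu) - log P(r > c | mu)
    n = len(r)
    Phi_c = 0.5 * (1.0 + erf((c - mu) / sqrt(2.0)))   # P(r <= c)
    return (-0.5 * np.sum((r - mu) ** 2)
            - 0.5 * n * log(2.0 * pi)
            - n * log(1.0 - Phi_c))

# Selection-corrected ML estimate via a simple grid search
grid = np.linspace(-1.0, 1.0, 2001)
mle = grid[np.argmax([trunc_loglik(m) for m in grid])]

print(naive)  # ~ 1.14: badly biased upward by selection
print(mle)    # close to mu_true = 0 once selection is modeled
```

This mirrors the abstract's headline numbers in miniature: the uncorrected mean looks spectacular purely because of the selection rule, and the corrected estimate is far more modest.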