
    Comparison of two methods in estimating standard error of simulated moments estimators for generalized linear mixed models

    We consider the standard error of the method of simulated moments (MSM) estimator for generalized linear mixed models (GLMM). The parametric bootstrap (PB) has been used to estimate the covariance matrix, with the parameter estimates used to generate the simulated moments. To avoid the bias introduced by estimating the parameters and to handle correlated observations, Lu (2012) proposed a multi-stage block nonparametric bootstrap to estimate the standard errors. In this research, we compare PB and nonparametric bootstrap (NPB) methods for estimating the standard errors of MSM estimators for GLMM. Simulation results show that when the group size is large, NPB and PB perform similarly; when the group size is medium, NPB performs better than PB in estimating the mean. A data application on the productivity of plantation roses illustrates the methods discussed in this paper. The application finds that the person caring for the roses is associated with the productivity of the rose beds. Furthermore, we conducted an initial study applying random forests to predict the productivity of the rose beds.
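    The parametric bootstrap step described above can be sketched in Python as follows. This is a minimal illustration, not the authors' code: the fitted parameters are used to simulate fresh data sets, the estimator is recomputed on each, and the standard errors are read off the spread of the bootstrap estimates. The callables simulate_data and estimate_msm are hypothetical stand-ins for a GLMM data generator and an MSM fitting routine.

    import numpy as np

    def parametric_bootstrap_se(theta_hat, simulate_data, estimate_msm,
                                n_boot=200, seed=0):
        # Parametric bootstrap: simulate from the fitted model, re-estimate,
        # and report the standard deviation of the bootstrap estimates.
        rng = np.random.default_rng(seed)
        boot_estimates = []
        for _ in range(n_boot):
            data_b = simulate_data(theta_hat, rng)   # parametric resample
            boot_estimates.append(estimate_msm(data_b))
        boot_estimates = np.asarray(boot_estimates)
        return boot_estimates.std(axis=0, ddof=1)    # bootstrap standard errors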

    Information Theoretic Alternatives to Traditional Simultaneous Equations Estimators in the Presence of Heteroskedasticity

    Finite-sample properties of information theoretic estimators of the simultaneous equations model, including maximum empirical likelihood, maximum empirical exponential likelihood, and maximum log Euclidean likelihood, are examined in the presence of selected forms of heteroskedasticity. Extensive Monte Carlo experiments are used to compare the finite-sample performance of Wald, likelihood ratio, and Lagrange multiplier tests constructed from information theoretic estimators to those based on traditional generalized method of moments estimators.
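    As an illustration of one of the test statistics compared above, the following Python sketch computes a Wald test of a linear restriction R theta = r from an estimated parameter vector and its covariance matrix. It is an assumed example, not taken from the paper; theta_hat and cov_hat are presumed outputs of one of the information theoretic (or GMM) estimators.

    import numpy as np
    from scipy import stats

    def wald_test(theta_hat, cov_hat, R, r):
        # Wald statistic: quadratic form in (R theta_hat - r) weighted by the
        # inverse of R cov_hat R'; asymptotically chi-squared with rank(R) dof.
        diff = R @ theta_hat - r
        stat = diff @ np.linalg.solve(R @ cov_hat @ R.T, diff)
        dof = R.shape[0]
        return stat, stats.chi2.sf(stat, dof)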

    Higher Order Improvements for Approximate Estimators

    Many modern estimation methods in econometrics approximate an objective function, through simulation or discretization for instance. The resulting "approximate" estimator is often biased, and it always incurs an efficiency loss. Here we propose three methods to improve the properties of such approximate estimators at a low computational cost. The first two methods correct the objective function so as to remove the leading term of the bias due to the approximation. One variant provides an analytical bias adjustment, but it only works for estimators based on stochastic approximators, such as simulation-based estimators. Our second bias correction is based on ideas from the resampling literature; it eliminates the leading bias term for non-stochastic as well as stochastic approximators. Finally, we propose an iterative procedure in which Newton-Raphson (NR) iterations are applied using a much finer degree of approximation. The NR step removes some or all of the additional bias and variance of the initial approximate estimator. A Monte Carlo simulation on the mixed logit model shows that noticeable improvements can be obtained rather cheaply.
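    The Newton-Raphson refinement idea can be sketched in Python as follows. This is an assumed illustration rather than the authors' implementation: starting from a cheap approximate estimate, one or a few NR steps are taken on a much finer approximation of the objective, with the gradient and Hessian obtained by central finite differences. objective_fine is a hypothetical user-supplied function evaluating the finer objective.

    import numpy as np

    def newton_raphson_refine(theta0, objective_fine, n_steps=1, eps=1e-5):
        # Newton-Raphson steps on the finely approximated objective, using
        # central finite differences for the gradient and Hessian.
        theta = np.asarray(theta0, dtype=float)
        k = theta.size
        for _ in range(n_steps):
            grad = np.zeros(k)
            hess = np.zeros((k, k))
            for i in range(k):
                ei = np.zeros(k); ei[i] = eps
                grad[i] = (objective_fine(theta + ei)
                           - objective_fine(theta - ei)) / (2 * eps)
                for j in range(k):
                    ej = np.zeros(k); ej[j] = eps
                    hess[i, j] = (objective_fine(theta + ei + ej)
                                  - objective_fine(theta + ei - ej)
                                  - objective_fine(theta - ei + ej)
                                  + objective_fine(theta - ei - ej)) / (4 * eps ** 2)
            theta = theta - np.linalg.solve(hess, grad)  # NR update
        return theta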