Regression-Based Methods for Using Control and Antithetic Variates in Monte Carlo Experiments
Methods based on linear regression provide a very easy way to use the information in control and antithetic variates to improve the efficiency with which certain features of the distributions of estimators and test statistics are estimated in Monte Carlo experiments. We propose a new technique that allows these methods to be used when the quantities of interest are quantiles. Ways to obtain approximately optimal control variates in many cases of interest are also proposed. These methods seem to work well in practice, and can greatly reduce the number of replications required to obtain a given level of accuracy.
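The paper's data are not reproduced here, but the regression-based control-variate idea can be sketched in a minimal, generic Monte Carlo example: estimating E[exp(U)] for U ~ Uniform(0, 1), using U itself (whose mean 1/2 is known exactly) as the control variate. All numbers and names below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Target: E[exp(U)] = e - 1 for U ~ Uniform(0, 1).
u = rng.uniform(size=n)
y = np.exp(u)

# Control variate: U itself, with known mean 1/2.
c = u
c_mean_known = 0.5

# Regression-based coefficient: beta = Cov(Y, C) / Var(C),
# i.e. the OLS slope of Y on C.
beta = np.cov(y, c)[0, 1] / np.var(c, ddof=1)

naive = y.mean()
adjusted = naive - beta * (c.mean() - c_mean_known)

# The adjusted estimator's variance shrinks by the factor 1 - corr(Y, C)^2.
var_ratio = np.var(y - beta * c, ddof=1) / np.var(y, ddof=1)
print(naive, adjusted, var_ratio)
```

Because corr(exp(U), U) is close to 1 here, the residual-variance ratio is small, which is exactly the sense in which such methods "greatly reduce the number of replications required".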
General Semiparametric Shared Frailty Model Estimation and Simulation with frailtySurv
The R package frailtySurv for simulating and fitting semi-parametric shared frailty models is introduced. Package frailtySurv implements semi-parametric consistent estimators for a variety of frailty distributions, including gamma, log-normal, inverse Gaussian, and power variance function, and provides consistent estimators of the standard errors of the parameter estimates. The parameter estimates are asymptotically normally distributed, so statistical inference based on the package's results, such as hypothesis testing and confidence intervals, can be performed using the normal distribution. Extensive simulations demonstrate the flexibility and correct implementation of the estimators. Two case studies performed with publicly available datasets demonstrate the applicability of the package: in the Diabetic Retinopathy Study, the onset of blindness is clustered by patient, and in a large hard-drive failure dataset, failure times are thought to be clustered by hard-drive manufacturer and model.
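frailtySurv itself is an R package; as a language-neutral sketch (not the package's API), the following Python snippet shows how clustered event times arise under the shared gamma frailty model the abstract describes: a cluster-level gamma frailty with mean 1 multiplies each member's hazard, here with an exponential baseline hazard and independent censoring. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

theta = 0.5        # frailty variance (gamma frailty with mean 1, variance theta)
n_clusters = 2000
cluster_size = 2
base_rate = 1.0    # constant baseline hazard (exponential baseline)

# Shared gamma frailty: w_i ~ Gamma(shape=1/theta, scale=theta),
# so E[w] = 1 and Var[w] = theta.
w = rng.gamma(shape=1.0 / theta, scale=theta, size=n_clusters)

# Conditional on w_i, each cluster member's hazard is base_rate * w_i,
# so its event time is exponential with that rate.
times = rng.exponential(scale=1.0 / (base_rate * w[:, None]),
                        size=(n_clusters, cluster_size))

# Independent exponential censoring; status = 1 marks an observed event.
cens = rng.exponential(scale=2.0, size=times.shape)
observed = np.minimum(times, cens)
status = (times <= cens).astype(int)
```

Sharing one frailty per cluster induces positive within-cluster dependence; for the gamma frailty this dependence corresponds to a Kendall's tau of theta / (theta + 2), i.e. 0.2 with the values above.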
The effects of estimation of censoring, truncation, transformation and partial data vectors
The purpose of this research was to address statistical problems concerning the estimation of distributions for predicting and measuring assembly performance as it appears in biological and physical situations. Various statistical procedures were proposed for problems of this sort, that is, to produce the statistical distributions of the outcomes of biological and physical situations that employ characteristics measured on constituent parts. The techniques are described.
Selection Procedures for Order Statistics in Empirical Economic Studies
In a presentation to the American Economics Association, McCloskey (1998) argued that "statistical significance is bankrupt" and that economists' time would be "better spent on finding out How Big Is Big". This brief survey is devoted to methods of determining "How Big Is Big". It is concerned with a rich body of literature called selection procedures: statistical methods that allow inference on order statistics and enable empiricists to attach confidence levels to statements about the relative magnitudes of population parameters (i.e., How Big Is Big). Despite their long history and common use in other fields, selection procedures have gone relatively unnoticed in economics, and their use is perhaps long overdue. The purpose of this paper is to provide a brief survey of selection procedures as an introduction for economists and econometricians and to illustrate their use in economics by discussing a few potential applications. Both simulated and empirical examples are provided.
Keywords: ranking and selection, multiple comparisons, hypothesis testing
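The abstract does not commit to a particular procedure, but one classical example of the literature it surveys is Gupta's subset-selection rule for normal means with known common variance: retain every population whose sample mean is within h * sigma * sqrt(2/n) of the largest sample mean, where h is calibrated so the best population is retained with probability at least P*. The sketch below is an illustrative implementation under those assumptions; all numbers (means, n, P*) are invented for the example.

```python
import math
import numpy as np

def gupta_h(k, p_star):
    """Solve  integral of Phi(z + h)^(k-1) * phi(z) dz = p_star  for h by bisection."""
    grid = np.linspace(-8.0, 8.0, 4001)
    dz = grid[1] - grid[0]
    phi = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)
    erf_vec = np.vectorize(math.erf)
    Phi = lambda t: 0.5 * (1.0 + erf_vec(t / np.sqrt(2.0)))

    def coverage(h):
        return float((Phi(grid + h) ** (k - 1) * phi).sum() * dz)

    lo, hi = 0.0, 10.0
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if coverage(mid) < p_star else (lo, mid)
    return (lo + hi) / 2

rng = np.random.default_rng(2)
k, n, sigma = 5, 50, 1.0
true_means = np.array([0.0, 0.1, 0.2, 0.3, 0.8])   # illustrative; population 4 is best
x = rng.normal(true_means, sigma, size=(n, k))
xbar = x.mean(axis=0)

h = gupta_h(k, p_star=0.95)
d = h * sigma * np.sqrt(2.0 / n)
subset = np.where(xbar >= xbar.max() - d)[0]       # populations retained as "possibly best"
print(h, subset)
```

The retained subset is a confidence statement about "How Big Is Big": with probability at least P*, the truly best population is among those selected.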
On the Efficient Use of the Informational Content of Estimating Equations: Implied Probabilities and Euclidean Empirical Likelihood
A number of information-theoretic alternatives to GMM have recently been proposed in the literature. For practical use and general interpretation, the main drawback of these alternatives, particularly in the case of conditional moment restrictions, is that they rely on high-dimensional convex optimization programs. The main contribution of this paper is to analyze the informational content of estimating equations within the unified framework of least squares projections. Improved inference by control variables, shrinkage of implied probabilities, and information-theoretic interpretations of continuously updated GMM are discussed in the two cases of unconditional and conditional moment restrictions.
Keywords: empirical likelihood, continuously updated GMM, information, control variables, semiparametric efficiency, higher order asymptotics, minimum chi-square
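One concrete piece of this machinery can be shown without any high-dimensional program: under the Euclidean (quadratic) version of empirical likelihood, the implied probabilities have a closed form obtainable by least squares. They sum to one and re-weight the sample so the moment condition holds exactly. The data-generating process and candidate parameter value below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500

# Moment condition E[g(X, mu)] = 0 with g(x, mu) = x - mu (just-identified mean).
# Evaluating g at a candidate mu away from the sample mean makes the
# re-centering done by the implied probabilities visible.
x = rng.normal(loc=1.0, scale=2.0, size=n)
mu = 0.9                       # hypothetical candidate parameter value
g = (x - mu)[:, None]          # n x 1 matrix of moment contributions

gbar = g.mean(axis=0)
G = g - gbar                   # centered contributions
S = G.T @ G / n                # centered second-moment matrix

# Euclidean empirical likelihood implied probabilities (closed form):
#   pi_i = (1/n) * (1 - (g_i - gbar)' S^{-1} gbar).
# Note these can be negative in finite samples, a known feature of the
# Euclidean variant.
pi = (1.0 - G @ np.linalg.solve(S, gbar)) / n

print(pi.sum(), pi @ g[:, 0])
```

A quick check of the algebra: since the centered contributions sum to zero, the pi_i sum to one; and because (1/n) * sum of g_i (g_i - gbar)' equals S, the re-weighted moment sum pi' g is exactly zero, which is the least-squares-projection interpretation the paper develops.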