
    Composite Likelihood Inference by Nonparametric Saddlepoint Tests

    The class of composite likelihood functions provides a flexible and powerful toolkit for carrying out approximate inference in complex statistical models when the full likelihood is either impossible to specify or infeasible to compute. However, the strength of the composite likelihood approach is dimmed when testing hypotheses about a multidimensional parameter, because the finite-sample behavior of likelihood ratio, Wald, and score-type test statistics is tied to the Godambe information matrix. Consequently, inaccurate estimates of the Godambe information translate into inaccurate p-values. In this paper it is shown how accurate inference can be obtained by using a fully nonparametric saddlepoint test statistic derived from the composite score functions. The proposed statistic is asymptotically chi-square distributed up to a relative error of second order and does not depend on the Godambe information. The validity of the method is demonstrated through simulation studies.
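    The dependence on the Godambe information mentioned above can be made concrete with a small sketch. The following toy illustration (not the paper's saddlepoint method; the model and all names are illustrative) estimates the Godambe sandwich matrix G = H K⁻¹ H from per-observation composite score contributions, using a working-independence composite likelihood for correlated bivariate normal data with unit variances.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Correlated bivariate data; the working (composite) model ignores the correlation.
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, size=n)

mu_hat = x.mean(axis=0)       # composite (independence) MLE of the two means
scores = x - mu_hat           # per-observation composite score contributions
H = np.eye(2)                 # sensitivity matrix (exact under unit working variances)
K = scores.T @ scores / n     # variability matrix; this is where the dependence shows up
G = H @ np.linalg.inv(K) @ H  # Godambe (sandwich) information estimate
```

    Because the variability matrix K must be estimated from the data, estimation error in K propagates directly into G and hence into Wald- and score-type p-values, which is the inaccuracy the nonparametric saddlepoint statistic is designed to sidestep.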

    Minimum scoring rule inference

    Proper scoring rules are methods for encouraging honest assessment of probability distributions. Just as with the likelihood, a proper scoring rule can be used to supply an unbiased estimating equation for any statistical model, and the theory of such equations can be applied to understand the properties of the associated estimator. In this paper we develop some basic scoring rule estimation theory and explore robustness and interval estimation properties by means of theory and simulations.
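    A minimum scoring rule estimator simply minimizes the empirical average of a proper scoring rule over the model's parameters. As a minimal sketch (one possible scoring rule, not necessarily the ones studied in the paper), the following fits a normal model by minimizing the mean continuous ranked probability score (CRPS), using its known closed form for Gaussian forecasts.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def mean_crps_normal(params, x):
    """Mean CRPS of a N(mu, sigma^2) forecast for observations x (closed form)."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)  # parameterize on the log scale to keep sigma > 0
    z = (x - mu) / sigma
    crps = sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))
    return crps.mean()

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.5, size=2000)
res = minimize(mean_crps_normal, x0=[0.0, 0.0], args=(x,), method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```

    Because the CRPS is a proper scoring rule, its expected value is minimized at the true distribution, so the estimator targets the true parameters just as the likelihood score does, while trading some efficiency for robustness.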

    On the Properties of Simulation-based Estimators in High Dimensions

    Considering the increasing size of available data, the need for statistical methods that control the finite-sample bias is growing. This is mainly due to the frequent settings where the number of variables is large and allowed to increase with the sample size, causing standard inferential procedures to incur significant losses in performance. Moreover, the complexity of statistical models is also increasing, entailing important computational challenges in constructing new estimators or in implementing classical ones. A trade-off between numerical complexity and statistical properties is often accepted. However, numerically efficient estimators that are altogether unbiased, consistent, and asymptotically normal in high-dimensional problems would generally be ideal. In this paper, we set out a general framework from which such estimators can easily be derived for wide classes of models. This framework is based on the concepts that underlie simulation-based estimation methods such as indirect inference. The approach allows various extensions compared to previous results, as it accommodates possibly inconsistent initial estimators and is applicable to discrete models and/or models with a large number of parameters. We consider an algorithm, namely the Iterative Bootstrap (IB), to efficiently compute simulation-based estimators, and we establish its convergence properties. Within this framework we also prove the properties of simulation-based estimators, more specifically their unbiasedness, consistency, and asymptotic normality when the number of parameters is allowed to increase with the sample size. An important implication of the proposed approach is therefore that it yields estimators that are unbiased in finite samples. Finally, we study this approach when applied to three common models, namely logistic regression, negative binomial regression, and lasso regression.
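    The Iterative Bootstrap update described above can be sketched in a deliberately simple case: correcting the downward bias of the variance MLE. The general step adds to the current parameter value the gap between the estimator on the data and its average over samples simulated at the current value; the toy model, constants, and names below are illustrative, not the paper's code.

```python
import numpy as np

def biased_var(x):
    """MLE of the variance: divides by n, so it is biased by a factor (n-1)/n."""
    return np.mean((x - x.mean()) ** 2)

rng = np.random.default_rng(2)
n, H, true_var = 10, 500, 4.0
x = rng.normal(0.0, np.sqrt(true_var), size=n)
pi_hat = biased_var(x)  # biased initial estimate on the observed data

theta = pi_hat          # IB starts at the biased estimate
for _ in range(30):
    sims = rng.normal(0.0, np.sqrt(theta), size=(H, n))
    pi_sim = np.mean([biased_var(s) for s in sims])
    theta = theta + pi_hat - pi_sim  # iterative bootstrap update
```

    The iteration is a strong contraction here, and its fixed point theta satisfies E[biased_var] evaluated at theta equal to pi_hat, i.e. theta ≈ pi_hat · n/(n−1): exactly the usual finite-sample bias correction, but obtained by simulation rather than by an analytic bias formula.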

    An overview of robust methods in medical research

    Robust statistics is an extension of classical parametric statistics that explicitly takes into account the fact that the parametric models assumed by researchers are only approximate. In this paper we review and outline how robust inferential procedures may routinely be applied in practice in biomedical research. Numerical illustrations are given for the t-test, regression models, logistic regression, survival analysis, and ROC curves, showing that robust methods are often more appropriate than standard procedures.
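    As one concrete example of the kind of robust procedure surveyed, a Huber M-estimator of location downweights outlying observations instead of averaging them in fully. This sketch (illustrative, not the paper's code) uses iteratively reweighted averaging with a MAD scale estimate.

```python
import numpy as np

def huber_mean(x, c=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted averaging."""
    mu = np.median(x)                                      # robust starting point
    scale = np.median(np.abs(x - mu)) / 0.6745             # MAD scale estimate
    for _ in range(max_iter):
        r = (x - mu) / scale
        # Observations within c standard residuals get full weight; outliers
        # get weight c/|r|, shrinking their influence toward a bounded value.
        w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0.0, 1.0, 200), [100.0, 120.0]])  # two gross outliers
mu_robust = huber_mean(x)
```

    On this contaminated sample the ordinary mean is dragged well away from zero by the two outliers, while the Huber estimate stays close to the bulk of the data, which is the behavior that makes such estimators attractive in medical data with recording errors or extreme cases.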

    Do Monetary Incentives and Chained Questions Affect the Validity of Risk Estimates Elicited via the Exchangeability Method? An Experimental Investigation

    Using a laboratory experiment, we investigate the validity of stated risks elicited via the Exchangeability Method (EM) by defining a valuation method based on de Finetti’s notion of coherence. The reliability of risk estimates elicited through the EM has been theoretically questioned because the chained structure of the game, in which each question depends on the respondent’s answer to the previous one, is thought to potentially undermine the incentive compatibility of the elicitation mechanism even when real monetary incentives are provided. Our results suggest that the superiority of real monetary incentives is not evident when people are presented with chained experimental designs.
    Keywords: lab experiment, risk elicitation, exchangeability, validity, pesticide residue