
    Likelihood, Replicability, and Robbins' Confidence Sequences

    The widely claimed replicability crisis in science may lead to revised standards of significance. The customary frequentist confidence intervals, calibrated through hypothetical repetitions of the experiment that is supposed to have produced the data at hand, rely on a feeble concept of replicability. In particular, contradictory conclusions may be reached when a substantial enlargement of the study is undertaken. To redefine statistical confidence in such a way that inferential conclusions are non-contradictory, with large enough probability, under enlargements of the sample, we give a new reading of a proposal dating back to the 1960s, namely, Robbins' confidence sequences. By directly bounding the probability of reaching, in the future, conclusions that contradict the current ones, Robbins' confidence sequences ensure a clear-cut form of replicability when inference is performed on accumulating data. Their main frequentist property is easy to understand and to prove. We show that Robbins' confidence sequences may be justified under various views of inference: they are likelihood-based, can incorporate prior information and obey the strong likelihood principle. They are easy to compute, even when inference is on a parameter of interest, especially using a closed-form approximation from normal asymptotic theory.
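
    The computation mentioned above can be made concrete. Below is a minimal sketch, assuming i.i.d. N(mu, 1) data and the classical normal-mixture construction from Robbins (1970); the mixture variance rho and the function name are illustrative choices, not taken from the paper. The defining property is that, with probability at least 1 - alpha, the intervals for all sample sizes simultaneously cover mu, so enlarging the study cannot contradict earlier conclusions.

```python
import numpy as np

def robbins_cs_halfwidth(n, alpha=0.05, rho=1.0):
    """Half-width of a normal-mixture confidence sequence for the mean of
    i.i.d. N(mu, 1) data. Ville's inequality applied to the mixture
    martingale M_n = (1 + n*rho)^(-1/2) * exp(S_n^2 / (2*(n + 1/rho)))
    bounds by alpha the probability that |S_n| ever exceeds
    sqrt(2*(n + 1/rho) * log(sqrt(1 + n*rho)/alpha))."""
    return np.sqrt(2.0 * (n + 1.0 / rho) * np.log(np.sqrt(1.0 + n * rho) / alpha)) / n

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=1000)
n = np.arange(1, x.size + 1)
xbar = np.cumsum(x) / n
half = robbins_cs_halfwidth(n)
# All 1000 intervals contain mu simultaneously with probability >= 0.95.
print(f"n=100:  {xbar[99]:+.3f} +/- {half[99]:.3f}")
print(f"n=1000: {xbar[999]:+.3f} +/- {half[999]:.3f}")
```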

    On the use of pseudo-likelihoods in Bayesian variable selection.

    In the presence of nuisance parameters, we discuss a one-parameter Bayesian analysis based on a pseudo-likelihood, assuming a default prior distribution for the parameter of interest only. Although this way of proceeding is not always orthodox from the Bayesian perspective, it is of interest to evaluate whether suitable pseudo-likelihoods may be proposed for Bayesian inference. Attention is focused on regression models, in particular on inference about a scalar regression coefficient in various multiple regression settings: scale and regression models with non-normal errors, non-linear normal heteroscedastic regression models, and log-linear models for count data with overdispersion. Some interesting conclusions emerge.
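
    As a concrete instance of the recipe (a sketch under assumed choices, not one of the paper's own examples): take a regression with Student-t errors, treat the slope as the parameter of interest, use its profile likelihood as the pseudo-likelihood, and combine it with a flat default prior on a grid.

```python
import numpy as np
from scipy import optimize, stats
from scipy.integrate import trapezoid

rng = np.random.default_rng(1)
n = 50
x = rng.uniform(-1, 1, n)
y = 1.0 + 2.0 * x + 0.5 * stats.t.rvs(df=4, size=n, random_state=rng)

def negloglik(nuis, psi):
    """Full negative log-likelihood; nuisance = (intercept, log scale)."""
    b0, logs = nuis
    return -np.sum(stats.t.logpdf((y - b0 - psi * x) / np.exp(logs), df=4) - logs)

def profile_loglik(psi):
    """Pseudo-likelihood for the slope: nuisance maximised out at fixed psi."""
    return -optimize.minimize(negloglik, x0=[0.0, 0.0], args=(psi,)).fun

# Pseudo-posterior: default (flat) prior times the profile likelihood.
psi = np.linspace(1.0, 3.0, 201)
logpost = np.array([profile_loglik(p) for p in psi])
post = np.exp(logpost - logpost.max())
post /= trapezoid(post, psi)
print("pseudo-posterior mean of the slope:", trapezoid(psi * post, psi))
```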

    A Neyman-Scott phenomenon in model discrimination.

    The aim of this paper is to show through simulation that a Neyman-Scott phenomenon may occur in discriminating among separate stratified models. We focus on models which are scale families in each stratum. We consider traditional model selection procedures, such as the Akaike and Takeuchi information criteria, together with procedures based on the marginal likelihood and its Laplace approximation. We perform two simulation studies. Results indicate that, when the sample size in each stratum is fixed and the number of strata increases, correct selection probabilities for traditional model selection criteria may approach zero. On the other hand, model selection based on exact or approximate marginal likelihoods, which exploit invariance, can behave far better.
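
    The design of such a simulation is easy to sketch. A minimal illustration follows, assuming normal vs. Laplace as the two stratum-wise scale-and-location families and AIC as the criterion; the true model, stratum scales, and sample sizes are arbitrary choices here, and the paper's actual settings may differ. The point to probe is how the correct-selection rate behaves as the number of strata k grows with the stratum size m fixed.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 4  # stratum size, held fixed while the number of strata k grows

def loglik_normal_hat(y):
    # Stratum-wise MLEs: sample mean and root mean squared deviation.
    s = np.sqrt(np.mean((y - y.mean()) ** 2))
    return np.sum(-0.5 * np.log(2 * np.pi) - np.log(s)
                  - (y - y.mean()) ** 2 / (2 * s * s))

def loglik_laplace_hat(y):
    # Stratum-wise MLEs: median and mean absolute deviation from it.
    mu = np.median(y)
    b = np.mean(np.abs(y - mu))
    return np.sum(-np.log(2 * b) - np.abs(y - mu) / b)

def correct_rate(k, nrep=200):
    """How often AIC prefers the true (normal) family. Both families carry
    the same number of parameters per stratum, so the AIC comparison
    reduces to comparing maximised log-likelihoods."""
    hits = 0
    for _ in range(nrep):
        strata = [rng.normal(0.0, 1.0 + i / k, size=m) for i in range(k)]
        ln = sum(loglik_normal_hat(y) for y in strata)
        ll = sum(loglik_laplace_hat(y) for y in strata)
        hits += ln > ll
    return hits / nrep

for k in (5, 50, 500):
    print(f"k={k:3d} strata: correct selection rate = {correct_rate(k):.2f}")
```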

    On a likelihood interpretation of adjusted profile likelihoods through refined predictive densities.

    In this paper, a second-order link is shown between adjusted profile likelihoods and refinements of the estimative predictive density. The result provides a new, straightforward interpretation of modified profile likelihoods, which complements results in Severini (1998a) and in Pace and Salvan (2006). Moreover, it outlines a form of second-order consistency between likelihood theory and prediction in frequentist inference.
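
    A standard textbook instance of the kind of adjustment in question (chosen here for illustration; it is not an example taken from the paper) is the normal variance with the mean as nuisance:

```latex
% Y_1,\dots,Y_n \sim N(\mu,\sigma^2), interest parameter \sigma^2.
% Profile log-likelihood, with \hat\mu = \bar y and
% \hat\sigma^2 = n^{-1}\sum_i (y_i - \bar y)^2:
\ell_P(\sigma^2) = -\frac{n}{2}\log\sigma^2 - \frac{n\hat\sigma^2}{2\sigma^2}.
% The modified profile log-likelihood replaces n by n-1 in the first term,
\ell_{MP}(\sigma^2) = -\frac{n-1}{2}\log\sigma^2 - \frac{n\hat\sigma^2}{2\sigma^2},
% so its maximiser is n\hat\sigma^2/(n-1), the usual unbiased estimator,
% rather than the downward-biased MLE \hat\sigma^2.
```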

    Efficient composite likelihood for a scalar parameter of interest

    For inference in complex models, composite likelihood combines genuine likelihoods based on low-dimensional portions of the data, with weights to be chosen. Optimal weights in composite likelihood may be searched for along different routes, leading to a solution only in scalar parameter models. Here, after briefly reviewing the main approaches, we show how to obtain the first-order optimal weights when using composite likelihood for inference on a scalar parameter in the presence of nuisance parameters. These weights depend on the true parameter value and need to be estimated. Under regularity conditions, the resulting likelihood ratio statistic has the standard asymptotic null distribution and improved local power. Simulation results in multivariate normal models show that estimation of the optimal weights maintains the standard approximate null distribution and produces a visible gain in power with respect to constant weights.
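
    To fix ideas, here is a minimal sketch of a weighted pairwise likelihood for a scalar parameter, the equicorrelation of an exchangeable multivariate normal (an illustrative model; the paper's first-order optimal weights, which depend on the true parameter and must be estimated, are replaced by constants here):

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(3)
q, n, rho_true = 5, 200, 0.4
Sigma = (1 - rho_true) * np.eye(q) + rho_true * np.ones((q, q))
Y = rng.multivariate_normal(np.zeros(q), Sigma, size=n)
pairs = [(i, j) for i in range(q) for j in range(i + 1, q)]

def neg_pairwise_loglik(rho, weights):
    """Weighted sum of genuine bivariate normal log-likelihoods,
    one per coordinate pair (unit variances, correlation rho)."""
    total = 0.0
    c = 1.0 - rho * rho
    for w, (i, j) in zip(weights, pairs):
        u, v = Y[:, i], Y[:, j]
        total += w * np.sum(-np.log(2 * np.pi) - 0.5 * np.log(c)
                            - (u * u - 2 * rho * u * v + v * v) / (2 * c))
    return -total

w = np.ones(len(pairs))  # constant weights; estimated optimal ones would go here
res = optimize.minimize_scalar(neg_pairwise_loglik, args=(w,),
                               bounds=(-0.2, 0.95), method="bounded")
print("pairwise-likelihood estimate of rho:", round(res.x, 3))
```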

    Likelihood theory, prediction, model selection: asymptotic connections.

    Plug-in estimation and corresponding refinements involving penalisation have been considered in various areas of parametric statistical inference. One major example is adjustment of the profile likelihood for inference in the presence of nuisance parameters. Another important setting is prediction, where improved estimative predictive densities have been recently developed. A third related setting is model selection, where information criteria based on penalisation of the maximised likelihood have been proposed, starting from the pioneering contribution of Akaike. The seminal contributions in the last setting predate those introducing the former two classes of procedures, and the pertinent portions of the literature seem to have evolved quite independently. The aim of this paper is to establish some simple asymptotic connections among these classes of procedures. In particular, all three kinds of penalisation involved can be viewed as bias corrections of plug-in estimates of theoretical target criteria which are shown to be very closely connected. As a by-product, we obtain adjusted profile likelihoods from optimal predictive densities. Links between adjusted procedures in likelihood theory and model selection procedures are also briefly explored through some simulation studies.
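
    The bias-correction view is easy to check numerically. A minimal sketch, assuming a normal model with p = 2 parameters (an illustrative choice): the maximised log-likelihood, used as a plug-in estimate of the same quantity on fresh data, is biased upward by roughly p, which is exactly the correction AIC applies on the log-likelihood scale.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, nrep = 30, 2, 5000  # N(mu, sigma^2): p = 2 estimated parameters

def fit(y):
    """MLEs and the maximised log-likelihood of the normal model."""
    mu, s2 = y.mean(), y.var()
    return mu, s2, -0.5 * n * (np.log(2 * np.pi * s2) + 1.0)

def loglik(y, mu, s2):
    return np.sum(-0.5 * np.log(2 * np.pi * s2) - (y - mu) ** 2 / (2 * s2))

gap = []
for _ in range(nrep):
    y, ynew = rng.normal(size=n), rng.normal(size=n)  # ynew: fresh replica
    mu, s2, lhat = fit(y)
    gap.append(lhat - loglik(ynew, mu, s2))  # in-sample minus out-of-sample
print("Monte Carlo bias of the plug-in estimate:", round(float(np.mean(gap)), 2))
print("AIC penalty on this scale: p =", p)
```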

    Parametric bootstrap inference for stratified models with high-dimensional nuisance specifications

    Inference about a scalar parameter of interest typically relies on the asymptotic normality of common likelihood pivots, such as the signed likelihood root and the score and Wald statistics. Nevertheless, the resulting inferential procedures are known to perform poorly when the dimension of the nuisance parameter is large relative to the sample size and when the information about the parameters is limited. In many such cases, the use of asymptotic normality of analytical modifications of the signed likelihood root is known to recover inferential performance. It is proved here that, in stratified models with stratum-specific nuisance parameters, parametric bootstrap of standard likelihood pivots yields inferences as accurate as those from analytical modifications of the signed likelihood root. We focus on the challenging case where the number of strata increases as fast as, or faster than, the stratum sample sizes. It is also shown that this equivalence holds regardless of whether constrained or unconstrained bootstrap is used.
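
    A minimal sketch of the constrained version of such a scheme, in a Neyman-Scott-style stratified normal model with the common variance as the parameter of interest (the model and all settings are illustrative assumptions, not the paper's simulations):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
k, m = 100, 3        # many strata, each very small
sigma2_0 = 1.0       # hypothesised value of the interest parameter

def signed_root(Y, sigma2):
    """Signed likelihood root r for sigma^2, stratum means profiled out."""
    N = Y.size
    s2hat = np.mean((Y - Y.mean(axis=1, keepdims=True)) ** 2)  # global MLE
    lp = lambda s2: -0.5 * N * (np.log(s2) + s2hat / s2)       # profile log-lik
    r2 = max(2.0 * (lp(s2hat) - lp(sigma2)), 0.0)
    return np.sign(s2hat - sigma2) * np.sqrt(r2)

mu = rng.normal(size=(k, 1))
Y = mu + rng.normal(scale=np.sqrt(sigma2_0), size=(k, m))
r_obs = signed_root(Y, sigma2_0)

# Constrained bootstrap: resample with sigma^2 fixed at the null value and
# the nuisance means at their constrained MLEs (here, the stratum averages).
muhat = Y.mean(axis=1, keepdims=True)
r_boot = np.array([
    signed_root(muhat + rng.normal(scale=np.sqrt(sigma2_0), size=(k, m)), sigma2_0)
    for _ in range(2000)
])
# The bootstrap distribution of r replaces its poor N(0,1) approximation.
print(f"normal-approximation p-value: {norm.cdf(r_obs):.3f}")
print(f"bootstrap p-value:            {np.mean(r_boot <= r_obs):.3f}")
```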