    Phi-divergence statistics for the likelihood ratio order: an approach based on log-linear models

    When some treatments are ordered according to the categories of an ordinal categorical variable (e.g., extent of side effects) in a monotone order, one might be interested in knowing whether the treatments are equally effective or not. One way to do this is to test whether the likelihood ratio order is strictly verified. A method based on log-linear models is derived for statistical inference, and phi-divergence test-statistics are proposed for the test of interest. Focused on log-linear modeling, the theory associated with the asymptotic distribution of the phi-divergence test-statistics is developed. An illustrative example motivates the procedure, and a simulation study for small and moderate sample sizes shows that it is possible to find phi-divergence test-statistics with an exact size closer to the nominal size and higher power in comparison with the classical likelihood ratio test-statistic.
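
    As background, phi-divergence test-statistics generalize the classical Pearson chi-square and likelihood-ratio statistics; the Cressie-Read power-divergence family is a common parametric subfamily of them. Below is a minimal sketch (a generic goodness-of-fit illustration, not the paper's log-linear procedure for the likelihood ratio order) comparing hypothetical observed counts against fitted expected counts for several values of the family parameter lambda, using scipy:

```python
# Minimal sketch: power-divergence (a phi-divergence family) statistics for a
# goodness-of-fit comparison. This illustrates the family of statistics only;
# it is NOT the paper's log-linear inference procedure for the likelihood
# ratio order. The counts below are made up for illustration.
import numpy as np
from scipy.stats import power_divergence

observed = np.array([18, 30, 35, 17])          # hypothetical ordinal category counts
expected = np.array([25.0, 25.0, 25.0, 25.0])  # fitted counts under some model

# lambda_ = 1 gives Pearson's chi-square; lambda_ = 0 gives the likelihood-
# ratio (G-test) statistic; lambda_ = 2/3 is the Cressie-Read recommendation.
for lam in [1.0, 0.0, 2 / 3]:
    stat, pval = power_divergence(observed, f_exp=expected, lambda_=lam)
    print(f"lambda={lam:.3f}: statistic={stat:.3f}, p-value={pval:.4f}")
```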

    Prior distributions for objective Bayesian analysis

    We provide a review of prior distributions for objective Bayesian analysis. We start by examining some foundational issues and then organize our exposition into priors for: i) estimation or prediction; ii) model selection; iii) high-dimensional models. With regard to i), we present some basic notions and then move to more recent contributions on discrete parameter spaces, hierarchical models, nonparametric models, and penalizing complexity priors. Point ii) is the focus of this paper: it discusses principles for objective Bayesian model comparison and singles out some major concepts for building priors, which are subsequently illustrated in some detail for the classic problem of variable selection in normal linear models. We also present some recent contributions in the area of objective priors on model space. With regard to point iii), we only provide a short summary of some default priors for high-dimensional models, a rapidly growing area of research.
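
    As a concrete illustration of a classic default prior for estimation (a textbook example of my own choosing, not one taken from the review), the Jeffreys prior for a binomial success probability is Beta(1/2, 1/2), which combines with the likelihood into a closed-form Beta posterior:

```python
# Minimal sketch of one standard objective prior: the Jeffreys prior for a
# binomial proportion, Beta(1/2, 1/2). Textbook illustration only, not code
# from the review. The data below are hypothetical.
from scipy.stats import beta

k, n = 7, 20                   # hypothetical: 7 successes in 20 trials
a, b = 0.5 + k, 0.5 + (n - k)  # Beta(1/2, 1/2) prior updated by the likelihood

posterior = beta(a, b)
print(f"posterior mean: {posterior.mean():.3f}")
lo, hi = posterior.interval(0.95)  # central 95% credible interval
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```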

    Generalized Wald-type Tests based on Minimum Density Power Divergence Estimators

    In hypothesis testing, the robustness of the tests is an important concern. Generally, maximum likelihood-based tests are the most efficient under standard regularity conditions, but they are highly non-robust even under small deviations from the assumed conditions. In this paper we propose generalized Wald-type tests based on minimum density power divergence estimators for parametric hypotheses. This method avoids the use of nonparametric density estimation and bandwidth selection. The trade-off between efficiency and robustness is controlled by a tuning parameter β. The asymptotic distributions of the test statistics are chi-square with appropriate degrees of freedom. The performance of the proposed tests is explored through simulations and real data analysis.
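
    To make the role of the tuning parameter concrete, here is a minimal sketch (my own illustration under simplifying assumptions, not the authors' implementation) of the minimum density power divergence estimator of a normal location parameter with known scale; as β → 0 the estimator approaches the maximum likelihood estimator (the sample mean), while larger β downweights outliers:

```python
# Minimal sketch of a minimum density power divergence estimator (MDPDE) for
# the mean of a normal sample with known scale sigma = 1. Illustration only;
# not the authors' code. beta -> 0 approaches the MLE (sample mean), while
# larger beta downweights outlying observations.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 95),   # bulk of the data
                    rng.normal(8.0, 1.0, 5)])   # a few gross outliers

def dpd_objective(mu, x, beta, sigma=1.0):
    """Empirical density power divergence objective (Basu et al., 1998)."""
    # Integral of f_mu^{1+beta}; constant in mu for a pure location model,
    # but kept here for completeness.
    int_term = (2 * np.pi * sigma**2) ** (-beta / 2) / np.sqrt(1 + beta)
    data_term = (1 + 1 / beta) * np.mean(norm.pdf(x, mu, sigma) ** beta)
    return int_term - data_term

for beta in [0.1, 0.5, 1.0]:
    res = minimize_scalar(dpd_objective, bounds=(-5, 10), method="bounded",
                          args=(x, beta))
    print(f"beta={beta}: MDPDE of mu = {res.x:.3f}")
print(f"sample mean (MLE) = {x.mean():.3f}")  # pulled upward by the outliers
```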