4 research outputs found

    A density based empirical likelihood approach for testing bivariate normality

    Sample-entropy-based tests, methods of sieves, and Grenander-type estimation procedures are known to be very efficient tools for assessing normality of underlying data distributions in one-dimensional nonparametric settings. Recently, it has been shown that the density-based empirical likelihood (EL) concept extends and standardizes these methods, presenting a powerful approach for approximating optimal parametric likelihood ratio test statistics in a distribution-free manner. In this paper, we discuss the difficulties of constructing density-based EL ratio techniques for testing bivariate normality and propose a solution to this problem. Toward this end, a novel bivariate sample entropy expression is derived and shown to be consistent with known results on bivariate histogram density estimation. Monte Carlo results show that the new density-based EL ratio tests for bivariate normality behave very well for finite sample sizes. To illustrate the applicability of the proposed approach, we present a real-data example.
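    For orientation, here is a minimal sketch of the univariate building block the abstract refers to: Vasicek's m-spacing entropy estimator and the resulting entropy-based normality statistic. This is an illustration of the classical one-dimensional tool, not the paper's bivariate EL procedure; the function names and the choice m = 5 are ours.

    import numpy as np

    def vasicek_entropy(x, m):
        # Vasicek (1976) m-spacing estimator of differential entropy,
        # with order-statistic indices clamped at the sample boundaries
        n = len(x)
        xs = np.sort(x)
        upper = xs[np.minimum(np.arange(n) + m, n - 1)]
        lower = xs[np.maximum(np.arange(n) - m, 0)]
        return np.mean(np.log(n / (2.0 * m) * (upper - lower)))

    def entropy_normality_stat(x, m=5):
        # K = exp(H_mn) / sigma_hat: the normal law uniquely maximizes entropy
        # at log(sigma * sqrt(2*pi*e)), so small K is evidence against normality
        return np.exp(vasicek_entropy(x, m)) / np.std(x)

    # critical value by Monte Carlo under H0, the usual route for entropy tests
    rng = np.random.default_rng(0)
    n = 50
    null_stats = [entropy_normality_stat(rng.standard_normal(n)) for _ in range(2000)]
    crit = np.quantile(null_stats, 0.05)  # reject normality when K < crit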

    A Cautionary Note on Beta Families of Distributions and the Aliases Within

    In this note we examine the four-parameter beta family of distributions in the context of the beta-normal and beta-logistic distributions. In the process we highlight the concept of numerical and limiting alias distributions, which in turn relate to instabilities in the numerical maximum likelihood fitting routines for these families. We conjecture that the numerical issues in fitting these multiparameter distributions may be more widespread across families of distributions than originally reported.
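    As a hypothetical illustration of the aliasing issue (our sketch, not the authors' code), the snippet below writes down the four-parameter beta-normal log-likelihood and maximizes it from two starting points; landing on visibly different parameter vectors with near-equal likelihood is exactly the kind of instability the note warns about.

    import numpy as np
    from scipy import optimize, stats
    from scipy.special import betaln

    def beta_normal_nll(theta, x):
        # negative log-likelihood of the beta-normal density
        # f(x) = phi(z)/sigma * Phi(z)^(a-1) * (1 - Phi(z))^(b-1) / B(a, b),
        # where z = (x - mu) / sigma
        a, b, mu, sigma = theta
        if min(a, b, sigma) <= 0:
            return np.inf
        z = (x - mu) / sigma
        ll = (stats.norm.logpdf(z) - np.log(sigma)
              + (a - 1) * stats.norm.logcdf(z)
              + (b - 1) * stats.norm.logsf(z)
              - betaln(a, b))
        return -np.sum(ll)

    rng = np.random.default_rng(1)
    x = rng.normal(size=200)  # data are plain N(0, 1)
    for start in ([1.0, 1.0, 0.0, 1.0], [4.0, 4.0, 0.0, 2.5]):
        fit = optimize.minimize(beta_normal_nll, start, args=(x,), method="Nelder-Mead")
        print(np.round(fit.x, 3), round(fit.fun, 3))  # distinct fits, near-equal likelihood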

    A Characterization of Most (More) Powerful Test Statistics with Simple Nonparametric Applications

    Data-driven most powerful tests are statistical hypothesis decision-making tools that deliver the greatest power against a fixed alternative hypothesis among all corresponding data-based tests of a given size. When the underlying data distributions are known, the likelihood ratio principle can be applied to conduct most powerful tests. Reversing this notion, we consider the following questions. (a) Assuming a test statistic, say T, is given, how can we transform T to improve the power of the test? (b) Can T be used to generate the most powerful test? (c) How does one compare test statistics with respect to an attribute of the desired most powerful decision-making procedure? To examine these questions, we propose a one-to-one mapping of the term "most powerful" to the distribution properties of a given test statistic via a matching characterization. This form of characterization has practical applicability and aligns well with the general principle of sufficiency. The findings indicate that, to improve a given test, we can employ relevant ancillary statistics, whose distributions do not change under the tested hypotheses. As an example, the present method is illustrated by modifying the usual t-test in nonparametric settings. Numerical studies based on generated data and a real data set confirm that the proposed approach can be useful in practice.
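    To make the likelihood-ratio benchmark in the abstract concrete, here is a minimal Neyman-Pearson sketch (our illustration, not the paper's characterization): for H0: N(0, 1) versus H1: N(1, 1) the likelihood ratio is monotone in the sample mean, so the most powerful size-alpha test rejects for a large mean.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n, alpha, reps = 30, 0.05, 5000

    # under H0 the sample mean is N(0, 1/n), so this cutoff gives exact size alpha
    crit = stats.norm.ppf(1 - alpha) / np.sqrt(n)

    # Monte Carlo power of the most powerful test under H1: N(1, 1)
    xbar_h1 = rng.normal(1.0, 1.0, size=(reps, n)).mean(axis=1)
    print(np.mean(xbar_h1 > crit))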

    Data-Driven Confidence Interval Estimation Incorporating Prior Information with an Adjustment for Skewed Data

    Bayesian credible interval (CI) estimation is a statistical procedure that has been well addressed in both the theoretical and applied literature. Parametric assumptions regarding baseline data distributions are critical for the implementation of this method. We provide a nonparametric technique for incorporating prior information into the equal-tailed (ET) and highest posterior density (HPD) CI estimators in the Bayesian manner. We propose a data-driven likelihood function that replaces the parametric likelihood function to create a distribution-free posterior. Higher-order asymptotic propositions are derived to show the efficiency and consistency of the proposed method. We demonstrate that the proposed approach can correct confidence regions for skewness of the data distribution. An extensive Monte Carlo (MC) study confirms that the proposed method significantly outperforms classical CI estimation in a frequentist context. A real-data example from a study of myocardial infarction illustrates the applicability of the proposed technique. Supplementary material, including the R code used to implement the developed method, is available online.
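    The sketch below shows how ET and HPD intervals are read off posterior draws, using Bayesian-bootstrap draws of the mean as a hypothetical stand-in for the paper's data-driven posterior; the helper name et_and_hpd and all tuning choices are ours, not the authors'.

    import numpy as np

    def et_and_hpd(draws, level=0.95):
        # equal-tailed interval: central quantiles of the posterior draws
        lo = (1 - level) / 2
        et = np.quantile(draws, [lo, 1 - lo])
        # HPD interval: shortest window covering a fraction `level` of sorted draws
        d = np.sort(draws)
        k = int(np.ceil(level * len(d)))
        i = np.argmin(d[k - 1:] - d[:len(d) - k + 1])
        return et, np.array([d[i], d[i + k - 1]])

    rng = np.random.default_rng(3)
    x = rng.lognormal(size=40)  # skewed data, where ET and HPD intervals differ
    w = rng.dirichlet(np.ones(len(x)), size=4000)  # Rubin (1981) Bayesian bootstrap
    et, hpd = et_and_hpd(w @ x)
    print(et, hpd)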