51 research outputs found

    A Characterization of Most(More) Powerful Test Statistics with Simple Nonparametric Applications

    Data-driven most powerful tests are statistical hypothesis decision-making tools that deliver the greatest power against a fixed null hypothesis among all corresponding data-based tests of a given size. When the underlying data distributions are known, the likelihood ratio principle can be applied to conduct most powerful tests. Reversing this notion, we consider the following questions. (a) Assuming a test statistic, say T, is given, how can we transform T to improve the power of the test? (b) Can T be used to generate the most powerful test? (c) How does one compare test statistics with respect to an attribute of the desired most powerful decision-making procedure? To examine these questions, we propose a one-to-one mapping of the term 'Most Powerful' to the distributional properties of a given test statistic via matching characterization. This form of characterization has practical applicability and aligns well with the general principle of sufficiency. Findings indicate that, to improve a given test, we can employ relevant ancillary statistics whose distributions do not change under the tested hypotheses. As an example, the present method is illustrated by modifying the usual t-test under nonparametric settings. Numerical studies based on generated data and a real data set confirm that the proposed approach can be useful in practice.
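    As an illustrative sketch of the likelihood ratio principle invoked above (not the authors' characterization method), consider testing H0: N(0, 1) against H1: N(mu1, 1). The Neyman-Pearson likelihood ratio is monotone in the sample mean, so the most powerful test rejects for large sample means; its power is easy to check by simulation. All parameter values below are assumptions for illustration.

```python
import numpy as np

def lr_power(n=20, mu1=0.5, reps=4000, seed=0):
    # H0: X ~ N(0, 1) vs H1: X ~ N(mu1, 1).  The Neyman-Pearson
    # likelihood ratio is increasing in the sample mean, so the most
    # powerful test of size 0.05 rejects when the sample mean exceeds
    # z_{0.95} / sqrt(n), with z_{0.95} ~= 1.645.
    rng = np.random.default_rng(seed)
    crit = 1.645 / np.sqrt(n)
    x = rng.normal(mu1, 1.0, size=(reps, n))  # draws under H1
    return (x.mean(axis=1) > crit).mean()     # Monte Carlo power
```

    For n = 20 and mu1 = 0.5, the exact power is about 0.72; the simulated estimate should land close to that value.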

    Two-sample density-based empirical likelihood ratio tests based on paired data, with application to a treatment study of Attention-Deficit/Hyperactivity Disorder and Severe Mood Dysregulation

    Abstract: It is common practice to conduct medical trials in order to compare a new therapy with a standard of care based on paired data consisting of pre- and post-treatment measurements. In such cases, great interest often lies in identifying treatment effects within each therapy group as well as detecting a between-group difference. In this article, we propose exact nonparametric tests for composite hypotheses related to treatment effects to provide efficient tools that compare study groups utilizing paired data. When correctly specified, parametric likelihood ratios can be applied, in an optimal manner, to detect a difference in distributions of two samples based on paired data. The recent statistical literature introduces density-based empirical likelihood methods to derive efficient nonparametric tests that approximate most powerful Neyman-Pearson decision rules. We adapt and extend these methods to deal with various testing scenarios involved in two-sample comparisons based on paired data. We show that the proposed procedures outperform classical approaches. An extensive Monte Carlo study confirms that the proposed approach is powerful and can be easily applied to a variety of testing problems in practice. The proposed technique is applied to compare two therapy strategies for treating children's attention-deficit/hyperactivity disorder and severe mood dysregulation.
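    For orientation, a generic exact nonparametric test for paired data can be sketched with a sign-flip randomization test on the paired differences. This is a standard baseline, not the density-based empirical likelihood procedure the abstract describes; names and defaults are illustrative.

```python
import numpy as np

def sign_flip_test(pre, post, reps=5000, seed=1):
    # Generic nonparametric paired-data test: under H0 of no treatment
    # effect, each paired difference is symmetric about zero, so its
    # sign is exchangeable.  Compare the observed mean difference with
    # its sign-flip randomization distribution.
    rng = np.random.default_rng(seed)
    d = np.asarray(post, float) - np.asarray(pre, float)
    obs = abs(d.mean())
    flips = rng.choice([-1.0, 1.0], size=(reps, d.size))
    null = np.abs((flips * d).mean(axis=1))
    return (1 + np.sum(null >= obs)) / (reps + 1)  # randomization p-value
```

    With a clear treatment effect in the post-treatment measurements, the p-value should be near its lower bound 1 / (reps + 1).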

    Linear Regression With an Independent Variable Subject to a Detection Limit

    Linear regression with a left-censored independent variable X due to a limit of detection (LOD) was recently considered by two groups of researchers: Richardson and Ciampi, and Schisterman and colleagues.
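    The abstract does not spell out either group's estimator, so the following sketch only contrasts two simple baselines for a left-censored covariate: the common ad hoc substitution of LOD / sqrt(2) for censored values, and complete-case analysis (dropping censored rows). Both function and parameter names are assumptions for illustration.

```python
import numpy as np

def fit_with_lod(x_obs, y, lod, method="substitute"):
    # x_obs: observed covariate with values below the LOD recorded as
    # np.nan.  Two simple baselines (neither is from the cited papers):
    #   "substitute": replace censored values by LOD / sqrt(2)
    #   "complete":   drop censored observations entirely
    x = np.asarray(x_obs, float).copy()
    below = np.isnan(x)
    if method == "substitute":
        x[below] = lod / np.sqrt(2)
        keep = np.ones(x.size, dtype=bool)
    else:
        keep = ~below
        x = x[keep]
    slope, intercept = np.polyfit(x, np.asarray(y, float)[keep], 1)
    return slope, intercept
```

    Since censoring here depends only on X, complete-case OLS remains consistent for the slope, while naive substitution can bias it; simulating from a known model makes the contrast visible.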

    Approximations to generalized renewal measures

    Let {Z_j, j ≥ 1} be a sequence of nonnegative continuous random variables. Given an arbitrary function g : [0, ∞) → [0, ∞), a renewal function associated with this sequence is defined through weighted sums of the probabilities P{Z_j ≤ t}. Due to the possible complexity of calculating the probabilities P{Z_j ≤ t} exactly, approximations to such generalized renewal measures are studied.
    Keywords: Renewal theory; Renewal measure; Smith's theorem; Tauber's theorem
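    A generalized renewal measure of the form sum_{j≥1} g(j) P{S_j ≤ t} can always be estimated by brute-force Monte Carlo when exact probabilities are intractable. The abstract does not pin down the exact definition or the sequence {Z_j}, so the sketch below assumes partial sums of iid Exp(1) interarrival times purely for illustration.

```python
import numpy as np

def renewal_measure_mc(g, t, jmax=50, reps=20000, seed=0):
    # Monte Carlo estimate of sum_{j>=1} g(j) * P{S_j <= t}, where
    # S_j = X_1 + ... + X_j for iid Exp(1) interarrival times -- one
    # illustrative choice of the underlying sequence.  jmax truncates
    # the series; for Exp(1) and moderate t the tail is negligible.
    rng = np.random.default_rng(seed)
    x = rng.exponential(1.0, size=(reps, jmax))
    s = np.cumsum(x, axis=1)            # partial sums S_1, ..., S_jmax
    probs = (s <= t).mean(axis=0)       # estimates of P{S_j <= t}
    return float(sum(g(j + 1) * probs[j] for j in range(jmax)))
```

    With g ≡ 1 this reduces to the ordinary renewal function of a rate-1 Poisson process, for which the expected number of renewals by time t equals t, giving a convenient sanity check.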