    A nonparametric empirical Bayes framework for large-scale multiple testing

    We propose a flexible and identifiable version of the two-groups model, motivated by hierarchical Bayes considerations, that features an empirical null and a semiparametric mixture model for the non-null cases. We use a computationally efficient predictive recursion marginal likelihood procedure to estimate the model parameters, even the nonparametric mixing distribution. This leads to a nonparametric empirical Bayes testing procedure, which we call PRtest, based on thresholding the estimated local false discovery rates. Simulations and real-data examples demonstrate that, compared to existing approaches, PRtest's careful handling of the non-null density can give a much better fit in the tails of the mixture distribution, which, in turn, can lead to more realistic conclusions.
    Comment: 18 pages, 4 figures, 3 tables
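    As a point of reference for the thresholding step, here is a minimal sketch of the two-groups local false discovery rate computation. The mixing weight pi0 and the null and non-null densities are hand-picked placeholders; PRtest instead estimates an empirical null and a nonparametric non-null mixture via predictive recursion.

```python
import numpy as np
from scipy import stats

def local_fdr(z, pi0=0.9, null=stats.norm(0, 1), alt=stats.norm(0, 3)):
    """Two-groups local fdr: pi0*f0(z) / (pi0*f0(z) + (1 - pi0)*f1(z)).

    pi0, null and alt are illustrative placeholders; PRtest estimates an
    empirical null and a nonparametric non-null mixture instead.
    """
    f0, f1 = null.pdf(z), alt.pdf(z)
    return pi0 * f0 / (pi0 * f0 + (1 - pi0) * f1)

rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0, 1, 900),    # null cases
                    rng.normal(0, 3, 100)])   # non-null cases
lfdr = local_fdr(z)
rejected = lfdr <= 0.20                       # threshold the estimated local fdr
print(rejected.sum(), "hypotheses flagged")
```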

    The power of surrogate data testing with respect to non-stationarity

    Surrogate data testing is a method frequently applied to evaluate the results of nonlinear time series analysis. Since the null hypothesis tested against is a linear, Gaussian, stationary stochastic process, a positive outcome may result not only from an underlying nonlinear or even chaotic system but also from, e.g., a non-stationary linear one. We investigate the power of the test against non-stationarity.
    Comment: 4 pages, 4 figures, to appear in PR
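    For context, a minimal sketch of surrogate data testing with phase-randomized (FFT) surrogates, which embody the linear, Gaussian, stationary null. The discriminating statistic, the number of surrogates, and the random-walk example series are illustrative choices, not the setup analysed in the paper.

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Surrogate sharing x's power spectrum (the linear Gaussian stationary null)."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
    phases[0] = 0.0                                  # keep the mean untouched
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def nonlinearity_stat(x):
    """Illustrative discriminating statistic: skewness of the increments."""
    d = np.diff(x)
    return np.mean((d - d.mean()) ** 3) / d.std() ** 3

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=1024))   # example series: a (non-stationary) random walk

t_obs = nonlinearity_stat(x)
t_surr = [nonlinearity_stat(phase_randomized_surrogate(x, rng)) for _ in range(199)]
p_value = (1 + np.sum(np.abs(t_surr) >= abs(t_obs))) / (1 + len(t_surr))
print("surrogate-test p-value:", p_value)
```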

    Approaches for the Joint Evaluation of Hypothesis Tests: Classical Testing, Bayes Testing, and Joint Confirmation

    The occurrence of decision problems with changing roles of null and alternative hypotheses has increased interest in extending the classical hypothesis testing setup. In particular, confirmation analysis has been the focus of some recent contributions in econometrics. We emphasize that confirmation analysis is grounded in classical testing and should be contrasted with the Bayesian approach. Differences across the three approaches – traditional classical testing, Bayes testing, and joint confirmation – are highlighted for a popular testing problem: a decision is sought on the existence of a unit root in a time-series process on the basis of two tests. One of the tests has the existence of a unit root as its null hypothesis and its non-existence as its alternative, while the roles of null and alternative are reversed for the other test.
    Keywords: Confirmation analysis, Decision contours, Unit roots
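    A hedged sketch of the two-test setup described above, using ADF (null: unit root) and KPSS (null: stationarity) from statsmodels as a pair of tests with reversed null and alternative. The naive 5% cross-classification shown is only an illustration; joint confirmation analysis replaces it with explicit decision contours.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(size=500))   # example series: a random walk (unit root)

adf_p = adfuller(y, autolag="AIC")[1]               # H0: unit root
kpss_p = kpss(y, regression="c", nlags="auto")[1]   # H0: level stationarity

# Naive 5% cross-classification of the two tests; joint confirmation
# analysis would replace this with explicit decision contours.
if adf_p >= 0.05 and kpss_p < 0.05:
    verdict = "both tests point towards a unit root"
elif adf_p < 0.05 and kpss_p >= 0.05:
    verdict = "both tests point towards stationarity"
else:
    verdict = "the two tests disagree or are jointly inconclusive"
print(verdict)
```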

    Non-parametric specification testing of non-nested econometric models

    We consider the non-nested testing problem for non-parametric regressions. We show that, when the regression functions are unknown under both the null and the alternative hypotheses, an extension of the J-test procedure of Davidson and MacKinnon (1981) leads to a test statistic with well-defined asymptotic properties. The derivation of the test statistic involves double kernel estimation. Monte Carlo simulations suggest that the test has good size and power characteristics.
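    A rough sketch of the artificial-regression idea behind the J-test of Davidson and MacKinnon (1981), with Nadaraya-Watson kernel fits standing in for the two non-nested nonparametric regressions. The bandwidths, the single-kernel fits, and the final t-statistic are simplifications for illustration; the paper's statistic rests on a double kernel construction with its own asymptotics.

```python
import numpy as np
import statsmodels.api as sm

def nw_fit(x, y, grid, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(3)
n = 300
x1, x2 = rng.uniform(-2, 2, (2, n))            # the two competing regressors
y = np.sin(x1) + 0.3 * rng.normal(size=n)      # data generated under the H0 model

m1 = nw_fit(x1, y, x1, h=0.3)                  # H0 specification: E[y | x1]
m2 = nw_fit(x2, y, x2, h=0.3)                  # rival specification: E[y | x2]

# J-test style artificial regression: residuals from the H0 fit regressed on the
# rival model's fitted values; a significant slope is evidence against H0.
res = sm.OLS(y - m1, sm.add_constant(m2)).fit()
print("t-statistic on rival fitted values:", res.tvalues[1])
```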

    Estimating the null distribution for conditional inference and genome-scale screening

    In a novel approach to the multiple testing problem, Efron (2004; 2007) formulated estimators of the distribution of test statistics or nominal p-values under a null distribution suitable for modeling the data of thousands of unaffected genes, non-associated single-nucleotide polymorphisms, or other biological features. Estimators of the null distribution can improve not only the empirical Bayes procedure for which they were originally intended, but also many other multiple comparison procedures. Such estimators serve as the groundwork for the proposed multiple comparison procedure based on a recent frequentist method of minimizing posterior expected loss, exemplified with a non-additive loss function designed for genomic screening rather than for validation. The merit of estimating the null distribution is examined from the vantage point of conditional inference in the remainder of the paper. In a simulation study of genome-scale multiple testing, conditioning the observed confidence level on the estimated null distribution as an approximate ancillary statistic markedly improved conditional inference. To enable researchers to determine whether to rely on a particular estimated null distribution for inference or decision making, an information-theoretic score is provided that quantifies the benefit of conditioning. As the sum of the degree of ancillarity and the degree of inferential relevance, the score reflects the balance conditioning would strike between the two conflicting terms. Applications to gene expression microarray data illustrate the methods introduced.
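    For orientation, a minimal sketch of one simple way to estimate an empirical null from z-values, using a robust location/scale fit to the central bulk (median and scaled interquartile range). Efron's estimators and the conditional-inference and scoring machinery described above are considerably more refined, and the simulated z-values below are purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Example z-values: a large null bulk with sd slightly above 1, plus a few shifted features.
z = np.concatenate([rng.normal(0.1, 1.2, 9500), rng.normal(3.0, 1.0, 500)])

# Robust empirical-null fit: median for the centre, scaled IQR for the spread,
# relying on the central bulk of z-values being dominated by null features.
mu0 = np.median(z)
sigma0 = (np.percentile(z, 75) - np.percentile(z, 25)) / (2 * stats.norm.ppf(0.75))

p_theoretical = 2 * stats.norm.sf(np.abs(z))                 # N(0, 1) null
p_empirical = 2 * stats.norm.sf(np.abs(z - mu0) / sigma0)    # estimated empirical null
print(f"empirical null: N({mu0:.2f}, {sigma0:.2f}^2)")
print("rejections at 1e-3:", (p_theoretical < 1e-3).sum(), "vs", (p_empirical < 1e-3).sum())
```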

    Testing the nullity of GARCH coefficients: correction of the standard tests and relative efficiency comparisons

    This article is concerned with testing the nullity of coefficients in GARCH models. The problem is non-standard because the quasi-maximum likelihood estimator is subject to positivity constraints. The paper establishes the asymptotic null and local alternative distributions of the Wald, score, and quasi-likelihood ratio tests. Efficiency comparisons under fixed alternatives are also considered. Two cases of special interest are: (i) tests of the null hypothesis that one coefficient is equal to zero and (ii) tests of the null hypothesis of no conditional heteroscedasticity. Finally, the proposed approach is applied to a set of financial data and leads us to reconsider the pre-eminence of GARCH(1,1) among GARCH models.
    Keywords: Asymptotic efficiency of tests; Boundary; Chi-bar distribution; GARCH model; Quasi-maximum likelihood estimation; Local alternatives
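    A hedged sketch of case (i) using the arch package: a naive Wald check of whether the GARCH coefficient beta in a GARCH(1,1) model is zero, with the generic one-sided boundary adjustment (a 50:50 chi-bar mixture of chi-squared with 0 and 1 degrees of freedom for a single constrained parameter) applied in place of the standard chi-squared(1) p-value. The simulated return series is a placeholder, and the paper's corrected tests and efficiency comparisons go well beyond this illustration.

```python
import numpy as np
from scipy import stats
from arch import arch_model

rng = np.random.default_rng(5)
returns = rng.standard_t(df=8, size=2000)   # placeholder return series

# GARCH(1,1) fit by quasi-maximum likelihood.
res = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
beta, se = res.params["beta[1]"], res.std_err["beta[1]"]

# Naive Wald statistic for H0: beta = 0. Because beta is constrained to be
# non-negative, the limiting null law is the chi-bar mixture
# 0.5*chi2(0) + 0.5*chi2(1) rather than chi2(1), so the standard p-value
# is halved here; the paper develops rigorous corrections of this kind for
# the Wald, score, and quasi-likelihood ratio tests.
wald = (beta / se) ** 2
p_naive = stats.chi2.sf(wald, df=1)
p_boundary = 0.5 * p_naive if beta > 0 else 1.0
print(f"beta = {beta:.3f}, Wald = {wald:.2f}, boundary-adjusted p-value = {p_boundary:.3f}")
```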