
    Testing for change-points in long-range dependent time series by means of a self-normalized Wilcoxon test

    We propose a testing procedure based on the Wilcoxon two-sample test statistic in order to test for change-points in the mean of long-range dependent data. We show that the corresponding self-normalized test statistic converges in distribution to a non-degenerate limit under the hypothesis that no change occurred and that it diverges to infinity under the alternative of a change-point with constant height. Furthermore, we derive the asymptotic distribution of the self-normalized Wilcoxon test statistic under local alternatives, that is, under the assumption that the height of the level shift decreases as the sample size increases. Regarding the finite sample performance, simulation results confirm that the self-normalized Wilcoxon test yields a consistent discrimination between the hypothesis and the alternative and that its empirical size is already close to the significance level for moderate sample sizes.
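The core of the procedure, stripped of its self-normalization, can be sketched as the maximization of a centered two-sample Wilcoxon statistic over candidate break points. The following is a minimal illustration under an assumed mean-shift model with independent noise; the paper's long-range dependent setting and the self-normalizing denominator are omitted:

```python
import numpy as np

def wilcoxon_changepoint_stat(x):
    """Return (k_hat, W) where W = max over break candidates k of
    |sum_{i<=k} sum_{j>k} (1{x_i <= x_j} - 1/2)|.
    Illustrative only: no self-normalization, no long-range dependence."""
    n = len(x)
    best_k, best_w = None, -np.inf
    for k in range(1, n):
        left, right = x[:k], x[k:]
        # count pairs (i, j) with x_i <= x_j, centered at its null mean k(n-k)/2
        w = np.sum(left[:, None] <= right[None, :]) - 0.5 * k * (n - k)
        if abs(w) > best_w:
            best_k, best_w = k, abs(w)
    return best_k, best_w

# Simulated series with a level shift of height 1.5 at observation 100
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.5, 1.0, 100)])
k_hat, w = wilcoxon_changepoint_stat(x)
```

Because the statistic depends on ranks only, it is robust to heavy-tailed noise, which is one motivation for a Wilcoxon-type test over a CUSUM of raw observations.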

    Testing for Common Breaks in a Multiple Equations System

    The issue addressed in this paper is that of testing for common breaks across or within equations of a multivariate system. Our framework is very general and allows integrated regressors and trends as well as stationary regressors. The null hypothesis is that breaks in different parameters occur at common locations and are separated by some positive fraction of the sample size unless they occur across different equations. Under the alternative hypothesis, the break dates across parameters are not the same and also need not be separated by a positive fraction of the sample size whether within or across equations. The test considered is the quasi-likelihood ratio test assuming normal errors, though as usual the limit distribution of the test remains valid with non-normal errors. Of independent interest, we provide results about the rate of convergence of the estimates when searching over all possible partitions subject only to the requirement that each regime contains at least as many observations as some positive fraction of the sample size, allowing break dates not separated by a positive fraction of the sample size across equations. Simulations show that the test has good finite sample properties. We also provide an application to issues related to level shifts and persistence for various measures of inflation to illustrate its usefulness. Comment: 44 pages, 2 tables and 1 figure.
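The idea of comparing a restricted fit (one common break date) against an unrestricted fit (each equation breaks at its own date) can be sketched for a two-equation mean-shift model. This is an illustrative quasi-likelihood-ratio-style comparison under assumed i.i.d. normal errors; the function names and the trimming fraction are hypothetical, and the paper's actual statistic, regressor generality, and critical values are not reproduced:

```python
import numpy as np

def ssr_with_break(y, k):
    """Sum of squared residuals for a single mean shift at date k."""
    return np.sum((y[:k] - y[:k].mean()) ** 2) + np.sum((y[k:] - y[k:].mean()) ** 2)

def common_vs_separate_breaks(y1, y2, trim=0.15):
    """Restricted SSR (common break date, searched over a trimmed grid)
    minus unrestricted SSR (separate break dates). Always >= 0; large
    values are evidence against a common break."""
    n = len(y1)
    grid = range(int(trim * n), int((1 - trim) * n))
    ssr_common = min(ssr_with_break(y1, k) + ssr_with_break(y2, k) for k in grid)
    ssr_separate = (min(ssr_with_break(y1, k) for k in grid)
                    + min(ssr_with_break(y2, k) for k in grid))
    return ssr_common - ssr_separate

rng = np.random.default_rng(2)
n = 200
e1, e2 = rng.normal(0, 1, n), rng.normal(0, 1, n)
# Both equations break at 100 (common) vs. breaks at 60 and 140 (separate)
common1 = e1 + np.where(np.arange(n) >= 100, 3.0, 0.0)
common2 = e2 + np.where(np.arange(n) >= 100, 3.0, 0.0)
sep1 = e1 + np.where(np.arange(n) >= 60, 3.0, 0.0)
sep2 = e2 + np.where(np.arange(n) >= 140, 3.0, 0.0)
stat_common = common_vs_separate_breaks(common1, common2)
stat_sep = common_vs_separate_breaks(sep1, sep2)
```

The trimming mirrors the paper's requirement that each regime contain at least a positive fraction of the sample; without it the SSR minimization degenerates at the sample edges.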

    Power in High-Dimensional Testing Problems

    Fan et al. (2015) recently introduced a remarkable method for increasing the asymptotic power of tests in high-dimensional testing problems. If applicable to a given test, their power enhancement principle leads to an improved test that has the same asymptotic size, uniformly non-inferior asymptotic power, and is consistent against a strictly broader range of alternatives than the initially given test. We study under which conditions this method can be applied and show the following: In asymptotic regimes where the dimensionality of the parameter space is fixed as sample size increases, there often exist tests that cannot be further improved with the power enhancement principle. However, when the dimensionality of the parameter space increases sufficiently slowly with sample size and a marginal local asymptotic normality (LAN) condition is satisfied, every test with asymptotic size smaller than one can be improved with the power enhancement principle. While the marginal LAN condition alone does not allow one to extend the latter statement to all rates at which the dimensionality increases with sample size, we give sufficient conditions under which this is the case. Comment: 27 pages.
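The power enhancement principle can be illustrated in a simple Gaussian sequence model: a screening component J0, which equals zero with probability tending to one under the null, is added to a baseline quadratic statistic J1, so the asymptotic size is unchanged while sparse alternatives gain power. The threshold below is an illustrative choice, not the exact rate from Fan et al. (2015):

```python
import numpy as np

def power_enhanced_stat(z):
    """Sketch of J = J0 + J1 for testing mu = 0 from z ~ N(mu, I_p).
    J1: standardized sum of squares (a standard quadratic statistic).
    J0: sums squared coordinates exceeding a slowly growing threshold,
    so it is 0 with probability tending to 1 under the null but
    diverges under sparse alternatives. Threshold is illustrative."""
    p = len(z)
    j1 = (np.sum(z ** 2) - p) / np.sqrt(2 * p)
    thresh = np.sqrt(2 * np.log(p) * np.log(np.log(p)))
    j0 = np.sum(z[np.abs(z) > thresh] ** 2)
    return j0 + j1

rng = np.random.default_rng(3)
p = 500
z_null = rng.normal(size=p)   # H0: all coordinate means are zero
z_alt = z_null.copy()
z_alt[0] += 8.0               # sparse alternative: one large coordinate
stat_null = power_enhanced_stat(z_null)
stat_alt = power_enhanced_stat(z_alt)
```

Against a sparse alternative the quadratic term J1 alone barely moves (one coordinate out of 500), while the screening term J0 picks up the large coordinate directly; this is the asymmetry the enhancement exploits.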

    An investigation of skewness, sample size, and test standardisation

    In psychological assessment, a raw score transformation is the first step in the clinical decision-making process. During this process, clinicians transform a raw score, typically using linear standardised scores and a normative sample with characteristics similar to their client's. While the literature stresses the importance of using adequate normative data, little research has evaluated the effect skewness has on the required sample size. Currently the consensus is that a sample size of 50 is deemed adequate for normative data. However, an alarming number of studies that present normative data have much smaller sample sizes, particularly when the data are stratified by age, gender, and/or education. Additionally, the use of linear transformations onto a normal distribution introduces further problems the more positively or negatively skewed the normative raw score distribution is. Skewed distributions are commonly encountered in neuropsychology, and accordingly their deviation from a normal distribution should be considered during the clinical decision-making process. The primary goal of the current thesis was therefore to evaluate the psychometric issues related to the standardisation process: in particular, to investigate the current understanding of sample sizes in neuropsychological samples, assess how this is influenced by different skewed distributions, and evaluate the potential errors involved in the decision-making process. Three studies were conducted.
    The first study explored the minimum sample size needed to produce stable measures of central tendency and variance for a range of distributions. Results indicated that the optimal sample size depended on the level of skewness of the distribution and was not the often cited N = 50. For normally distributed data, a sample size of 70 is required in each cell in order to produce stable means and standard deviations. Negatively or positively skewed distributions required sample sizes ranging from 30 to 80 in each cell. This study highlighted the inadequacy of currently available normative data and called for further normative research to be conducted.
    The second study evaluated the errors introduced when using three different linear transformations on different skewed distributions with adequate sample sizes. Seven tests with differing skewness coefficients were evaluated using the z score transformation, a t-test method developed by Crawford and Howell (1998), and a median z score transformation developed for this research. Results indicated that the traditional z score transformation produced the fewest errors of the three methods. However, for highly positively skewed distributions, this transformation introduces considerable error into the clinical decision-making process. A regression equation was derived as a tool to help clinicians correct adequate data for the effect of skewness.
    The final study evaluated whether using different linear transformations created substantial errors when using normative data that ranged in skewness and that had sample sizes smaller than those recommended in Study One. This study is particularly important given the common practice in neuropsychology of deriving at least some measures from clinical research literature utilising inadequate sample sizes. Results indicated that the error in judgement when using the preferred z score transformation is nearly doubled in positively skewed distributions. It was recommended that normative data with sample sizes less than 30 should not be used in clinical practice, and guidelines were proposed for incorporating issues of sample size and skewness into clinicians' testing practices. It is hoped that clinicians will adopt the findings and subsequent recommendations of these studies in order to improve the current standards of clinical decision-making in neuropsychology.
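The kind of error the second and third studies quantify can be illustrated directly: for a positively skewed normative sample, reading a linear z score off a normal distribution overstates the percentile of a high raw score. The distribution and its parameters below are hypothetical stand-ins for a skewed neuropsychological measure:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

# Hypothetical normative sample: positively skewed raw scores
# (lognormal), a shape common for error counts and completion times.
norms = rng.lognormal(mean=0.0, sigma=0.8, size=10_000)

# A score that genuinely sits at the 95th percentile of the norms
raw = np.quantile(norms, 0.95)

# Linear z-score standardisation against the normative mean and SD
z = (raw - norms.mean()) / norms.std(ddof=1)

# Percentile implied by (wrongly) reading z off a normal distribution
normal_pct = 0.5 * (1 + erf(z / sqrt(2)))

# True empirical percentile of the score in the skewed normative sample
true_pct = float(np.mean(norms <= raw))
```

Here `normal_pct` exceeds `true_pct`, so a clinician relying on the normal table would judge the client as more extreme than the normative data actually support, which is precisely the misclassification risk the thesis's correction procedures target.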
