
    A comparison of efficient permutation tests for unbalanced ANOVA in two by two designs--and their behavior under heteroscedasticity

    We compare different permutation tests, and some parametric counterparts, that are applicable to unbalanced two-by-two designs. The different approaches are first briefly summarized; we then investigate the behavior of the tests in a simulation study, with a special focus on their behavior under heteroscedastic variances.

    Comment: 20 pages, 9 figures, Working Paper of the Department of Management and Engineering of the University of Padova
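    As a concrete illustration of the kind of procedure compared in such studies, the sketch below implements a raw permutation test for the interaction effect in an unbalanced two-by-two design. This is a minimal sketch: the difference-of-differences statistic, the cell sizes, and the heteroscedastic standard deviations are illustrative choices, not taken from the paper, which compares several permutation schemes.

```python
import numpy as np

def interaction_stat(y, a, b):
    """Simple interaction statistic for a 2x2 design:
    absolute difference-of-differences of the four cell means."""
    means = np.array([[y[(a == i) & (b == j)].mean() for j in (0, 1)]
                      for i in (0, 1)])
    return abs((means[0, 0] - means[0, 1]) - (means[1, 0] - means[1, 1]))

def permutation_test_interaction(y, a, b, n_perm=5000, seed=0):
    """Raw permutation test: shuffle observations across all four
    cells while keeping the (unbalanced) cell sizes fixed."""
    rng = np.random.default_rng(seed)
    observed = interaction_stat(y, a, b)
    count = 0
    for _ in range(n_perm):
        count += interaction_stat(rng.permutation(y), a, b) >= observed
    return (count + 1) / (n_perm + 1)

# Illustrative unbalanced, heteroscedastic 2x2 data under the null.
rng = np.random.default_rng(1)
sizes = {(0, 0): 15, (0, 1): 5, (1, 0): 8, (1, 1): 12}      # unbalanced cells
sds = {(0, 0): 1.0, (0, 1): 3.0, (1, 0): 1.0, (1, 1): 3.0}  # unequal variances
a, b, y = [], [], []
for (i, j), n in sizes.items():
    a += [i] * n
    b += [j] * n
    y += list(rng.normal(0.0, sds[(i, j)], n))
a, b, y = map(np.asarray, (a, b, y))

print(permutation_test_interaction(y, a, b))
```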

    Re-sampling strategy to improve the estimation of number of null hypotheses in FDR control under strong correlation structures

    Background: When conducting multiple hypothesis tests, it is important to control the number of false positives, i.e. the False Discovery Rate (FDR). However, there is a tradeoff between controlling the FDR and maximizing power. Several methods, such as the q-value method, estimate the proportion of true null hypotheses among the tested hypotheses and use this estimate in the control of the FDR. These methods usually depend on the assumption that the test statistics are independent (or only weakly correlated). However, many types of data, for example microarray data, contain large-scale correlation structures. Our objective was to develop methods that control the FDR while maintaining greater power in highly correlated datasets by improving the estimation of the proportion of null hypotheses.

    Results: We show that when strong correlation exists among the data, as is common in microarray datasets, the estimate of the proportion of null hypotheses can be highly variable, resulting in a high level of variation in the FDR. We therefore developed a re-sampling strategy that reduces this variation by breaking the correlations between gene expression values, and then applies a conservative rule, selecting the upper quartile of the re-sampling estimates, to obtain strong control of the FDR.

    Conclusion: In simulation studies and perturbations of actual microarray datasets, our method, compared to competing methods such as the q-value method, produced slightly biased estimates of the proportion of null hypotheses but with lower mean squared error. When selecting genes while controlling for the same FDR level, our method on average achieves a significantly lower false discovery rate in exchange for a minor reduction in power.
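    Below is a minimal sketch of one way such a re-sampling scheme could look, assuming two-group t-tests and the standard lambda-threshold estimator of the null proportion used by the q-value method. Resampling each gene's arrays independently to break between-gene correlation is an illustrative reading of the strategy described in the abstract, not necessarily the authors' exact procedure.

```python
import numpy as np
from scipy import stats

def pi0_estimate(pvals, lam=0.5):
    """Standard lambda-threshold estimator of the proportion of
    true null hypotheses, as used in the q-value method."""
    return min(1.0, np.mean(pvals > lam) / (1.0 - lam))

def resampled_pi0(x, y, n_boot=100, lam=0.5, seed=0):
    """Re-estimate pi0 on bootstrap datasets in which each gene's
    arrays are resampled independently (an illustrative way to break
    between-gene correlation), then take the conservative upper
    quartile of the estimates."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_boot):
        xi = rng.integers(0, x.shape[1], size=x.shape)  # per-gene indices
        yi = rng.integers(0, y.shape[1], size=y.shape)
        xb = np.take_along_axis(x, xi, axis=1)
        yb = np.take_along_axis(y, yi, axis=1)
        p = stats.ttest_ind(xb, yb, axis=1).pvalue
        estimates.append(pi0_estimate(p, lam))
    return np.percentile(estimates, 75)  # conservative upper quartile

# Illustrative use: 2000 "genes", two groups of 10 arrays, all null.
rng = np.random.default_rng(1)
x = rng.normal(size=(2000, 10))
y = rng.normal(size=(2000, 10))
print(resampled_pi0(x, y))  # should be close to 1 here: all nulls
```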

    Using Empirical Recurrence Rates Ratio For Time Series Data Similarity

    Several methods exist in the classification literature for quantifying the similarity between two time series. These range from traditional Euclidean-type metrics to the more advanced Dynamic Time Warping metric. Most of them adequately address structural similarity but fall short on goals outside it: for example, a tool that is excellent at identifying seasonal similarity between two time series may prove inadequate in the presence of outliers. In this paper, we propose a unifying measure for binary classification that performs well while embracing several aspects of dissimilarity. This statistic is gaining prominence in various fields, such as geology and finance, and is crucial in time series database formation and clustering studies.
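    The abstract does not define the statistic, but assuming the empirical recurrence rate of an event-time series is its cumulative event count divided by elapsed time, and that the ratio compares one series' count to the pooled count at each time point, a minimal sketch might look as follows. All function names and the demo data here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def empirical_recurrence_rate(event_times, grid):
    """Cumulative event count up to each grid time, divided by the
    elapsed time: one common reading of the empirical recurrence rate."""
    event_times = np.sort(np.asarray(event_times))
    counts = np.searchsorted(event_times, grid, side="right")
    return counts / grid

def err_ratio(times_a, times_b, grid):
    """Empirical recurrence rates ratio (assumed form): series A's
    cumulative count relative to the pooled count at each grid time."""
    na = np.searchsorted(np.sort(np.asarray(times_a)), grid, side="right")
    nb = np.searchsorted(np.sort(np.asarray(times_b)), grid, side="right")
    total = na + nb
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(total > 0, na / total, np.nan)

# Illustrative use: two event-time series observed on (0, 100].
grid = np.linspace(1.0, 100.0, 100)
rng = np.random.default_rng(0)
a = np.cumsum(rng.exponential(2.0, 60))  # events at roughly rate 0.5
b = np.cumsum(rng.exponential(4.0, 30))  # events at roughly rate 0.25
print(empirical_recurrence_rate(a, grid)[-5:])  # per-series rate, near 0.5
print(err_ratio(a, b, grid)[-5:])               # ratio, hovers near 2/3
```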