8 research outputs found

    The effect of number of bootstrap samples, trimming proportion and distribution to the results in bootstrap-t and percentile bootstrap methods

    Bootstrap methods enable inference when the distribution of an estimator or test statistic is unknown; they are based on the principle of generating new samples by resampling the original random sample with replacement. How the number of bootstrap samples, the trimming proportion (when the method involves a trimmed mean), and the population distribution affect a bootstrap method's performance are still debated questions [1-7]. In this study, the Tukey-McLaughlin test [8] is combined with the bootstrap-t and with the percentile bootstrap for one-sample hypothesis testing, and the Yuen test [9] is combined with the bootstrap-t and with the percentile bootstrap for two-sample hypothesis testing. The performances of these methods are compared in terms of actual Type I error rates across different numbers of bootstrap samples, trimming proportions, and population distributions. The comparison is carried out with a simulation study using theoretical distributions and with two real data sets. Recommendations are given for the one- and two-sample hypothesis-testing method for a population trimmed mean, the trimming proportion, and the number of bootstrap samples.
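    To make the percentile bootstrap described above concrete, here is a minimal pure-Python sketch for the one-sample case (function names, the default 20% trimming, and the step counts are illustrative choices of ours, not taken from the paper):

```python
import random

def trimmed_mean(x, prop):
    """Mean after dropping the lowest and highest `prop` fraction of values."""
    xs = sorted(x)
    g = int(prop * len(xs))          # number trimmed from each tail
    kept = xs[g:len(xs) - g]
    return sum(kept) / len(kept)

def percentile_bootstrap_ci(x, prop=0.2, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a population trimmed mean:
    resample with replacement, recompute the statistic, read off percentiles."""
    rng = random.Random(seed)
    n = len(x)
    stats = sorted(
        trimmed_mean([x[rng.randrange(n)] for _ in range(n)], prop)
        for _ in range(n_boot)
    )
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

    In this sketch, the null hypothesis about the population trimmed mean would be rejected at level alpha whenever the hypothesized value falls outside the returned interval.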

    Gözde Navruz 1,*, A. Fırat Özdemir 1

    The most common way of comparing two independent groups is to compare them in terms of some measure of location, such as the mean. For non-normal and heteroscedastic cases, the trimmed mean, the median, or some other robust measure of location can be used instead. However, differences in the tails of the groups may also be of interest, which makes comparing the lower and upper quantiles an important issue. In this study, the Harrell-Davis estimator and the default quantile estimator of R are compared in terms of actual Type I error rates. Gumbel's estimator controlled the actual Type I error rate better when quantiles close to zero or one were compared with small sample sizes, and the Harrell-Davis estimator did so when quantiles close to the median were compared with large sample sizes.
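    The Harrell-Davis estimator mentioned above averages all order statistics with Beta((n+1)q, (n+1)(1-q)) weights. The following pure-Python sketch approximates the beta CDF numerically; the function names and the integration step count are our own illustrative choices, not from the study:

```python
import math

def _beta_cdf(t, a, b, steps=2000):
    """Regularized incomplete beta function via trapezoidal integration
    (sketch-quality numerical approximation)."""
    if t <= 0.0:
        return 0.0
    if t >= 1.0:
        return 1.0
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    h = t / steps
    total = 0.0
    for k in range(steps + 1):
        x = k * h
        if x <= 0.0 or x >= 1.0:
            f = 0.0
        else:
            f = math.exp(log_norm + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x))
        total += f if 0 < k < steps else f / 2   # trapezoid endpoint halving
    return min(total * h, 1.0)

def harrell_davis(x, q):
    """Harrell-Davis estimate of the q-th quantile: a Beta-weighted
    average of all order statistics."""
    xs = sorted(x)
    n = len(xs)
    a, b = (n + 1) * q, (n + 1) * (1 - q)
    est, prev = 0.0, 0.0
    for i in range(1, n + 1):
        cur = _beta_cdf(i / n, a, b)      # weight i = F(i/n) - F((i-1)/n)
        est += (cur - prev) * xs[i - 1]
        prev = cur
    return est
```

    Because every order statistic receives positive weight, the estimator is smooth in q, which is part of why its small-sample behavior near the median differs from simple order-statistic-based estimators.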

    Analysis and Comparison of Six Robust Regression Techniques

    Ordinary least squares regression can be misleading when there are outliers, heteroscedasticity, or non-normality. Problems with ordinary least squares are briefly explained, and six robust regression techniques that are not affected by these common problems, Theil-Sen, least median of squares, least trimmed squares, least absolute value, least trimmed absolute value, and M-regression, are investigated and compared in terms of actual significance level and relative efficiency over ordinary least squares. Results are discussed and some recommendations are given.
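    Of the six techniques, Theil-Sen is the simplest to sketch: the slope is the median of all pairwise slopes, which a single outlier cannot drag away. This is a minimal illustration, not the authors' implementation:

```python
import statistics

def theil_sen(x, y):
    """Theil-Sen estimator: slope = median of all pairwise slopes,
    intercept = median of the per-point offsets at that slope."""
    n = len(x)
    slopes = [
        (y[j] - y[i]) / (x[j] - x[i])
        for i in range(n) for j in range(i + 1, n)
        if x[j] != x[i]                  # skip vertical pairs
    ]
    slope = statistics.median(slopes)
    intercept = statistics.median(y[k] - slope * x[k] for k in range(n))
    return slope, intercept
```

    With n points there are on the order of n^2/2 pairwise slopes, so this naive version is fine for small samples but quadratic in cost.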

    Tests of Location Equality Under Non-Identical Distributions

    The ANOVA-F test is the best-known procedure for comparing at least three population means. However, this conventional test can give misleading results when its underlying assumptions are violated. In this study, Welch's test with trimmed means, Welch's test with trimmed means and a bootstrap-t, a newly proposed test, and the ANOVA-F test were compared in terms of actual Type I error rates, not only under non-normality and heteroscedasticity but also under non-identical distribution shapes. The newly proposed method outperformed ANOVA-F and the other alternatives in various situations.
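    For the two-group case, the Welch-type statistic on trimmed means is Yuen's statistic, which pairs each trimmed mean with a winsorized variance (trimmed observations are replaced by the nearest retained value). The sketch below assumes symmetric 20% trimming by default; it illustrates the classical statistic, not the newly proposed test:

```python
import math

def _trim_stats(x, prop):
    """Trimmed mean, winsorized sample variance, n, and effective size h."""
    xs = sorted(x)
    n = len(xs)
    g = int(prop * n)                 # trimmed from each tail
    h = n - 2 * g
    tmean = sum(xs[g:n - g]) / h
    wins = [xs[g]] * g + xs[g:n - g] + [xs[n - g - 1]] * g   # winsorize
    wmean = sum(wins) / n
    wvar = sum((v - wmean) ** 2 for v in wins) / (n - 1)
    return tmean, wvar, n, h

def yuen_test(x, y, prop=0.2):
    """Yuen's Welch-type statistic and approximate degrees of freedom
    for equality of two population trimmed means."""
    tx, vx, nx, hx = _trim_stats(x, prop)
    ty, vy, ny, hy = _trim_stats(y, prop)
    dx = (nx - 1) * vx / (hx * (hx - 1))
    dy = (ny - 1) * vy / (hy * (hy - 1))
    t = (tx - ty) / math.sqrt(dx + dy)
    df = (dx + dy) ** 2 / (dx ** 2 / (hx - 1) + dy ** 2 / (hy - 1))
    return t, df
```

    In a bootstrap-t version, this statistic would be recomputed on centered bootstrap resamples to obtain critical values instead of using the Student-t reference distribution.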

    Searching the differences through the tails of distributions using an approach based on Mahalanobis distance and percentile bootstrap

    One of the main objectives of applied statistics is to determine whether two independent groups differ and, if so, to understand how. The Student t-test is the conventional method for doing this; however, it depends on a sensitive measure of location and on a sampling distribution with restrictive assumptions. There are robust alternatives in the literature, but they also concentrate on a single reference point over the entire distribution. It is often of interest to analyze differences that occur in the tails of the distributions, namely the quantiles. In this study, two independent groups are compared using a method based on a Mahalanobis distance and a percentile bootstrap approach. Two recently proposed and two well-known quantile estimators are used with the proposed method. Results for both theoretical distributions and real data sets are examined. Used with the proposed method, the Navruz and Özdemir (NO) estimator preserved the nominal Type I error rate in all but five of 61 experimental cases, and it could offer valuable insight into detecting differences in the tails of the distributions.
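    A core ingredient of such an approach is the Mahalanobis distance of a hypothesized point (for instance, a zero vector of quantile differences) from a cloud of bootstrap estimates. The two-dimensional helper below is only an illustrative sketch of that ingredient, with an explicit 2x2 covariance inverse; it is not the paper's NO-estimator procedure:

```python
def mahalanobis_sq(point, cloud):
    """Squared Mahalanobis distance of `point` from a 2-D point cloud,
    using the sample mean and an explicit 2x2 covariance inverse."""
    n = len(cloud)
    mx = sum(p[0] for p in cloud) / n
    my = sum(p[1] for p in cloud) / n
    sxx = sum((p[0] - mx) ** 2 for p in cloud) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in cloud) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in cloud) / (n - 1)
    det = sxx * syy - sxy ** 2          # 2x2 covariance determinant
    dx, dy = point[0] - mx, point[1] - my
    return (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det
```

    In a percentile-bootstrap scheme, the distance of the null point would be compared with the distances of the bootstrap points themselves to decide whether the quantile differences are jointly distinguishable from zero.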