
    An Adaptive Inference Strategy: The Case of Auditory Data

    By way of an example, some of the basic features of the derivation and use of adaptive inferential methods are demonstrated. The focus of this paper is dyadic (coupled) data in auditory and perceptual research. We present: (a) why one should not use the conventional methods, (b) a derivation of an adaptive method, and (c) how the new adaptive method works with the example data. In the concluding remarks we draw attention to the work of Professor George Barnard, who provided the adaptive inference strategy in the context of the Behrens-Fisher problem: testing the equality of means when one does not want to assume that the variances are equal.
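    The abstract does not spell out the adaptive method itself, but the Behrens-Fisher setting it names is easy to illustrate. The sketch below, which is not the paper's method, contrasts Student's t-test (which assumes equal variances) with Welch's t-test (which does not) on two groups with equal means but unequal variances; the data and sample sizes are invented for illustration.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    # Equal means, unequal variances: the Behrens-Fisher setting
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=3.0, size=30)

    # Student's t pools the variances; Welch's t estimates them separately
    t_student, p_student = stats.ttest_ind(a, b, equal_var=True)
    t_welch, p_welch = stats.ttest_ind(a, b, equal_var=False)
    ```

    Welch's test adjusts the degrees of freedom (Welch-Satterthwaite) so the test keeps its nominal Type I error rate when the variance-equality assumption fails.
    
    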

    The Impact of Predictor Variable(s) with Skewed Cell Probabilities on Wald Tests in Binary Logistic Regression

    A series of simulation studies is reported that investigated the impact of skewed predictors on the Type I error rate and power of the Wald test in a logistic regression model. Five simulations were conducted for three different regression models. A detailed description of the impact of skewed cell predictor probabilities and sample size provides guidelines for practitioners on where to expect the greatest problems.
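    The kind of simulation the abstract describes can be sketched in a few lines. The toy study below, with invented settings (not the paper's design), estimates the empirical Type I error of the Wald test for a single binary predictor whose cell probability is skewed (only about 5% of cases have x = 1); for this one-predictor case the slope MLE is the log odds ratio and its standard error has a closed form, with a 0.5 continuity correction to guard against empty cells.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, p_x1, n_sims = 200, 0.05, 500  # skewed predictor: ~5% of cases are x = 1
    rejections = 0
    for _ in range(n_sims):
        x = rng.binomial(1, p_x1, size=n)
        y = rng.binomial(1, 0.5, size=n)          # null model: y independent of x
        # 2x2 cell counts with a 0.5 continuity correction
        n11 = np.sum((x == 1) & (y == 1)) + 0.5
        n10 = np.sum((x == 1) & (y == 0)) + 0.5
        n01 = np.sum((x == 0) & (y == 1)) + 0.5
        n00 = np.sum((x == 0) & (y == 0)) + 0.5
        beta = np.log(n11 * n00 / (n10 * n01))       # slope MLE = log odds ratio
        se = np.sqrt(1/n11 + 1/n10 + 1/n01 + 1/n00)  # asymptotic standard error
        if abs(beta / se) > 1.96:                    # Wald z-test at alpha = .05
            rejections += 1
    type1 = rejections / n_sims
    ```

    Comparing `type1` with the nominal .05 across cell probabilities and sample sizes is the basic logic of the reported studies.
    
    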

    Using Pratt's Importance Measures in Confirmatory Factor Analyses

    When running a confirmatory factor analysis (CFA), users specify and interpret the pattern (loading) matrix. It has been recommended that structure coefficients, which indicate the factors' correlations with the observed indicators, also be reported when the factors are correlated (Graham, Guthrie, & Thompson, 2003; Thompson, 1997). The aims of this article are twofold. (1) The first aim is to note that the structure coefficient should be interpreted with caution if the factors are specified to correlate: because the structure coefficient is a zero-order correlation, it may be partially or entirely a reflection of factor correlations. This is elucidated by the matrix algebra of the structure coefficients based on the example in Graham et al. (2003). (2) The second aim is to introduce Pratt's (1987) importance measures for use in a CFA. The method combines the information in the structure coefficients with the pattern coefficients into unique measures that are not confounded by the factor correlations. These importance measures indicate the proportions of the variation in an observed indicator that are attributable to the factors, an interpretation analogous to the effect size measure eta-squared. The importance measures can further be transformed to eta correlations, a measure of the unique directional correlation of a factor with an observed indicator. This is illustrated with a real data example.
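    The matrix algebra behind both points can be made concrete. In the sketch below the pattern matrix and factor correlation are hypothetical values (not the article's data): the structure matrix is S = P·Phi, the communalities are the row sums of P∘S, and the Pratt measures divide each indicator's P∘S entries by its communality so they partition explained variance between the factors.

    ```python
    import numpy as np

    # Hypothetical two-factor CFA (illustrative values, not the article's data)
    P = np.array([[0.7, 0.0],
                  [0.6, 0.2],
                  [0.0, 0.8]])      # pattern (loading) matrix: 3 indicators x 2 factors
    Phi = np.array([[1.0, 0.5],
                    [0.5, 1.0]])    # factor correlation matrix

    S = P @ Phi                     # structure coefficients: factor-indicator correlations
    h2 = np.sum(P * S, axis=1)      # communality of each indicator
    D = (P * S) / h2[:, None]       # Pratt importance measures: each row sums to 1
    ```

    Note that indicator 3 has a zero loading on factor 1 yet a nonzero structure coefficient (S[2, 0] = 0.4), purely because the factors correlate, which is exactly the cautionary point; its Pratt measure for factor 1 is 0, unconfounded by that correlation.
    
    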

    Calibration of Measurements

    Traditional notions of measurement error typically rely on a strong mean-zero assumption on the expectation of the errors conditional on an unobservable “true score” (classical measurement error) or on the data themselves (Berkson measurement error). Weakly calibrated measurements for an unobservable true quantity are defined based on a weaker mean-zero assumption, giving rise to a measurement model of differential error. Applications show it retains many attractive features of estimation and inference when performing a naive data analysis (i.e. when performing an analysis on the error-prone measurements themselves), and other interesting properties not present in the classical or Berkson cases. Applied researchers concerned with measurement error should consider weakly calibrated errors and rely on the stronger formulations only when both a stronger model's assumptions are justifiable and would result in appreciable inferential gains.
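    The abstract does not define weak calibration precisely enough to demonstrate it, but the classical baseline it weakens is easy to show. The sketch below, with invented parameters, illustrates the best-known consequence of classical (conditional mean-zero) error: a naive regression on the error-prone measurement attenuates the slope by the reliability ratio, here var(x)/(var(x) + var(error)) = 1/2.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 100_000
    x = rng.normal(0, 1, n)               # unobservable true score, variance 1
    w = x + rng.normal(0, 1, n)           # classical error: E[error | x] = 0
    y = 2.0 * x + rng.normal(0, 1, n)     # outcome depends on the true score

    slope_true = np.polyfit(x, y, 1)[0]   # close to the true slope 2.0
    slope_naive = np.polyfit(w, y, 1)[0]  # attenuated toward 2.0 * 0.5 = 1.0
    ```

    A weaker mean-zero assumption changes which of these naive-analysis properties survive, which is the trade-off the paper examines.
    
    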

    Aligned Rank Tests for Interactions in Split-Plot Designs: Distributional Assumptions and Stochastic Heterogeneity

    Three aligned rank methods for transforming data from multiple group repeated measures (split-plot) designs are reviewed. Univariate and multivariate statistics for testing the interaction in split-plot designs are elaborated. Computational examples are presented to provide a context for performing these ranking procedures and statistical tests. SAS/IML and SPSS syntax code to perform the procedures is included in the Appendix.
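    The paper's computational examples use SAS/IML and SPSS; as a language-neutral sketch, one common alignment for the interaction (invented toy data, and only one of the three methods reviewed) subtracts each subject's mean, which absorbs the group main effect, and each occasion's mean, then ranks the aligned scores jointly before applying the usual interaction test.

    ```python
    import numpy as np
    from scipy.stats import rankdata

    rng = np.random.default_rng(3)
    # Toy split-plot data: 2 groups x 10 subjects x 4 repeated measures
    data = rng.normal(size=(2, 10, 4))

    # Align for the interaction: remove subject means (absorbing the group
    # main effect) and occasion means, adding back the grand mean
    grand = data.mean()
    subj_mean = data.mean(axis=2, keepdims=True)
    occ_mean = data.mean(axis=(0, 1), keepdims=True)
    aligned = data - subj_mean - occ_mean + grand

    # Rank all aligned scores jointly; the ranks then feed the usual
    # univariate or multivariate interaction test statistics
    ranks = rankdata(aligned.ravel()).reshape(aligned.shape)
    ```

    What remains in `aligned` is the interaction plus error, so ranking it does not contaminate the interaction test with main effects.
    
    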

    Operating Characteristics Of The DIF MIMIC Approach Using Jöreskog’s Covariance Matrix With ML And WLS Estimation For Short Scales

    The Type I error rate of a structural equation modeling (SEM) approach for investigating differential item functioning (DIF) in short scales was studied. Muthén’s SEM model for DIF was examined using a covariance matrix (Jöreskog, 2002). The model is conditioned on the latent variable while testing the effect of the grouping variable over and above the underlying latent variable; thus, it is a multiple-indicators, multiple-causes (MIMIC) DIF model. Type I error rates were determined using data reflective of short scales with ordinal item response formats typically found in the social and behavioral sciences. Results indicate that Type I error rates for the DIF MIMIC model, as implemented in LISREL, are inflated for both estimation methods under the design conditions examined.

    Multi-Group Confirmatory Factor Analysis for Testing Measurement Invariance in Mixed Item Format Data

    This simulation study investigated the empirical Type I error rates of using the maximum likelihood estimation method and Pearson covariance matrix for multi-group confirmatory factor analysis (MGCFA) of full and strong measurement invariance hypotheses with mixed item format data that are ordinal in nature. The results indicate that mixed item formats and sample size combinations do not result in inflated empirical Type I error rates for rejecting true measurement invariance hypotheses. Therefore, although the common methods are in a sense sub-optimal, they do not lead researchers to claim that measures function differently across groups (i.e., a lack of measurement invariance).

    Quantifying Bimodality Part 2: A Likelihood Ratio Test for the Comparison of a Unimodal Normal Distribution and a Bimodal Mixture of Two Normal Distributions

    Scientists in a variety of fields are often faced with the question of whether a sample is best described as unimodal or bimodal. In an earlier paper (Frankland & Zumbo, 2002), a simple and convenient method for assessing bimodality was described. That method is extended here by developing and demonstrating a likelihood ratio test (LRT) for bimodality, comparing a unimodal normal distribution with a bimodal mixture of two normal distributions. As in Frankland and Zumbo (2002), the LRT approach is demonstrated using algorithms in SPSS.
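    The paper demonstrates the test in SPSS; the same comparison can be sketched in Python on invented, clearly bimodal data. The single normal has 2 free parameters, the two-component mixture has 5 (fitted here by a short EM run), and the LRT statistic is twice the log-likelihood gap.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Clearly bimodal sample: mixture of N(-2, 1) and N(2, 1)
    x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])

    # Log-likelihood under a single normal (2 free parameters)
    ll_uni = stats.norm.logpdf(x, x.mean(), x.std()).sum()

    # Two-component normal mixture (5 free parameters) fitted by EM
    pi, mu1, mu2, s1, s2 = 0.5, x.min(), x.max(), x.std(), x.std()
    for _ in range(200):
        d1 = pi * stats.norm.pdf(x, mu1, s1)
        d2 = (1 - pi) * stats.norm.pdf(x, mu2, s2)
        r = d1 / (d1 + d2)                                  # E-step: responsibilities
        pi = r.mean()                                       # M-step: update parameters
        mu1 = (r * x).sum() / r.sum()
        mu2 = ((1 - r) * x).sum() / (1 - r).sum()
        s1 = np.sqrt((r * (x - mu1) ** 2).sum() / r.sum())
        s2 = np.sqrt(((1 - r) * (x - mu2) ** 2).sum() / (1 - r).sum())
    ll_mix = np.log(pi * stats.norm.pdf(x, mu1, s1)
                    + (1 - pi) * stats.norm.pdf(x, mu2, s2)).sum()

    lrt = 2 * (ll_mix - ll_uni)  # large values favour the bimodal mixture
    ```

    One caveat worth knowing: because the unimodal model sits on the boundary of the mixture family, the LRT statistic does not follow a standard chi-square distribution, so the reference distribution must be handled with care.
    
    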

    Quantifying Bimodality Part I: An Easily Implemented Method Using SPSS

    Scientists in a variety of fields are faced with the question of whether a particular sample of data is best described as unimodal or bimodal. We provide a simple and convenient method for assessing bimodality. The use of the non-linear algorithms in SPSS for modeling complex mixture distributions is demonstrated on a unimodal normal distribution (with 2 free parameters) and on a bimodal mixture of two normal distributions (with 5 free parameters).

    Resolving the Issue of How Reliability is Related to Statistical Power: Adhering to Mathematical Definitions

    Reliability in classical test theory is a population-dependent concept, defined as a ratio of true-score variance to observed-score variance, where observed-score variance is the sum of true and error components. On the other hand, the power of a statistical significance test is a function of the total variance, irrespective of its decomposition into true and error components. For that reason, the reliability of a dependent variable is a function of the ratio of the true-score and error variances, whereas statistical power is a function of their sum. Controversies about how reliability is related to statistical power often can be explained by authors’ use of the term “reliability” in a general way to mean “consistency,” “precision,” or “dependability,” which does not always correspond to its mathematical definition as a variance ratio. The present note shows how adherence to the mathematical definition can help resolve the issue and presents some derivations and illustrative examples that have further implications for significance testing and practical research.
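    The mathematical point can be checked numerically. In the sketch below (the function name and all parameter values are my own, not the note's), two measures have the same total variance of 1.0 but very different reliabilities: measure A has true-score variance .9 and error variance .1 (reliability .90), measure B has .5 and .5 (reliability .50). Because power depends only on the total, a Monte Carlo power estimate for a two-sample t-test comes out the same for both.

    ```python
    import numpy as np
    from scipy import stats

    def power_two_group(delta, total_var, n, alpha=0.05, sims=2000, seed=0):
        """Monte Carlo power of a two-sample t-test for a mean difference `delta`
        when each observed score has variance `total_var` (true + error)."""
        rng = np.random.default_rng(seed)
        sd = np.sqrt(total_var)
        hits = 0
        for _ in range(sims):
            a = rng.normal(0.0, sd, n)
            b = rng.normal(delta, sd, n)
            if stats.ttest_ind(a, b).pvalue < alpha:
                hits += 1
        return hits / sims

    # Same total variance (0.9 + 0.1 = 0.5 + 0.5 = 1.0), reliabilities .90 vs .50
    p_a = power_two_group(delta=0.5, total_var=0.9 + 0.1, n=30)
    p_b = power_two_group(delta=0.5, total_var=0.5 + 0.5, n=30)
    ```

    Increasing the error variance alone would lower both reliability and power, but holding the total fixed while shifting its decomposition changes reliability without touching power, which is the distinction the note draws.
    
    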