
    Psychological correlates of fatigue: Examining depression, perfectionism, and automatic negative thoughts

    This study investigated whether depression, perfectionism, or automatic negative thoughts correlate with the symptomatology of fatigue in a non-clinical population. A structural model was developed to determine whether depression or latent constructs of perfectionism and automatic negative thoughts correlate with four components of fatigue (emotional distress, somatic symptomatology, general fatigue, and cognitive difficulties). All aspects of fatigue were significantly correlated with depression and automatic negative thoughts, whereas only emotional distress and cognitive difficulties were correlated with perfectionism.
    Funded by the Social Sciences and Humanities Research Council.

    Pairwise multiple comparisons: New yardstick, new results

    Behavioral science researchers often wish to compare the means of several treatment conditions on a specific dependent measure. The author used a Monte Carlo study to compare familywise error-controlling multiple comparison procedures (MCPs; e.g., Tukey, Bonferroni) with MCPs not designed to control the familywise error rate, using the probability of correctly identifying the true underlying population mean configuration (the true model rate) as the yardstick. Recently proposed MCPs that do not control the familywise error rate had consistently larger true model rates than familywise error-controlling MCPs. Furthermore, among the familywise error-controlling MCPs investigated, the popular Tukey and Bonferroni procedures had consistently lower true model rates than the alternatives.
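The familywise error problem underlying this comparison can be illustrated with a small Monte Carlo sketch (a hypothetical illustration, not the study's actual simulation design): with several equal population means, the chance that at least one pairwise t test rejects far exceeds the nominal .05 level unless a correction such as Bonferroni is applied.

```python
import numpy as np
from scipy import stats

def familywise_error_rate(n_groups=4, n_per_group=20, n_sims=2000,
                          alpha=0.05, bonferroni=False, seed=1):
    """Monte Carlo estimate of the familywise Type I error rate for all
    pairwise t tests when every population mean is equal (complete null)."""
    rng = np.random.default_rng(seed)
    m = n_groups * (n_groups - 1) // 2           # number of pairwise tests
    level = alpha / m if bonferroni else alpha   # Bonferroni splits alpha over m tests
    family_errors = 0
    for _ in range(n_sims):
        groups = rng.standard_normal((n_groups, n_per_group))
        p_min = min(stats.ttest_ind(groups[i], groups[j]).pvalue
                    for i in range(n_groups) for j in range(i + 1, n_groups))
        family_errors += p_min < level           # any rejection in the family is an error
    return family_errors / n_sims
```

With these defaults the uncorrected rate lands well above the nominal level (roughly .2 for four groups) while the Bonferroni-corrected rate stays near .05; the abstract's true model rate criterion asks what that protection costs in correct model identification.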

    Multiplicity control, school uniforms, and other perplexing debates.

    Researchers in psychology are frequently confronted with the issue of analyzing multiple relationships simultaneously, for example, multiple outcome variables or multiple predictors in a regression framework. Current recommendations typically steer researchers toward familywise or false discovery rate Type I error control in order to limit the probability of incorrectly rejecting the null hypothesis, with stepwise modified-Bonferroni procedures suggested for following this recommendation. However, longstanding arguments against multiplicity control, combined with a modern distaste for null hypothesis significance testing, warrant revisiting this debate. This paper explores both sides of the multiplicity control debate with the goal of educating concerned parties regarding best practices for conducting multiple related tests.
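A well-known stepwise modified-Bonferroni procedure of the kind referred to here is Holm's step-down method; the sketch below is a generic implementation of that idea (the specific procedures the paper weighs may differ in detail).

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Holm's step-down modified-Bonferroni procedure: test the ordered
    p-values against alpha/(m - rank), stopping at the first non-rejection.
    Returns a list of booleans (True = reject) in the original order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break          # once one test fails, all larger p-values fail too
    return reject
```

Holm's procedure controls the familywise error rate at the same level as plain Bonferroni but is uniformly more powerful, since only the smallest p-value faces the full alpha/m threshold.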

    The effects of heteroscedasticity on tests of equivalence

    Tests of equivalence, which are designed to assess the similarity of group means, are becoming more popular, yet little is known about their statistical properties. Monte Carlo methods are used to compare the test of equivalence proposed by Schuirmann with modified tests of equivalence that incorporate a heteroscedastic error term. The latter were more accurate than the Schuirmann test in detecting equivalence when sample sizes and variances were unequal.
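One way to incorporate a heteroscedastic error term into Schuirmann-style equivalence testing is to use separate group variances with Welch-Satterthwaite degrees of freedom. The following is an illustrative sketch of that idea, not necessarily the exact modified tests examined in the study.

```python
import math
import numpy as np
from scipy import stats

def welch_tost(x, y, delta, alpha=0.05):
    """Two one-sided tests (TOST) for mean equivalence with a heteroscedastic
    error term: separate group variances and Welch-Satterthwaite degrees of
    freedom, so unequal variances paired with unequal sample sizes do not
    distort the test level."""
    n1, n2 = len(x), len(y)
    v1, v2 = np.var(x, ddof=1), np.var(y, ddof=1)
    se = math.sqrt(v1 / n1 + v2 / n2)
    df = (v1 / n1 + v2 / n2) ** 2 / (
        (v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    diff = np.mean(x) - np.mean(y)
    p_lower = stats.t.sf((diff + delta) / se, df)   # H0: diff <= -delta
    p_upper = stats.t.cdf((diff - delta) / se, df)  # H0: diff >= +delta
    p = max(p_lower, p_upper)                       # both must reject to conclude equivalence
    return p, p < alpha
```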

    Multiplicity control in structural equation modeling

    Researchers conducting structural equation modeling analyses rarely, if ever, control for the inflated probability of Type I errors when evaluating the statistical significance of multiple parameters in a model. In this study, the Type I error control, power, and true model rates of familywise and false discovery rate controlling procedures were compared with the rates obtained when no multiplicity control was imposed. The results indicate that Type I error rates become severely inflated with no multiplicity control, but also that familywise error controlling procedures were extremely conservative and had very little power for detecting true relations. False discovery rate controlling procedures provided a compromise between no multiplicity control and strict familywise error control and, with large sample sizes, provided a high probability of making correct inferences regarding all the parameters in the model.
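False discovery rate control of the kind found here to be a good compromise is typically implemented with the Benjamini-Hochberg step-up procedure; a minimal sketch is below (applying it to the parameter tests of a fitted SEM is the paper's contribution and is not shown).

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: find the largest k such that
    the k-th smallest p-value satisfies p_(k) <= (k/m)*q, then reject the
    k smallest p-values. Returns booleans in the original order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k = rank                    # step up: keep the largest qualifying rank
    reject = [False] * m
    for i in order[:k]:
        reject[i] = True
    return reject
```

Unlike Bonferroni-type thresholds, the per-test threshold here grows with rank, which is what buys the extra power the abstract describes.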

    Group Level Clinical Significance: An Analysis of Current Practice

    Measures of clinical significance offer important information about psychological interventions that cannot be garnered from tests of the statistical significance of the change from pretest to posttest. For example, post-intervention comparisons to a nonclinical group often offer valuable information about the practical value of the change that occurred. This study explored the manner in which researchers conduct clinical significance analyses in an effort to summarize the effectiveness of an intervention at the group level. The focus was on the use of the original Jacobson and Truax (Journal of Consulting and Clinical Psychology, 59, 12–19, 1991) method and the normative comparisons method of Kendall et al. (Journal of Consulting and Clinical Psychology, 67, 285–299, 1999). The results highlight that although the Jacobson and Truax method is routinely adopted for summarizing group-level clinical significance, advanced strategies for summarizing the results are applied very infrequently. Further, the Kendall et al. method, which provides valuable and distinct information about how the treated group performs relative to a normal comparison group, is rarely adopted, and even when it is, it is often not conducted appropriately. Recommendations are provided for conducting group-level clinical significance analyses.
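At the individual level, the Jacobson and Truax (1991) method rests on the reliable change index: the pretest-posttest change scaled by the standard error of the difference implied by the measure's reliability. A minimal sketch (the numeric inputs in the test are illustrative, not data from the study):

```python
import math

def reliable_change_index(pre, post, sd_pre, reliability):
    """Jacobson & Truax (1991) reliable change index: (post - pre) divided
    by the standard error of the difference, where
    SE_diff = sqrt(2) * SD_pre * sqrt(1 - reliability).
    |RCI| > 1.96 suggests change unlikely to be measurement error alone."""
    se_measurement = sd_pre * math.sqrt(1 - reliability)
    se_diff = math.sqrt(2) * se_measurement
    return (post - pre) / se_diff
```

The full method pairs the RCI with a clinical cutoff score; group-level summaries then tabulate how many clients show reliable and clinically significant change, which is where the advanced strategies mentioned above come in.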

    Recommendations for applying tests of equivalence

    Researchers in psychology reliably select traditional null hypothesis significance tests (e.g., Student's t test), regardless of whether the research hypothesis is that the group means are equivalent or that they differ. Tests of equivalence, which have been popular in biopharmaceutical studies for years, have recently been introduced and recommended to researchers in psychology for demonstrating the equivalence of two group means. However, very few recommendations exist for applying tests of equivalence. A Monte Carlo study was used to compare the test of equivalence proposed by Schuirmann with the traditional Student t test for deciding whether two group means are equivalent. Schuirmann's test of equivalence was more effective than Student's t test at detecting population mean equivalence with large sample sizes; however, it performed poorly relative to Student's t test with small sample sizes and/or inflated variances.
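Schuirmann's procedure is the two one-sided tests (TOST) approach with a pooled, Student-type error term; a sketch, with the equivalence interval `delta` chosen by the researcher:

```python
import math
import numpy as np
from scipy import stats

def schuirmann_tost(x, y, delta, alpha=0.05):
    """Schuirmann's two one-sided tests for mean equivalence with a pooled
    (Student-type) error term: the means are declared equivalent when the
    difference is significantly above -delta AND significantly below +delta."""
    n1, n2 = len(x), len(y)
    sp2 = ((n1 - 1) * np.var(x, ddof=1) + (n2 - 1) * np.var(y, ddof=1)) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    df = n1 + n2 - 2
    diff = np.mean(x) - np.mean(y)
    p_lower = stats.t.sf((diff + delta) / se, df)   # H0: diff <= -delta
    p_upper = stats.t.cdf((diff - delta) / se, df)  # H0: diff >= +delta
    p = max(p_lower, p_upper)
    return p, p < alpha
```

Note the reversed burden of proof relative to Student's t test: a non-significant t test is not evidence of equivalence, whereas TOST requires equivalence to be affirmatively demonstrated.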

    The expanding role of quantitative methodologists in advancing psychology

    Research designs in psychology have become increasingly complex; thus, the methods for analysing the data have also become more complex. It is unrealistic for departments of psychology to expect research psychologists to stay informed about all the advances in statistical methods that apply to their field of research; therefore, departments must improve the profile of quantitative methods to ensure that adequate statistical resources are available to faculty. In this article, we discuss the challenges involved in improving the profile of quantitative methods given the drastic decreases in quantitative methods faculty, students, and graduate programs over the past couple of decades, and the importance of reversing this trend through improving awareness of the field of quantitative methods in psychology.

    The variance homogeneity assumption and the traditional ANOVA: Exploring a better gatekeeper.

    Valid use of the traditional independent samples ANOVA procedure requires that the population variances be equal. Previous research has investigated whether variance homogeneity tests, such as Levene's test, are satisfactory gatekeepers for identifying when to use, or not to use, the ANOVA procedure. This research focuses on a novel homogeneity of variance test that incorporates an equivalence testing approach. Instead of testing the null hypothesis that the variances are equal against the alternate hypothesis that they are not, the equivalence-based test evaluates the null hypothesis that the difference in the variances falls outside or on the border of a predetermined interval against the alternate hypothesis that the difference falls within that interval. Thus, with the equivalence-based procedure, the alternate hypothesis is aligned with the research hypothesis (variance equality). A simulation study demonstrated that the equivalence-based test of population variance homogeneity is a better gatekeeper for the ANOVA than traditional homogeneity of variance tests.
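To make the reversal of hypotheses concrete, here is a generic equivalence-style test of two variances built from two one-sided F tests on the variance ratio. This is a sketch of the equivalence-testing idea applied to variances under normality, not necessarily the paper's actual novel procedure, and the `ratio_bound` equivalence interval is an assumed illustration.

```python
import numpy as np
from scipy import stats

def variance_equivalence_test(x, y, ratio_bound=2.0, alpha=0.05):
    """Equivalence-style variance homogeneity test via two one-sided F tests:
    declare the variances practically equal when s1^2/s2^2 is significantly
    below ratio_bound AND significantly above 1/ratio_bound."""
    df1, df2 = len(x) - 1, len(y) - 1
    f = np.var(x, ddof=1) / np.var(y, ddof=1)
    p_upper = stats.f.cdf(f / ratio_bound, df1, df2)  # H0: true ratio >= ratio_bound
    p_lower = stats.f.sf(f * ratio_bound, df1, df2)   # H0: true ratio <= 1/ratio_bound
    p = max(p_upper, p_lower)
    return p, p < alpha
```

As a gatekeeper, rejecting this test's null is positive evidence that the variances are close enough for the ANOVA, whereas a non-significant Levene's test may merely reflect low power.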

    Using the errors-in-variables method in two-group pretest-posttest design.

    Culpepper and Aguinis (2011) highlighted the benefit of using the errors-in-variables (EIV) method to control for measurement error and obtain unbiased regression estimates. The current study investigated the EIV method and compared it to change scores and analysis of covariance (ANCOVA) in a two-group pretest-posttest design. Results indicated that the EIV method's estimates were unbiased under many conditions, but the EIV method consistently demonstrated lower power than the change score method. An additional risk of the EIV method is that the covariate reliability must be entered into the EIV model, and the results highlighted that estimates are biased if a researcher chooses a value that differs from the true covariate reliability. Obtaining unbiased results also depended on sample size. Our conclusion is that there is no additional benefit to using the EIV method over the change score or ANCOVA methods for comparing the amount of change in pretest-posttest designs.
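The attenuation problem the EIV method addresses can be sketched in the simple-regression case: the OLS slope on an error-contaminated covariate shrinks toward zero by the covariate's reliability, and the EIV correction divides that reliability back out, which is exactly why a wrong researcher-supplied reliability biases the result. The simulation values below are illustrative, not from the study.

```python
import numpy as np

def eiv_slope(x_obs, y, reliability):
    """Errors-in-variables correction for simple regression: divide the
    attenuated OLS slope by the (researcher-supplied) covariate reliability.
    Unbiased only when the supplied reliability equals the true one."""
    naive = np.cov(x_obs, y, ddof=1)[0, 1] / np.var(x_obs, ddof=1)
    return naive / reliability

# Illustrative simulation: true slope 2.0, covariate reliability 0.8.
rng = np.random.default_rng(0)
n = 5000
x_true = rng.standard_normal(n)
y = 2.0 * x_true + rng.standard_normal(n)
x_obs = x_true + 0.5 * rng.standard_normal(n)   # error variance 0.25 -> reliability 1/1.25 = 0.8
naive = np.cov(x_obs, y, ddof=1)[0, 1] / np.var(x_obs, ddof=1)
corrected = eiv_slope(x_obs, y, reliability=0.8)
```

Here the naive slope lands near 2.0 x 0.8 = 1.6 while the corrected slope recovers roughly 2.0; supplying a reliability other than 0.8 would over- or under-correct in the same proportion.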