
    Can we disregard the whole model? Omnibus non-inferiority testing for $R^2$ in multivariable linear regression and $\hat{\eta}^2$ in ANOVA

    Determining a lack of association between an outcome variable and a set of explanatory variables is frequently necessary in order to disregard a proposed model. Despite this, the literature rarely offers information about, or technical recommendations concerning, the appropriate statistical methodology for this task. This paper introduces non-inferiority tests for ANOVA and linear regression analyses that correspond to the standard, widely used $F$-test for $\hat{\eta}^2$ and $R^2$, respectively. A simulation study is conducted to examine the type I error rates and statistical power of the tests, and a comparison is made with an alternative Bayesian testing approach. The results indicate that the proposed non-inferiority test is a potentially useful tool for 'testing the null.' (Comment: 30 pages, 6 figures)
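    The following is a minimal sketch of the general idea, not the authors' implementation: it compares the ordinary $F$ statistic for $R^2$ against a noncentral-$F$ reference distribution evaluated at a user-chosen non-inferiority margin. The function name, the margin value, and the noncentrality formula $\lambda_0 = n\,\Delta/(1-\Delta)$ are assumptions made for illustration.

```python
# Hedged sketch of a one-sided non-inferiority test for R^2 in multiple regression.
# Assumes the conventional F statistic F = (R^2/k) / ((1 - R^2)/(n - k - 1)) and a
# noncentrality parameter lambda_0 = n * delta / (1 - delta) at the margin delta.
from scipy.stats import ncf

def r2_noninferiority_test(r2, n, k, delta, alpha=0.05):
    """Test H0: R^2 >= delta against H1: R^2 < delta (illustrative only).

    r2    : observed R^2 from the fitted regression
    n     : sample size
    k     : number of predictors
    delta : smallest R^2 still considered meaningful (the non-inferiority margin)
    """
    df1, df2 = k, n - k - 1
    f_obs = (r2 / df1) / ((1 - r2) / df2)      # ordinary omnibus F statistic
    ncp = n * delta / (1 - delta)              # noncentrality at the margin
    p = ncf.cdf(f_obs, df1, df2, ncp)          # small F values support H1: R^2 < delta
    return f_obs, p, p < alpha

# Illustrative call: R^2 = 0.02 with n = 200, k = 3, and a margin of delta = 0.10
print(r2_noninferiority_test(0.02, 200, 3, 0.10))
```

    A small observed $F$ relative to what the margin would imply yields a small $p$-value and supports the claim that the whole model explains less variance than the smallest amount considered meaningful.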

    The epistemic and pragmatic function of dichotomous claims based on statistical hypothesis tests

    Researchers commonly make dichotomous claims based on continuous test statistics. Many have branded the practice as a misuse of statistics and criticize scientists for the widespread application of hypothesis tests to tentatively reject a hypothesis (or not) depending on whether a p-value is below or above an alpha level. Although dichotomous claims are rarely explicitly defended, we argue they play an important epistemological and pragmatic role in science. The epistemological function of dichotomous claims consists in transforming data into quasi-basic statements, which are tentatively accepted singular facts that can corroborate or falsify theoretical claims. This transformation requires a prespecified methodological decision procedure such as Neyman-Pearson hypothesis tests. From the perspective of methodological falsificationism these decision procedures are necessary, as probabilistic statements (e.g., continuous test statistics) cannot function as falsifiers of substantive hypotheses. The pragmatic function of dichotomous claims is to facilitate scrutiny and criticism among peers by generating contestable claims, a process referred to by Popper as “conjectures and refutations.” We speculate about how the surprisingly widespread use of a 5% alpha level might have facilitated this pragmatic function. Abandoning dichotomous claims, for example because researchers commonly misuse p-values, would sacrifice their crucial epistemic and pragmatic functions.

    Why indirect contributions matter for science and scientists (i)

    The contributions of science and research to society are typically made intelligible by measuring direct individual contributions, such as the number of journal articles published, journal impact factor, or grant funding acquired. Leo Tiokhin, Karthik Panchanathan, Paul Smaldino and Daniel Lakens argue that this focus on direct individual contributions obscures the significant indirect contributions that scientists make to research as a collective undertaking. In the first of two posts they outline why indirect contributions should be given more weight in research assessment.

    Wetenswaardige wetenschapsjournalistiek (Noteworthy science journalism)


    Improving inferences about null effects with Bayes factors and equivalence tests

    Researchers often conclude an effect is absent when a null-hypothesis significance test yields a non-significant p-value. However, it is neither logically nor statistically correct to conclude an effect is absent when a hypothesis test is not significant. We present two methods to evaluate the presence or absence of effects: equivalence testing (based on frequentist statistics) and Bayes factors (based on Bayesian statistics). In four examples from the gerontology literature we illustrate different ways to specify alternative models that can be used to reject the presence of a meaningful or predicted effect in hypothesis tests. We provide detailed explanations of how to calculate, report, and interpret Bayes factors and equivalence tests. We also discuss how to design informative studies that can provide support for a null model or for the absence of a meaningful effect. The conceptual differences between Bayes factors and equivalence tests are discussed, and we note when and why they might lead to similar or different inferences in practice. It is important that researchers are able to falsify predictions or to quantify the support for predicted null effects. Bayes factors and equivalence tests provide useful statistical tools to improve inferences about null effects.
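    As a rough illustration of the frequentist half of this pairing, below is a minimal two one-sided tests (TOST) equivalence sketch for an independent-samples mean difference. The function name, the pooled-variance t approach, and the example bounds are assumptions made for this sketch rather than the article's own code; Bayes factors are typically computed with dedicated software rather than by hand.

```python
# Minimal TOST equivalence sketch for two independent groups (illustrative only).
# H0: the true mean difference lies outside [low, high]; equivalence is claimed
# when BOTH one-sided tests against the bounds are significant.
import numpy as np
from scipy import stats

def tost_two_sample(x, y, low, high, alpha=0.05):
    nx, ny = len(x), len(y)
    diff = np.mean(x) - np.mean(y)
    # pooled standard deviation and standard error of the mean difference
    sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    se = sp * np.sqrt(1 / nx + 1 / ny)
    df = nx + ny - 2
    p_lower = stats.t.sf((diff - low) / se, df)    # tests H0: diff <= low
    p_upper = stats.t.cdf((diff - high) / se, df)  # tests H0: diff >= high
    p_tost = max(p_lower, p_upper)                 # the larger one-sided p-value decides
    return diff, p_tost, p_tost < alpha

# Illustrative use with simulated data and equivalence bounds of +/- 0.5 raw units
rng = np.random.default_rng(1)
x, y = rng.normal(0, 1, 80), rng.normal(0, 1, 80)
print(tost_two_sample(x, y, -0.5, 0.5))
```

    If the larger of the two one-sided p-values falls below alpha, the observed difference is statistically smaller than the smallest effect size of interest, which is the sense in which an equivalence test supports the absence of a meaningful effect.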

    Academic Research Values: Conceptualization and Initial Steps of Measure Development

    In this paper we draw on value theory in social psychology to conceptualize the range of motives that may influence research-related attitudes, decisions, and actions of researchers. To conceptualize academic research values, we integrate theoretical insights from the personal, work, and scientific work values literature, as well as the responses of 6 interviewees and 255 survey participants about values relevant to academic research. Finally, we propose a total of 246 academic research value items spread over 11 dimensions and 36 sub-themes. We relate our conceptualization and item proposals to existing work and provide recommendations for future measurement development. Gaining a better understanding of the different values researchers hold is useful for improving scientific careers, making science attractive to a more diverse group of individuals, and elucidating some of the mechanisms leading to exemplary and questionable science.