Psychology’s Reform Movement Needs a Reconceptualization of Scientific Expertise
Science is supposed to be a self-correcting endeavor, but who is "the scientific expert" that corrects faulty science? We grouped traditional conceptualizations of expertise in psychology under three classes (substantialist, implicitist, and social conventionalist), and then examined how these approaches affect scientific self-correction in reference to various components of the credibility crisis, such as fraud and questionable research practices (QRPs), the inadequate number of replication studies, challenges facing big team science, and perverse incentives. Our investigation identified several problems with the traditional views. First, traditional views conceptualize expertise as something possessed, not performed, ignoring the epistemic responsibility of experts. Second, expertise is conceived as an exclusively individual quality, which contradicts the socially distributed nature of scientific inquiry. Third, some aspects of expertise are taken to be implicit or relative to the established research practices in a field, which leads to disputes over replicability and makes it difficult to criticize mindless scientific rituals. Lastly, a conflation of expertise with eminence in practice creates an incentive structure that undermines the goal of self-correction in science. We suggest, instead, that we conceive of an expert as a reliable informant. Following the extended virtue account of expertise, we propose a non-individualist, performance-based model, and discuss why it does not suffer from the same problems as traditional approaches and why it is more compatible with the reform movement's goal of creating a credible psychological science through self-correction.
A Falsificationist Treatment of Auxiliary Hypotheses in Social and Behavioral Sciences: Systematic Replications Framework
Auxiliary hypotheses (AHs) are indispensable in hypothesis testing, because without them the specification of testable predictions, and consequently falsification, is impossible. However, as AHs enter the test along with the main hypothesis, non-corroborative findings are ambiguous. Due to this ambiguity, AHs may also be employed to deflect falsification by providing “alternative explanations” of findings. This is not fatal to the extent that AHs are independently validated and safely relegated to background knowledge. But this is not always possible, especially in the so-called “softer” sciences, where theories are often loosely organized, measurements are noisy, and constructs are vague. The Systematic Replications Framework (SRF) provides a methodological solution by disentangling the implications of the findings for the main hypothesis and the AHs through a pre-planned series of systematically interlinked close and conceptual replications. SRF facilitates testing alternative explanations associated with different AHs and thereby increases test severity across a battery of tests. In this way, SRF assesses whether the corroboration of a hypothesis is conditional on particular AHs, and thus allows for a more objective evaluation of its empirical support and of whether post hoc modifications to the theory are progressive or degenerative in the Lakatosian sense. Finally, SRF has several advantages over randomization-based systematic replication proposals, which generally assume a problematic neo-operationalist approach that prescribes exploration-oriented strategies in confirmatory contexts.
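To make the core idea concrete, here is a minimal, purely illustrative sketch (not the authors' implementation of SRF; the study names, auxiliary hypotheses, and outcomes are invented) of how a pre-planned battery of close and conceptual replications might be tabulated to check whether corroboration tracks the main hypothesis or is conditional on particular AHs:

```python
# Illustrative sketch only: tally corroboration of a main hypothesis across a
# pre-planned battery of replications that rely on different auxiliary
# hypotheses (AHs). All labels and outcomes below are hypothetical.
from collections import defaultdict

replications = [
    # (study, type, auxiliary hypotheses assumed, main hypothesis corroborated?)
    ("R1", "close",      {"AH1: measure X is valid", "AH2: sample is attentive"}, True),
    ("R2", "close",      {"AH1: measure X is valid", "AH2: sample is attentive"}, True),
    ("R3", "conceptual", {"AH3: measure Y is valid", "AH2: sample is attentive"}, False),
    ("R4", "conceptual", {"AH3: measure Y is valid", "AH4: online setting is adequate"}, False),
]

# Count corroborations conditional on each auxiliary hypothesis.
tally = defaultdict(lambda: [0, 0])  # AH -> [corroborations, tests]
for study, kind, ahs, corroborated in replications:
    for ah in ahs:
        tally[ah][1] += 1
        tally[ah][0] += int(corroborated)

for ah, (hits, total) in sorted(tally.items()):
    print(f"{ah}: corroborated in {hits}/{total} tests")
# If support appears only under AH1, corroboration is conditional on that
# auxiliary hypothesis rather than on the main hypothesis itself.
```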
The epistemic and pragmatic function of dichotomous claims based on statistical hypothesis tests
Researchers commonly make dichotomous claims based on continuous test statistics. Many have branded the practice as a misuse of statistics and criticize scientists for the widespread application of hypothesis tests to tentatively reject a hypothesis (or not) depending on whether a p-value is below or above an alpha level. Although dichotomous claims are rarely explicitly defended, we argue they play an important epistemological and pragmatic role in science. The epistemological function of dichotomous claims consists in transforming data into quasi-basic statements, which are tentatively accepted singular facts that can corroborate or falsify theoretical claims. This transformation requires a prespecified methodological decision procedure such as Neyman-Pearson hypothesis tests. From the perspective of methodological falsificationism, these decision procedures are necessary, as probabilistic statements (e.g., continuous test statistics) cannot function as falsifiers of substantive hypotheses. The pragmatic function of dichotomous claims is to facilitate scrutiny and criticism among peers by generating contestable claims, a process referred to by Popper as “conjectures and refutations.” We speculate about how the surprisingly widespread use of a 5% alpha level might have facilitated this pragmatic function. Abandoning dichotomous claims, for example because researchers commonly misuse p-values, would sacrifice their crucial epistemic and pragmatic functions.
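A minimal sketch, assuming simulated two-group data and the conventional 5% alpha level (neither taken from the paper), of how a prespecified Neyman-Pearson-style decision rule converts a continuous p-value into a dichotomous, contestable claim:

```python
# Minimal sketch: a prespecified decision procedure turning a continuous test
# statistic into a dichotomous claim. Data and groups are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
alpha = 0.05                                   # alpha level fixed before seeing the data
control = rng.normal(loc=0.0, scale=1.0, size=50)
treatment = rng.normal(loc=0.4, scale=1.0, size=50)

t_stat, p_value = stats.ttest_ind(treatment, control)

# The continuous p-value itself is not a falsifier; the prespecified decision
# rule converts it into a quasi-basic statement that peers can contest.
if p_value < alpha:
    claim = "tentatively reject H0 (no difference between conditions)"
else:
    claim = "do not reject H0 at the prespecified alpha level"

print(f"t = {t_stat:.2f}, p = {p_value:.3f} -> {claim}")
```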
There is no generalizability crisis
Falsificationist and confirmationist approaches provide two well-established ways of evaluating generalizability. Yarkoni rejects both and invents a third approach we call neo-operationalism. His proposal cannot work for the hypothetical concepts psychologists use, because the universe of operationalizations is impossible to define, and hypothetical concepts cannot be reduced to their operationalizations. We conclude that he is wrong in his generalizability-crisis diagnosis.
Is Open Science Neoliberal?
The scientific reform movement, which is frequently referred to as open science, has the potential to substantially reshape how science is done, and for this reason its socio-political antecedents and consequences deserve serious scholarly attention. In a recently formed literature that professes to meet this need, it has been widely argued that the movement is neoliberal. However, this wide-scale attribution is hard to justify for two reasons: 1) the critics mistakenly attribute a monolithic structure to the movement, and 2) the critics' arguments associating the movement with neoliberalism are highly questionable. In particular, critics too hastily associate the movement’s preferential focus on methodological issues and its underlying philosophy of science with neoliberalism, and their allegations regarding the pro-market proclivities of the reform movement do not hold up under closer scrutiny. What is needed are more nuanced accounts of the socio-political underpinnings of scientific reform that show more respect for the complexity of the subject matter. To address this need, we propose a meta-model for the analysis of reform proposals, which represents methodology, axiology, science policy, and ideology as interconnected but relatively distinct domains, and allows for recognizing the divergent tendencies in the movement.