
    Deciding what to replicate: a decision model for replication study selection under resource and knowledge constraints

    Robust scientific knowledge is contingent upon replication of original findings. However, replicating researchers are constrained by resources and will almost always have to choose one replication effort to focus on from a set of potential candidates. To select a candidate efficiently in these cases, we need methods for deciding which of all the candidates considered would be the most useful to replicate, given some overall goal researchers wish to achieve. In this article we assume that the overall goal is to maximize the utility gained by conducting the replication study. We then propose a general rule for study selection in replication research based on the replication value of the set of claims considered for replication. The replication value of a claim is defined as the maximum expected utility we could gain by conducting a replication of the claim, and it is a function of (a) the value of being certain about the claim and (b) the uncertainty about the claim given current evidence. We formalize this definition in terms of a causal decision model, utilizing concepts from decision theory and causal graph modeling. We discuss the validity of using replication value as a measure of expected utility gain, and we suggest approaches for deriving quantitative estimates of replication value. Our goal in this article is not to define concrete guidelines for study selection, but to provide the necessary theoretical foundations on which such concrete guidelines could be built.

    Translational Abstract: Replication (redoing a study using the same procedures) is an important part of checking the robustness of claims in the psychological literature. The practice of replicating original studies has been woefully devalued for many years, but this is now changing. Recent calls for improving the quality of research in psychology have generated a surge of interest in funding, conducting, and publishing replication studies. Because many studies have never been replicated, and researchers have limited time and money to perform replication studies, researchers must decide which studies are the most important to replicate; this way, scientists learn the most given their limited resources. In this article, we lay out what it means to ask which study is the most important to replicate, and we propose a general decision rule for picking a study to replicate. That rule depends on a concept we call replication value. Replication value is a function of the importance of the study and of how uncertain we are about its findings. We explain how researchers can think precisely about the value of replication studies, discuss when and how it makes sense to use replication value as a measure of how valuable a replication study would be, and consider factors that funders, journals, or scientists could weigh when determining how valuable a replication study is.
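
    The formal details are in the article, but the core idea (the value of being certain about a claim, weighted by how uncertain current evidence leaves us) can be sketched in a few lines of Python. The Beta-posterior representation of uncertainty, the function names, and the numbers below are assumptions made for this illustration only, not the authors' causal decision model.

        def claim_uncertainty(successes, failures):
            """Uncertainty about the claim, here the variance of a
            Beta(successes + 1, failures + 1) posterior over the probability
            that the claim is true (an assumption made for this example)."""
            a, b = successes + 1, failures + 1
            return (a * b) / ((a + b) ** 2 * (a + b + 1))

        def replication_value(value_of_certainty, successes, failures):
            """Expected-utility gain from replicating: the value of being
            certain about the claim, scaled by the remaining uncertainty."""
            return value_of_certainty * claim_uncertainty(successes, failures)

        # A high-stakes claim backed by little evidence outranks a low-stakes
        # claim backed by a lot of evidence.
        print(replication_value(value_of_certainty=100.0, successes=3, failures=2))   # ~3.06
        print(replication_value(value_of_certainty=10.0, successes=40, failures=35))  # ~0.03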

    Many analysts, one data set: making transparent how variations in analytic choices affect results

    Twenty-nine teams involving 61 analysts used the same data set to address the same research question: whether soccer referees are more likely to give red cards to dark-skin-toned players than to light-skin-toned players. Analytic approaches varied widely across the teams, and the estimated effect sizes ranged from 0.89 to 2.93 (Mdn = 1.31) in odds-ratio units. Twenty teams (69%) found a statistically significant positive effect, and nine teams (31%) did not observe a significant relationship. Overall, the 29 different analyses used 21 unique combinations of covariates. Neither analysts’ prior beliefs about the effect of interest nor their level of expertise readily explained the variation in the outcomes of the analyses. Peer ratings of the quality of the analyses also did not account for the variability. These findings suggest that significant variation in the results of analyses of complex data may be difficult to avoid, even by experts with honest intentions. Crowdsourcing data analysis, a strategy in which numerous research teams are recruited to simultaneously investigate the same research question, makes transparent how defensible, yet subjective, analytic choices influence research results.
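
    As a toy illustration of how defensible analytic choices move estimates around, the Python sketch below shows two hypothetical analysts computing different odds ratios from the same invented counts, depending on whether they stratify by a made-up covariate ("league"). It is not the crowdsourced soccer data set or any team's actual analysis.

        def odds_ratio(a, b, c, d):
            """Odds ratio for a 2x2 table with cells a, b (group 1: event, no event)
            and c, d (group 2: event, no event)."""
            return (a * d) / (b * c)

        # Invented counts of (red card, no red card) for dark- vs light-skin-toned
        # players, split by a hypothetical covariate ("league").
        league_1 = {"dark": (30, 470), "light": (40, 960)}
        league_2 = {"dark": (5, 95), "light": (20, 380)}

        # Analyst A pools the leagues (no adjustment for the covariate).
        dark = tuple(map(sum, zip(league_1["dark"], league_2["dark"])))
        light = tuple(map(sum, zip(league_1["light"], league_2["light"])))
        print("Unadjusted OR:", round(odds_ratio(dark[0], dark[1], light[0], light[1]), 2))

        # Analyst B estimates the odds ratio within each league separately.
        for name, league in (("league 1", league_1), ("league 2", league_2)):
            (a, b), (c, d) = league["dark"], league["light"]
            print(f"OR within {name}:", round(odds_ratio(a, b, c, d), 2))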

    Response to Comment on “Estimating the reproducibility of psychological science”

    Gilbert et al. conclude that evidence from the Open Science Collaboration's Reproducibility Project: Psychology indicates high reproducibility, given the study methodology. Their very optimistic assessment is limited by statistical misconceptions and by causal inferences drawn from selectively interpreted, correlational data. Using the Reproducibility Project: Psychology data, both optimistic and pessimistic conclusions about reproducibility are possible, and neither is yet warranted.

    Anchoring: accessibility as a cause of judgmental assimilation

    Anchoring denotes the assimilation of a judgment toward a previously considered value (an anchor). The selective accessibility model argues that anchoring results from the selective accessibility of information compatible with an anchor. The present review shows the similarities between anchoring and knowledge accessibility effects. Both effects depend on the applicability of the accessible information, which is also used similarly. Furthermore, both knowledge accessibility and anchoring influence the time needed for the judgment, and both display temporal robustness. Finally, we offer recent evidence for the selective accessibility model and demonstrate how the model can be applied to reducing the anchoring effect.

    Anchoring effect

    Judgmental anchoring is the assimilation of an estimate toward a previously considered standard. Anchoring is a ubiquitous phenomenon that occurs in a variety of laboratory and real-world settings, and anchoring effects are remarkably robust. They may occur even if the anchor values are clearly uninformative or implausibly extreme, are sometimes independent of participants’ motivation and expertise, and may persist over long periods of time. Different underlying mechanisms may contribute to the generation of anchoring effects. Specifically, anchoring may result from insufficient adjustment, from the use of conversational inferences, from the selective accessibility of information consistent with an anchor, or from the distortion of a response scale.

    Investigating variation in replicability: A "Many Labs" replication project

    Although replication is a central tenet of science, direct replications are rare in psychology. This research tested variation in the replicability of 13 classic and contemporary effects across 36 independent samples totaling 6,344 participants. In the aggregate, 10 effects replicated consistently. One effect (imagined contact reducing prejudice) showed weak support for replicability, and two effects (flag priming influencing conservatism and currency priming influencing system justification) did not replicate. We also compared whether conditions such as lab versus online administration or US versus international samples predicted effect magnitudes; by and large, they did not. The results of this small sample of effects suggest that replicability depends more on the effect itself than on the sample and setting used to investigate the effect.
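
    A minimal sketch of the kind of moderator check described above, using invented site-level numbers rather than the Many Labs data: average the effect estimates within each level of a candidate moderator and compare.

        # Invented site-level estimates; not the Many Labs data.
        sites = [
            # (effect size, setting, sample)
            (0.45, "lab", "US"),
            (0.52, "lab", "international"),
            (0.48, "lab", "US"),
            (0.38, "online", "US"),
            (0.41, "online", "international"),
            (0.40, "online", "international"),
        ]

        def group_means(position):
            """Mean effect size within each level of the moderator stored at `position`."""
            levels = {site[position] for site in sites}
            return {level: round(sum(s[0] for s in sites if s[position] == level) /
                                 sum(1 for s in sites if s[position] == level), 3)
                    for level in levels}

        print("By setting:", group_means(1))   # lab vs online
        print("By sample: ", group_means(2))   # US vs international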

    Theory building through replication response to commentaries on the "Many Labs" replication project

    Responds to the comments made by Monin and Oppenheimer (see record 2014-37961-001), Ferguson et al. (see record 2014-38072-001), Crisp et al. (see record 2014-38072-002), and Schwarz & Strack (see record 2014-38072-003) on the current authors' original article (see record 2014-20922-002). The authors thank the commentators for their productive discussion of the Many Labs project. They entirely agree with the main theme across the commentaries: direct replication does not guarantee that the same effect was tested. As noted by Nosek and Lakens (2014, p. 137), "direct replication is the attempt to duplicate the conditions and procedure that existing theory and evidence anticipate as necessary for obtaining the effect." Attempting to do so does not guarantee success, but it does provide substantial opportunity for theoretical development building on empirical evidence.

    Seven steps toward more transparency in statistical practice

    We argue that statistical practice in the social and behavioural sciences benefits from transparency, a fair acknowledgement of uncertainty, and openness to alternative interpretations. Here, to promote such a practice, we recommend seven concrete statistical procedures: (1) visualizing data; (2) quantifying inferential uncertainty; (3) assessing data preprocessing choices; (4) reporting multiple models; (5) involving multiple analysts; (6) interpreting results modestly; and (7) sharing data and code. We discuss their benefits and limitations, and provide guidelines for adoption. Each of the seven procedures finds inspiration in Merton's ethos of science as reflected in the norms of communalism, universalism, disinterestedness, and organized scepticism. We believe that these ethical considerations, as well as their statistical consequences, establish common ground among data analysts, despite continuing disagreements about the foundations of statistical inference.

    Wagenmakers and colleagues describe seven statistical procedures that increase transparency in data analysis. These procedures highlight common ground among data analysts from different schools and find inspiration in Merton's ethos of science.
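
    Two of the recommended procedures lend themselves to a brief illustration. The Python sketch below, which uses simulated data and arbitrary cutoffs rather than anything from the paper, quantifies inferential uncertainty with a bootstrap interval (procedure 2) and reports how a preprocessing choice changes the result (procedures 3 and 4).

        import random

        random.seed(1)
        # Simulated measurements with two extreme values appended.
        data = [random.gauss(0.3, 1.0) for _ in range(200)] + [4.5, 5.2]

        def mean(xs):
            return sum(xs) / len(xs)

        # (2) Quantify inferential uncertainty: bootstrap 95% interval for the mean.
        boot = sorted(mean(random.choices(data, k=len(data))) for _ in range(2000))
        print("Mean:", round(mean(data), 3),
              "| 95% bootstrap CI:", (round(boot[50], 3), round(boot[1949], 3)))

        # (3)/(4) Assess preprocessing choices and report multiple models:
        # show how an (arbitrary) outlier cutoff changes the estimate.
        for cutoff in (None, 3.0, 2.0):
            kept = data if cutoff is None else [x for x in data if abs(x) <= cutoff]
            print(f"Outlier cutoff {cutoff}: mean = {round(mean(kept), 3)}, n = {len(kept)}")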

    Registered Replication Report: Study 1 From Finkel, Rusbult, Kumashiro, & Hannon (2002)

    Finkel, Rusbult, Kumashiro, and Hannon (2002, Study 1) demonstrated a causal link between subjective commitment to a relationship and how people responded to hypothetical betrayals of that relationship. Participants primed to think about their commitment to their partner (high commitment) reacted to the betrayals with reduced exit and neglect responses relative to those primed to think about their independence from their partner (low commitment). The priming manipulation did not affect constructive voice and loyalty responses. Although other studies have demonstrated a correlation between subjective commitment and responses to betrayal, this study provides the only experimental evidence that inducing changes to subjective commitment can causally affect forgiveness responses. This Registered Replication Report (RRR) meta-analytically combines the results of 16 new direct replications of the original study, all of which followed a standardized, vetted, and preregistered protocol. The results showed little effect of the priming manipulation on the forgiveness outcome measures; however, the priming manipulation also did not affect subjective commitment, indicating that it did not work as it had in the original study. We discuss possible explanations for the discrepancy between the findings from this RRR and the original study.
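
    For readers unfamiliar with how an RRR combines results across labs, the sketch below shows a bare-bones fixed-effect, inverse-variance pooling of invented replication estimates. Actual RRRs typically use more elaborate (often random-effects) models; this only conveys the core idea of meta-analytic aggregation.

        import math

        # Invented (effect estimate, standard error) pairs for hypothetical replication labs.
        labs = [(0.10, 0.15), (-0.05, 0.12), (0.02, 0.18), (0.07, 0.14), (-0.01, 0.16)]

        weights = [1 / se ** 2 for _, se in labs]
        pooled = sum(w * d for w, (d, _) in zip(weights, labs)) / sum(weights)
        pooled_se = math.sqrt(1 / sum(weights))

        print(f"Pooled effect: {pooled:.3f}")
        print(f"95% CI: [{pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f}]")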