6 research outputs found

    Nogo-A-deficient Transgenic Rats Show Deficits in Higher Cognitive Functions, Decreased Anxiety, and Altered Circadian Activity Patterns

    Decreased levels of Nogo-A-dependent signaling have been shown to affect behavior and cognitive functions. In Nogo-A knockout and knockdown laboratory rodents, behavioral alterations were observed, possibly corresponding to human neuropsychiatric diseases of neurodevelopmental origin, particularly schizophrenia. This study offers further insight into behavioral manifestations of Nogo-A knockdown in laboratory rats, focusing on spatial and non-spatial cognition, anxiety levels, circadian rhythmicity, and activity patterns. We demonstrate an impairment of cognitive functions and behavioral flexibility in a spatial active avoidance task, while non-spatial memory in a step-through avoidance task was spared. No signs of anhedonia, typical of schizophrenic patients, were observed in the animals. Some measures indicated lower anxiety levels in the Nogo-A-deficient group. Circadian rhythmicity in locomotor activity was preserved in the Nogo-A-deficient rats, and their circadian period (tau) did not differ from controls. However, daily activity patterns were slightly altered in the knockdown animals. We conclude that a reduction of Nogo-A levels induces changes in CNS development, manifested as subtle alterations in cognitive functions, emotionality, and activity patterns.

    Deciding what to replicate: A decision model for replication study selection under resource and knowledge constraints

    Robust scientific knowledge is contingent upon replication of original findings. However, replicating researchers are constrained by resources and will almost always have to choose one replication effort to focus on from a set of potential candidates. To select a candidate efficiently in these cases, we need methods for deciding which of all the candidates considered would be the most useful to replicate, given some overall goal researchers wish to achieve. In this article we assume that the overall goal researchers wish to achieve is to maximize the utility gained by conducting the replication study. We then propose a general rule for study selection in replication research based on the replication value of the set of claims considered for replication. The replication value of a claim is defined as the maximum expected utility we could gain by conducting a replication of the claim, and is a function of (a) the value of being certain about the claim and (b) uncertainty about the claim based on current evidence. We formalize this definition in terms of a causal decision model, utilizing concepts from decision theory and causal graph modeling. We discuss the validity of using replication value as a measure of expected utility gain, and we suggest approaches for deriving quantitative estimates of replication value. Our goal in this article is not to define concrete guidelines for study selection, but to provide the necessary theoretical foundations on which such concrete guidelines could be built.

    Translational Abstract: Replication (redoing a study using the same procedures) is an important part of checking the robustness of claims in the psychological literature. The practice of replicating original studies has been woefully devalued for many years, but this is now changing. Recent calls for improving the quality of research in psychology have generated a surge of interest in funding, conducting, and publishing replication studies. Because many studies have never been replicated, and researchers have limited time and money to perform replication studies, researchers must decide which studies are the most important to replicate. In this way, scientists learn the most given limited resources. In this article, we lay out what it means to ask which claim is the most important one to replicate, and we propose a general decision rule for picking a study to replicate. That rule depends on a concept we call replication value. Replication value is a function of the importance of the study and how uncertain we are about its findings. In this article we explain how researchers can think precisely about the value of replication studies. We then discuss when and how it makes sense to use replication value as a measure of how valuable a replication study would be, and we discuss factors that funders, journals, or scientists could consider when determining how valuable a replication study is.
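    A rough numerical illustration of the definition above may help. The sketch below ranks hypothetical candidate claims by a toy replication value proxy; the product form, the [0, 1] normalization, and the example numbers are illustrative assumptions, not the authors' formal causal decision model.

```python
# Toy proxy for "replication value": the expected utility gain from replicating
# a claim, as a function of (a) the value of being certain about the claim and
# (b) current uncertainty about it. The product form and [0, 1] scaling are
# assumptions for illustration only.

def replication_value(value_of_certainty: float, uncertainty: float) -> float:
    """Both inputs assumed normalized to [0, 1]; higher means more worth replicating."""
    return value_of_certainty * uncertainty

# Hypothetical candidate claims to choose among under resource constraints.
candidates = {
    "claim_A": {"value_of_certainty": 0.9, "uncertainty": 0.2},
    "claim_B": {"value_of_certainty": 0.5, "uncertainty": 0.8},
    "claim_C": {"value_of_certainty": 0.7, "uncertainty": 0.6},
}

for name, inputs in sorted(
    candidates.items(), key=lambda kv: replication_value(**kv[1]), reverse=True
):
    print(name, round(replication_value(**inputs), 2))
# claim_C (0.42) and claim_B (0.40) outrank claim_A (0.18): a high value of
# certainty alone does not make a claim worth replicating if uncertainty is low.
```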

    Estimating the reproducibility of psychological science

    Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
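    One of the metrics reported above, whether an original effect size falls inside the 95% confidence interval of the replication effect size, can be sketched for correlation coefficients as follows. This is a minimal sketch assuming the Fisher z method; the function names and example numbers are invented, and it is not the project's actual analysis code.

```python
# Check whether an original correlation lies inside the replication's 95% CI,
# using the Fisher z-transformation. Simplified stand-in for illustration.
import math

def fisher_ci(r: float, n: int, z_crit: float = 1.96) -> tuple[float, float]:
    """Approximate 95% CI for a correlation r estimated from n observations."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

def original_in_replication_ci(r_orig: float, r_rep: float, n_rep: int) -> bool:
    lo, hi = fisher_ci(r_rep, n_rep)
    return lo <= r_orig <= hi

# Hypothetical pair: original r = .40, replication r = .21 with n = 120.
print(original_in_replication_ci(0.40, 0.21, 120))  # False in this made-up
# example: the original effect lies outside the replication's 95% CI.
```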

    Same data, different conclusions: Radical dispersion in empirical results when independent analysts operationalize and test the same hypothesis

    In this crowdsourced initiative, independent analysts used the same dataset to test two hypotheses regarding the effects of scientists’ gender and professional status on verbosity during group meetings. Not only the analytic approach but also the operationalizations of key variables were left unconstrained and up to individual analysts. For instance, analysts could choose to operationalize status as job title, institutional ranking, citation counts, or some combination. To maximize transparency regarding the process by which analytic choices are made, the analysts used a platform we developed called DataExplained to justify both preferred and rejected analytic paths in real time. Analyses lacking sufficient detail or reproducible code, or containing statistical errors, were excluded, resulting in 29 analyses in the final sample. Researchers reported radically different analyses and dispersed empirical outcomes, in a number of cases obtaining significant effects in opposite directions for the same research question. A Boba multiverse analysis demonstrates that decisions about how to operationalize variables explain variability in outcomes above and beyond statistical choices (e.g., covariates). Subjective researcher decisions play a critical role in driving the reported empirical results, underscoring the need for open data, systematic robustness checks, and transparency regarding both analytic paths taken and not taken. Implications for organizations and leaders, whose decision making relies in part on scientific findings, consulting reports, and internal analyses by data scientists, are discussed.
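    The distinction drawn above between operationalization choices and purely statistical choices can be illustrated with a minimal multiverse-style enumeration. The data, choice sets, and estimator below are invented for illustration; this is not the study's analysis code or the Boba tool itself.

```python
# Enumerate a tiny "multiverse": every combination of an operationalization
# choice and a statistical choice defines one analytic path with its own estimate.
import itertools
import math
import random
import statistics

random.seed(0)

# Synthetic records: a verbosity outcome plus two candidate status proxies.
records = [
    {
        "words": max(50.0, random.gauss(500, 150)),
        "job_rank": float(random.randint(1, 5)),
        "citations": random.expovariate(1 / 2000),
    }
    for _ in range(200)
]

# Decision 1: how to operationalize "professional status".
operationalizations = {
    "job_rank": lambda r: r["job_rank"],
    "citations": lambda r: r["citations"],
}
# Decision 2: a purely statistical choice (outcome transform).
outcome_transforms = {"raw": lambda y: y, "log": math.log}

# One estimate per analytic path; dispersion across paths is the multiverse result.
for (op_name, op), (tr_name, tr) in itertools.product(
    operationalizations.items(), outcome_transforms.items()
):
    status = [op(r) for r in records]
    outcome = [tr(r["words"]) for r in records]
    r_est = statistics.correlation(status, outcome)
    print(f"{op_name:9s} x {tr_name:3s} -> r = {r_est:+.3f}")
```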

    Theory building through replication: Response to commentaries on the "Many Labs" replication project

    Responds to the comments made by Monin and Oppenheimer (see record 2014-37961-001), Ferguson et al. (see record 2014-38072-001), Crisp et al. (see record 2014-38072-002), and Schwarz & Strack (see record 2014-38072-003) on the current authors' original article (see record 2014-20922-002). The current authors thank the commentators for their productive discussion of the Many Labs project. They entirely agree with the main theme across the commentaries: direct replication does not guarantee that the same effect was tested. As noted by Nosek and Lakens (2014, p. 137), "direct replication is the attempt to duplicate the conditions and procedure that existing theory and evidence anticipate as necessary for obtaining the effect." Attempting to do so does not guarantee success, but it does provide substantial opportunity for theoretical development building on empirical evidence.