61 research outputs found

    Striving for transparent and credible research: practical guidelines for behavioral ecologists

    Science is meant to be the systematic and objective study of the world, but evidence suggests that scientific practices sometimes fall short of this expectation. In this invited idea, we argue that any failure to conduct research according to a documented plan (lack of reliability), and/or any failure to ensure that re-conducting the same project would provide the same finding (lack of reproducibility), will result in a low probability of independent studies reaching the same outcome (lack of replicability). After outlining the challenges facing behavioral ecology and science more broadly, and incorporating advice from international organizations such as the Center for Open Science (COS), we present clear guidelines and tutorials on what we think open practices represent for behavioral ecologists. In addition, we indicate some of the currently most appropriate and freely available tools for adopting these practices. Finally, we suggest that all journals in our field, such as Behavioral Ecology, give additional weight to transparent studies and thereby provide greater incentives to align our scientific practices with our scientific values. Overall, we argue that producing demonstrably credible science is now fully achievable, for the benefit of each researcher individually and for our community as a whole.

    Toward Greater Reproducibility of Undergraduate Behavioral Science Research

    Reproducibility crises have arisen in psychology and other behavioral sciences, spurring efforts to ensure research findings are credible and replicable. Although reforms are occurring at professional levels in terms of new publication parameters and open science initiatives, the credibility and reproducibility of undergraduate research also deserve attention. Undergraduate behavioral science research projects that rely on small convenience samples of participants, overuse hypothesis testing for drawing meaning from data, and engage in opaque statistical computing are vulnerable to producing nonreproducible findings. These vulnerabilities are reviewed, and practical recommendations for improving the credibility and reproducibility of undergraduate behavioral science research are offered.

    Suppression weakens unwanted memories via a sustained reduction of neural reactivation

    Aversive events sometimes turn into intrusive memories. However, prior evidence indicates that such memories can be controlled via a mechanism of retrieval suppression. Here, we test the hypothesis that suppression exerts a sustained influence on memories by deteriorating their neural representations. This deterioration, in turn, would hinder their subsequent reactivation and thus impoverish the vividness with which they can be recalled. In an fMRI study, participants repeatedly suppressed memories of aversive scenes. As predicted, this process rendered the memories less vivid. Using a pattern classifier, we observed that suppression diminished the neural reactivation of scene information both globally across the brain and locally in the parahippocampal cortices. Moreover, the decline in vividness was associated with reduced reinstatement of unique memory representations in right parahippocampal cortex. These results support the hypothesis that suppression weakens memories by causing a sustained reduction in the potential to reactivate their neural representations.

    Transparent, Open, and Reproducible Prevention Science

    The field of prevention science aims to understand societal problems, identify effective interventions, and translate scientific evidence into policy and practice. There is growing interest among prevention scientists in the potential for transparency, openness, and reproducibility to facilitate this mission by providing opportunities to align scientific practice with scientific ideals, accelerate scientific discovery, and broaden access to scientific knowledge. The overarching goal of this manuscript is to serve as a primer introducing and providing an overview of open science for prevention researchers. In this paper, we discuss factors motivating interest in transparency and reproducibility, research practices associated with open science, and stakeholders engaged in and impacted by open science reform efforts. In addition, we discuss how and why different types of prevention research could incorporate open science practices, as well as ways that prevention science tools and methods could be leveraged to advance the wider open science movement. To promote further discussion, we conclude with potential reservations and challenges for the field of prevention science to address as it transitions to greater transparency, openness, and reproducibility. Throughout, we identify activities that aim to strengthen the reliability and efficiency of prevention science, facilitate access to its products and outputs, and promote collaborative and inclusive participation in research activities. By embracing principles of transparency, openness, and reproducibility, prevention science can better achieve its mission to advance evidence-based solutions that promote individual and collective well-being.

    Software engineering principles to improve quality and performance of R software

    Today’s computational researchers are expected to be highly proficient in using software to solve problems ranging from processing large datasets to developing personalized treatment strategies from a growing range of options. Researchers are well versed in their own field, but may lack formal training and appropriate mentorship in software engineering principles. Two major themes not covered in most university coursework or in the current literature are software testing and software optimization. Through a survey of all currently available Comprehensive R Archive Network packages, we show that reproducible and replicable software tests are frequently not available, and that many packages do not appear to employ software performance and optimization tools and techniques. Through use of examples from an existing R package, we demonstrate powerful testing and optimization techniques that can improve the quality of any researcher’s software.
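    The two themes this abstract highlights, reproducible software tests and performance measurement, can be sketched generically (in Python rather than R, purely for illustration; the function names below are hypothetical and not drawn from the surveyed packages). A unit-style assertion pins down expected behavior so the test yields the same verdict on every run, and a micro-benchmark compares a naive implementation against an optimized one.

```python
import timeit

def mean_naive(values):
    # Naive running-sum implementation: correct, but loops in pure Python.
    total = 0.0
    for v in values:
        total += v
    return total / len(values)

def mean_builtin(values):
    # Optimized variant delegating the loop to the C-level built-in sum().
    return sum(values) / len(values)

# Reproducible software tests: the same inputs must always give the same output,
# and the optimized variant must agree with the reference implementation.
assert mean_naive([1.0, 2.0, 3.0]) == 2.0
assert mean_naive([1.0, 2.0, 3.0]) == mean_builtin([1.0, 2.0, 3.0])

# A simple performance measurement, the kind of check the survey found missing.
data = list(range(100_000))
t_naive = timeit.timeit(lambda: mean_naive(data), number=10)
t_builtin = timeit.timeit(lambda: mean_builtin(data), number=10)
print(f"naive: {t_naive:.4f}s, builtin: {t_builtin:.4f}s")
```

    In R, the analogous tools would be a testing framework such as testthat and timing utilities such as `system.time()` or profiling via `Rprof()`.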

    System Dynamics of Cognitive Vulnerabilities and Family Support Among Latina Children and Adolescents

    The paper describes a data-driven approach to developing a feedback theory of cognitive vulnerabilities and family support, focused on understanding the dynamics experienced among Latina children, adolescents, and families. Family support is understood to be a response to avoidant and maladaptive behaviors that may be characteristic of cognitive vulnerabilities commonly associated with depression and suicidal ideation. A formal feedback theory is developed, appraised, and analyzed using a combination of secondary analysis of qualitative interviews (N = 30) and quantitative analysis using system dynamics modeling and simulation. Implications for prevention practice, treatment, and future research are discussed.
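    The system dynamics method the abstract refers to can be sketched in miniature (this toy model, its parameters, and its variable names are invented for illustration and are not the authors' fitted model): two stocks are linked in a balancing feedback loop, where vulnerability mobilizes support and support in turn dampens vulnerability, integrated with simple Euler steps.

```python
def simulate(steps=100, dt=0.1, growth=0.3, support_gain=0.5):
    """Toy two-stock feedback model integrated with Euler's method."""
    vulnerability = 1.0   # stock: cognitive vulnerability level
    support = 0.0         # stock: family support mobilized in response
    history = []
    for _ in range(steps):
        # Balancing loop: support rises in response to vulnerability,
        # and higher support reduces the growth of vulnerability.
        d_vuln = growth * vulnerability - support_gain * support
        d_supp = 0.2 * vulnerability - 0.1 * support
        vulnerability += d_vuln * dt
        support += d_supp * dt
        history.append(vulnerability)
    return history

trajectory = simulate()
print(f"final vulnerability: {trajectory[-1]:.3f}")
```

    In practice such models are calibrated against data (here, the qualitative interviews) and explored by varying parameters, rather than run once with hand-picked constants.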

    From the Wet Lab to the Web Lab: A Paradigm Shift in Brain Imaging Research

    Web technology has transformed our lives, and has led to a paradigm shift in the computational sciences. As the neuroimaging informatics research community amasses large datasets to answer complex neuroscience questions, we find that the web is the best medium to facilitate novel insights by way of improved collaboration and communication. Here, we review the landscape of web technologies used in neuroimaging research, and discuss future applications, areas for improvement, and the limitations of using web technology in research. Fully incorporating web technology in our research lifecycle requires not only technical skill, but a widespread culture change; a shift from the small, focused “wet lab” to a multidisciplinary and largely collaborative “web lab.”

    Improving Science That Uses Code

    As code is now an inextricable part of science, it should be supported by competent Software Engineering, analogously to statistical claims being properly supported by competent statistics. If and when code avoids adequate scrutiny, science becomes unreliable and unverifiable because results — text, data, graphs, images, etc. — depend on untrustworthy code. Currently, scientists rarely assure the quality of the code they rely on, and rarely make it accessible for scrutiny. Even when it is available, scientists rarely provide adequate documentation to understand or use it reliably. This paper proposes and justifies ways to improve science using code:
    1. Professional Software Engineers can help, particularly in critical fields such as public health, climate change, and energy.
    2. “Software Engineering Boards,” analogous to Ethics or Institutional Review Boards, should be instigated and used.
    3. The Reproducible Analytic Pipeline (RAP) methodology can be generalized to cover code and Software Engineering methodologies, in a generalization this paper introduces called RAP+. RAP+ (or comparable interventions) could be supported and/or even required in journal, conference, and funding body policies.
    The paper’s Supplemental Material provides a summary of Software Engineering best practice relevant to scientific research, including further suggestions for RAP+ workflows.
    “Science is what we understand well enough to explain to a computer.” — Donald E. Knuth, in A=B [1]
    “I have to write to discover what I am doing.” — Flannery O’Connor, quoted in Write for Your Life [2]
    “Criticism is the mother of methodology.” — Robert P. Abelson, in Statistics as Principled Argument [3]
    “From its earliest times, science has operated by being open and transparent about methods and evidence, regardless of which technology has been in vogue.” — Editorial in Nature [4]
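    The spirit of a Reproducible Analytic Pipeline can be sketched minimally (the stage names, input format, and exclusion rule below are hypothetical, not the paper's RAP+ specification): each analysis stage is a small pure function, the stages are chained explicitly into one inspectable pipeline, and an automated end-to-end test lets anyone re-run the analysis and verify the published result.

```python
def load(raw):
    # Stage 1: parse raw text records into numbers, skipping blank lines.
    return [float(line) for line in raw.splitlines() if line.strip()]

def clean(values, lower=0.0):
    # Stage 2: documented, rule-based exclusion rather than ad hoc edits.
    return [v for v in values if v >= lower]

def summarize(values):
    # Stage 3: compute the reported statistic.
    return sum(values) / len(values)

def pipeline(raw):
    # The full analysis as one explicit, re-runnable function.
    return summarize(clean(load(raw)))

# Automated end-to-end test: re-running the pipeline on the same input
# must reproduce exactly the same result.
raw_data = "1.0\n-2.0\n3.0\n\n5.0\n"
assert pipeline(raw_data) == 3.0
```

    The point is not the arithmetic but the structure: because every transformation is explicit code under test, the path from raw data to reported result is open to the scrutiny the paper calls for.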