
    Announcing OpenCon 2016: catalyzing collective action for a more open scholarly system

    Each year OpenCon brings together students and early-career academic professionals from around the world to advance Open Access, Open Education and Open Data. Nick Shockey and Joseph McArthur here announce the dates for the next OpenCon. In addition, Chris Hartgerink looks back at OpenCon 2015 and reflects on how the conference became the catalyst for a variety of deliberate actions around scholarly communication.

    Detection of data fabrication using statistical tools

    Scientific misconduct potentially invalidates findings in many scientific fields. Improved detection of unethical practices such as data fabrication is thought to deter them. In two studies, we investigated the diagnostic performance of various statistical methods to detect fabricated quantitative data from psychological research. In Study 1, we tested the validity of statistical methods to detect fabricated data at the study level using summary statistics. Using (arguably) genuine data from the Many Labs 1 project on the anchoring effect (k=36) and fabricated data for the same effect by our participants (k=39), we tested the validity of our newly proposed 'reversed Fisher method', variance analyses, and extreme effect sizes, as well as a combination of these three indicators using the original Fisher method. Results indicate that the variance analyses perform fairly well when the homogeneity of population variances is accounted for, and that extreme effect sizes perform similarly well in distinguishing genuine from fabricated data. The performance of the 'reversed Fisher method' was poor and depended on the types of tests included. In Study 2, we tested the validity of statistical methods to detect fabricated data using raw data. Using (arguably) genuine data from the Many Labs 3 project on the classic Stroop task (k=21) and fabricated data for the same effect by our participants (k=28), we investigated the performance of digit analyses, variance analyses, multivariate associations, and extreme effect sizes, as well as a combination of these four methods using the original Fisher method. Results indicate that variance analyses, extreme effect sizes, and multivariate associations perform fairly well to excellently in detecting fabricated data in raw data, while digit analyses perform at chance levels. The two studies provide mixed results on how the use of random number generators affects the detection of data fabrication. Ultimately, we consider variance analyses, effect sizes, and multivariate associations valuable tools to detect potential data anomalies in empirical (summary or raw) data. However, we argue against widespread (possibly automated) application of these tools, because some fabricated data may be irregular in one aspect but not in another. Considering how violations of the assumptions of fabrication detection methods may yield high false positive or false negative probabilities, we recommend comparing potentially fabricated data to genuine data on the same topic.
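
    For readers unfamiliar with the original Fisher method mentioned above for combining the separate fabrication indicators, the sketch below shows the standard calculation (a chi-square statistic built from summed log p-values). It is a minimal illustration only: the 'reversed Fisher method' proposed in the abstract is not reproduced, and the function name and example values are assumptions, not material from the studies.

```python
# Minimal sketch of the classical Fisher method for combining independent
# p-values, as used above to aggregate individual indicators of fabrication.
# The 'reversed' variant from the paper is not reproduced here; the function
# name and example p-values are illustrative assumptions.
import numpy as np
from scipy import stats

def fisher_combine(p_values):
    """Combine independent p-values into a single chi-square test."""
    p = np.asarray(p_values, dtype=float)
    chi2 = -2.0 * np.sum(np.log(p))   # Fisher's test statistic
    df = 2 * len(p)                   # two degrees of freedom per p-value
    return stats.chi2.sf(chi2, df)    # combined (upper-tail) p-value

# Example: three indicator p-values for one suspect data set
print(round(fisher_combine([0.01, 0.20, 0.03]), 4))
```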

    Postprint "Who believes in the storybook image of the scientist?" accepted for publication in Accountability in Research

    Do lay people and scientists themselves recognize that scientists are human and therefore prone to human fallibilities such as error, bias, and even dishonesty? In a series of three experimental studies and one correlational study (total N = 3,278), we found that the ‘storybook image of the scientist’ is pervasive: American lay people and scientists from over 60 countries attributed considerably more objectivity, rationality, open-mindedness, intelligence, integrity, and communality to scientists than to other highly educated people. Moreover, scientists perceived even larger differences than lay people did. Some groups of scientists also differentiated between categories of scientists: established scientists attributed higher levels of the scientific traits to established scientists than to early-career scientists and PhD students, and higher levels to PhD students than to early-career scientists. Female scientists attributed considerably higher levels of the scientific traits to female scientists than to male scientists. A strong belief in the storybook image and the (human) tendency to attribute higher levels of desirable traits to people in one’s own group than to people in other groups may decrease scientists’ willingness to adopt recently proposed practices to reduce error, bias, and dishonesty in science.

    The validity of the tool “statcheck” in discovering statistical reporting inconsistencies

    The R package “statcheck” (Epskamp & Nuijten, 2016) is a tool to extract statistical results from articles and check whether the reported p-value matches the accompanying test statistic and degrees of freedom. A previous study showed high interrater reliabilities between statcheck and manual coding of inconsistencies (.76 to .89; Nuijten, Hartgerink, Van Assen, Epskamp, & Wicherts, 2016). Here we present an additional, detailed study of the validity of statcheck. In Study 1, we calculated its sensitivity and specificity. We found that statcheck’s sensitivity (true positive rate) and specificity (true negative rate) were high: between 85.3% and 100%, and between 96.0% and 100%, respectively, depending on the assumptions and settings. The overall accuracy of statcheck ranged from 96.2% to 99.9%. In Study 2, we investigated statcheck’s ability to deal with statistical corrections for multiple testing or violations of assumptions in articles. We found that the prevalence of corrections for multiple testing or violations of assumptions in psychology was higher than we initially estimated in Nuijten et al. (2016). Although we found numerous reporting inconsistencies in results corrected for violations of the sphericity assumption, we demonstrate that inconsistencies associated with statistical corrections are not what drives the high estimates of the prevalence of statistical reporting inconsistencies in psychology.
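
    As a rough illustration of the kind of consistency check described above, the sketch below recomputes the p-value implied by a reported t-statistic and its degrees of freedom and compares it, at the reported precision, with the reported p-value. This is not the statcheck package itself; the function name, defaults, and example numbers are assumptions for illustration only.

```python
# Illustrative sketch of the consistency check described above: recompute the
# p-value implied by a reported t-test and compare it with the reported value.
# Not the statcheck package; names, defaults and the example are assumptions.
from scipy import stats

def check_t_result(t_value, df, reported_p, decimals=2, two_tailed=True):
    """Return (recomputed_p, consistent) for a reported t-test result."""
    p = stats.t.sf(abs(t_value), df)      # upper-tail probability
    if two_tailed:
        p *= 2                            # convert to a two-tailed p-value
    consistent = round(p, decimals) == round(reported_p, decimals)
    return p, consistent

# Example: a result reported as "t(28) = 2.20, p = .04"
recomputed_p, ok = check_t_result(2.20, 28, 0.04)
print(f"recomputed p = {recomputed_p:.3f}, consistent = {ok}")
```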

    The academic, economic and societal impacts of Open Access: an evidence-based review

    Ongoing debates surrounding Open Access to the scholarly literature are multifaceted and complicated by disparate and often polarised viewpoints from engaged stakeholders. At the current stage, Open Access has become such a global issue that it is critical for all involved in scholarly publishing, including policymakers, publishers, research funders, governments, learned societies, librarians, and academic communities, to be well informed on the history, benefits, and pitfalls of Open Access. In spite of this, there is a general lack of consensus regarding the potential pros and cons of Open Access at multiple levels. This review aims to be a resource for current knowledge on the impacts of Open Access by synthesizing important research in three major areas: academic, economic and societal. While there is clearly much scope for additional research, several key trends are identified, including a broad citation advantage for researchers who publish openly, as well as additional benefits to the non-academic dissemination of their work. The economic impact of Open Access is less well understood, although it is clear that access to the research literature is key for innovative enterprises and a range of governmental and non-governmental services. Furthermore, Open Access has the potential to save both publishers and research funders considerable amounts of financial resources, and can provide some economic benefits to traditionally subscription-based journals. The societal impact of Open Access is strong, in particular for advancing citizen science initiatives and levelling the playing field for researchers in developing countries. Open Access supersedes all potential alternative modes of access to the scholarly literature by enabling unrestricted re-use and long-term stability independent of the financial constraints of traditional publishers that impede knowledge sharing. However, Open Access has the potential to become unsustainable for research communities if high-cost options are allowed to continue to prevail in a largely unregulated scholarly publishing market. Open Access remains only one of the multiple challenges that the scholarly publishing system is currently facing. Yet, it provides one foundation for increasing engagement with researchers regarding ethical standards of publishing and the broader implications of 'Open Research'.

    The Open Access Journals Toolkit

    Contents:
    Getting Started: Scope, aims and focus • Choosing a title for your journal • Types of content accepted • Kick-off and ongoing funding • Disciplinary considerations • Journal setup checklist and timeline
    Running a journal: Article selection criteria • Publication frequency and journal issues • Attracting authors • Peer review and quality assurance • The costs of running an online open access journal • Running a journal in a local or regional language • Flipping a journal to open access
    Indexing: Building and maintaining a profile • Journal and article indexing • Search engine optimisation and technical improvements • Journal and article level metrics
    Staffing: Roles and responsibilities • Recruiting journal staff • Building an editorial board • Training and staff development
    Policies: Developing author guidelines • Publication ethics and related editorial policies • Compliance with funder policies and mandates • Copyright and licensing • Displaying licensing information • Corrections and retractions
    Infrastructure: Software and technical infrastructure • Journal appearance and web design • Article and journal metadata • Structured content • Persistent Identifiers
    About the Open Access Journals Toolkit: About • What is an open access journal? • Frequently asked questions • Glossary • Further reading

    Do not trust science: Verify it

    No full text
