39 research outputs found

    Registered reports: an early example and analysis

    © 2019 Wiseman et al. The recent ‘replication crisis’ in psychology has focused attention on ways of increasing methodological rigor within the behavioral sciences. Part of this work has involved promoting ‘Registered Reports’, wherein journals peer review papers prior to data collection and publication. Although this approach is usually seen as a relatively recent development, we note that a prototype of this publishing model was initiated in the mid-1970s by parapsychologist Martin Johnson in the European Journal of Parapsychology (EJP). A retrospective and observational comparison of Registered and non-Registered Reports published in the EJP during a seventeen-year period provides circumstantial evidence to suggest that the approach helped to reduce questionable research practices. This paper aims both to bring Johnson’s pioneering work to a wider audience, and to investigate the positive role that Registered Reports may play in helping to promote higher methodological and statistical standards. Peer reviewed.



    LawArXiv : An open access community for legal scholarship

    The LawArXiv repository was developed jointly by the Legal Information Preservation Alliance, the Mid-American Law Library Consortium, NELLCO, and the Cornell Law Library, with the Center for Open Science (COS) providing the technological infrastructure via its Open Science Framework (OSF). The COS platform also serves as a preprint service, allowing organizations to control their branding, licensing requirements, and taxonomy. LawArXiv will accept preprints and postprints where the author holds the copyright on their work.

    Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency

    Beginning January 2014, Psychological Science gave authors the opportunity to signal open data and materials if they qualified for badges that accompanied published articles. Before badges, less than 3% of Psychological Science articles reported open data. After badges, 23% reported open data, with an accelerating trend; 39% reported open data in the first half of 2015, an increase of more than an order of magnitude from baseline. There was no change over time in the low rates of data sharing among comparison journals. Moreover, reporting openness does not guarantee openness. When badges were earned, reportedly available data were more likely to be actually available, correct, usable, and complete than when badges were not earned. Open materials also increased, though to a weaker degree, and there was more variability among comparison journals. Badges are simple, effective signals to promote open practices and improve preservation of data and materials by using independent repositories.

    Ensuring the quality and specificity of preregistrations

    Researchers face many, often seemingly arbitrary, choices in formulating hypotheses, designing protocols, collecting data, analyzing data, and reporting results. Opportunistic use of “researcher degrees of freedom” aimed at obtaining statistical significance increases the likelihood of obtaining and publishing false-positive results and overestimated effect sizes. Preregistration is a mechanism for reducing such degrees of freedom by specifying designs and analysis plans before observing the research outcomes. The effectiveness of preregistration may depend, in part, on whether the process facilitates sufficiently specific articulation of such plans. In this preregistered study, we compared 2 formats of preregistration available on the OSF: Standard Pre-Data Collection Registration and Prereg Challenge Registration (now called “OSF Preregistration,” http://osf.io/prereg/). The Prereg Challenge format was a “structured” workflow with detailed instructions and an independent review to confirm completeness; the “Standard” format was “unstructured” with minimal direct guidance to give researchers flexibility for what to prespecify. Results of comparing random samples of 53 preregistrations from each format indicate that the “structured” format restricted the opportunistic use of researcher degrees of freedom better (Cliff’s Delta = 0.49) than the “unstructured” format, but neither eliminated all researcher degrees of freedom. We also observed very low concordance among coders about the number of hypotheses (14%), indicating that they are often not clearly stated. We conclude that effective preregistration is challenging, and registration formats that provide effective guidance may improve the quality of research. Data Availability: The data and materials for this study are available at https://osf.io/fgc9k/ and the study was preregistered and is available at https://osf.io/k94ve/. 
An earlier version of this manuscript appeared as Chapter 6 of the preprint of the doctoral thesis of the first author (DOI 10.31234/osf.io/g8cjq). The preprint of the current version of this manuscript (DOI 10.31234/osf.io/cdgyh) is available at https://psyarxiv.com/cdgyh
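The effect size reported above (Cliff’s Delta = 0.49) is a nonparametric pairwise statistic: the probability that a score from one group exceeds a score from the other, minus the reverse. As a minimal illustrative sketch (not the authors’ code, and with made-up scores rather than the study’s data):

```python
def cliffs_delta(xs, ys):
    """Cliff's delta: P(X > Y) - P(X < Y), computed over all pairs (x, y)."""
    pairs = len(xs) * len(ys)
    greater = sum(1 for x in xs for y in ys if x > y)
    less = sum(1 for x in xs for y in ys if x < y)
    return (greater - less) / pairs

# Illustrative coder ratings only -- not the study's data.
structured = [3, 4, 4, 5]
unstructured = [2, 3, 3, 4]
print(cliffs_delta(structured, unstructured))
```

A delta of 0 indicates complete overlap between the two groups; +1 or -1 indicates complete separation.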


    Metadata record for: Aachen-Heerlen annotated steel microstructure dataset

    This dataset contains key characteristics about the data described in the Data Descriptor Aachen-Heerlen annotated steel microstructure dataset. Contents: 1. human-readable metadata summary table in CSV format; 2. machine-readable metadata file in JSON format.

    Aachen-Heerlen Annotated Steel Microstructure Dataset

    This dataset consists of TIFF and PNG microscopy images of martensite-austenite structures on steel surfaces. Accompanying files include expert annotations of these structures that are associated with the images in the dataset. Specifically, steel specimen metadata, expert point-of-interest (POI) coordinates, and expert polygon drawings around martensite-austenite structures are included. Computed morphological characteristics are also provided. CSV files include POI and polygon coordinates associated with each image. Pickle files can be loaded with Python as Pandas DataFrames and easily visualized and processed. Accompanying software code can be found at GitHub https://doi.org/10.5281/zenodo.407555
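The pickle files described above load directly as Pandas DataFrames. A minimal self-contained sketch of that round trip; the file name and column names here ("image", "x", "y") are illustrative stand-ins, not taken from the dataset:

```python
import pandas as pd

# Build a tiny stand-in table resembling the POI coordinate records,
# write it as a pickle file, then load it back as a DataFrame.
poi = pd.DataFrame({"image": ["img_001.tif"], "x": [512.0], "y": [384.0]})
poi.to_pickle("poi_demo.pkl")            # stand-in for a dataset pickle file
loaded = pd.read_pickle("poi_demo.pkl")  # loads directly as a DataFrame
print(loaded)
```

From here, the coordinates can be filtered, joined with the specimen metadata, or plotted over the corresponding image with the usual DataFrame operations.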