
    Exploration of reproducibility issues in scientometric research Part 1: Direct reproducibility

    This is the first part of a small-scale exploratory study that begins to assess reproducibility issues specific to scientometric research. The effort is motivated by the desire to generate empirical data to inform debates about reproducibility in scientometrics. Rather than attempting to reproduce studies, we explore how "in principle" reproducibility might be assessed through a critical review of the content of published papers. This first part focuses on direct reproducibility - that is, the ability to reproduce the specific evidence produced by an original study using the same data, methods, and procedures. The second part (Velden et al. 2018) is dedicated to conceptual reproducibility - that is, the robustness of knowledge claims under verification by an alternative approach using different data, methods, and procedures. The study is exploratory: it investigates only a very limited number of publications and serves to develop instruments for identifying potential reproducibility issues in published studies, namely a categorization of study types and a taxonomy of threats to reproducibility. We work with a select sample of five publications in scientometrics covering a range of study types of theoretical, methodological, and empirical nature. Based on observations made during our exploratory review, we conclude this paper with open questions on how to approach and assess the status of direct reproducibility in scientometrics, intended for discussion at the special track on "Reproducibility in Scientometrics" at STI2018 in Leiden.

    Exploration of Reproducibility Issues in Scientometric Research Part 2: Conceptual Reproducibility

    This is the second part of a small-scale exploratory study that begins to assess reproducibility issues specific to scientometric research. The effort is motivated by the desire to generate empirical data to inform debates about reproducibility in scientometrics. Rather than attempting to reproduce studies, we explore how "in principle" reproducibility might be assessed through a critical review of the content of published papers. While the first part of the study (Waltman et al. 2018) focuses on direct reproducibility - that is, the ability to reproduce the specific evidence produced by an original study using the same data, methods, and procedures - this second part is dedicated to conceptual reproducibility - that is, the robustness of knowledge claims under verification by an alternative approach using different data, methods, and procedures. The study is exploratory: it investigates only a very limited number of publications and serves to develop instruments for identifying potential reproducibility issues in published studies, namely a categorization of study types and a taxonomy of threats to reproducibility. We work with a select sample of five publications in scientometrics covering a range of study types of theoretical, methodological, and empirical nature. Based on observations made during our exploratory review, we conclude with open questions on how to approach and assess the status of conceptual reproducibility in scientometrics, intended for discussion at the special track on "Reproducibility in Scientometrics" at STI2018 in Leiden.

    Exploration of reproducibility issues in scientometric research

    The (lack of) reproducibility of published research results has recently come under close scrutiny in some fields of science (see e.g. Flier 2017 for a discussion of the biosciences, and e.g. Open Science Collaboration 2015 and Pashler & Harris 2012 for assessments of the situation in psychology). Aside from genuine error or fraud, theoretical investigations (e.g. Ioannidis 2005) and empirical investigations (e.g. John et al. 2012) identify the use of questionable research methods, the overselling of results by overstating claims, and publication bias (the tendency to select positive results over negative results for publication) as further sources of the irreproducibility of published results.