
    Impact Factor: outdated artefact or stepping-stone to journal certification?

    A review of Garfield's journal impact factor and its specific implementation as the Thomson Reuters Impact Factor reveals several weaknesses in this commonly used indicator of journal standing. Key limitations include the mismatch between citing and cited documents, the deceptive display of three decimals that belies the real precision, and the absence of confidence intervals. These are minor issues that are easily amended and should be corrected, but more substantive improvements are needed. There are indications that the scientific community seeks and needs better certification of journal procedures to improve the quality of published science. Comprehensive certification of editorial and review procedures could help ensure adequate procedures to detect duplicate and fraudulent submissions. (Comment: 25 pages, 12 figures, 6 tables)
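    The abstract's complaint about three-decimal precision without confidence intervals can be made concrete with a percentile bootstrap interval around a journal's mean citations per article. This is only an illustrative sketch, not the authors' proposal; the citation counts below are invented toy data.

```python
import random

def bootstrap_ci(citations, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean citations
    per article (the quantity an impact factor reports to three decimals).
    `citations` is a list of per-article citation counts (toy data here)."""
    rng = random.Random(seed)
    n = len(citations)
    # Resample with replacement and collect the resampled means.
    means = sorted(
        sum(rng.choices(citations, k=n)) / n for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Toy data: citation counts for one journal's articles in a two-year window.
counts = [0, 0, 1, 1, 2, 2, 3, 4, 5, 8, 12, 30]
low, high = bootstrap_ci(counts)
```

    With skewed citation data like this, the interval is wide, which is exactly why a point value quoted to three decimals overstates the real precision.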

    Scholarly Metrics Baseline: A Survey of Faculty Knowledge, Use, and Opinion About Scholarly Metrics

    This article presents the results of a faculty survey conducted at the University of Vermont during academic year 2014-2015. The survey asked faculty about: familiarity with scholarly metrics, metric seeking habits, help seeking habits, and the role of metrics in their department’s tenure and promotion process. The survey also gathered faculty opinions on how well scholarly metrics reflect the importance of scholarly work and how faculty feel about administrators gathering institutional scholarly metric information. Results point to the necessity of understanding the campus landscape of faculty knowledge, opinions, and use of scholarly metrics, and the importance faculty assign to them, before engaging faculty in further discussions about quantifying the impact of their scholarly work.

    The citation merit of scientific publications

    We propose a new method to assess the merit of any set of scientific papers in a given field based on the citations they receive. Given a citation indicator, such as the mean citation or the h-index, we identify the merit of a given set of n articles with the probability that a randomly drawn sample of n articles from a reference set of articles in that field presents a lower citation index. The method allows for comparisons between research units of different sizes and fields. Using a dataset acquired from Thomson Scientific that contains the articles published in the periodical literature in the period 1998-2007, we show that the novel approach yields rankings of research units different from those obtained by a direct application of the mean citation or the h-index. Keywords: citation analysis, citation merit, mean citation, h-index.
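    The probability the abstract describes can be estimated by Monte Carlo: repeatedly draw same-sized samples from the field's reference set and count how often their indicator falls below the unit's. A minimal sketch using the mean citation as the indicator; the citation counts are invented for illustration.

```python
import random

def citation_merit(unit, reference, indicator=lambda xs: sum(xs) / len(xs),
                   n_draws=20_000, seed=0):
    """Estimate the merit of a unit's articles as the probability that a
    randomly drawn same-sized sample from the field's reference set has a
    strictly lower citation indicator (here: the mean citation)."""
    rng = random.Random(seed)
    n = len(unit)
    target = indicator(unit)
    # Count reference samples whose indicator is below the unit's.
    lower = sum(
        indicator(rng.sample(reference, n)) < target for _ in range(n_draws)
    )
    return lower / n_draws

# Illustrative data: a unit's citation counts vs. its field's reference set.
unit = [10, 14, 9, 22]
field = [0, 1, 1, 2, 3, 3, 4, 5, 6, 8, 10, 12, 15, 20, 40]
merit = citation_merit(unit, field)
```

    Because merit is a probability, units of different sizes and fields land on the same 0-to-1 scale, which is what enables the cross-field comparisons the abstract claims.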

    Exploring a prototype framework of web-based and peer-reviewed “European Educational Research Quality Indicators” (EERQI)

    Digitization, the Internet, and information or webometric interdisciplinary approaches are affecting the fields of Scientometrics and Library and Information Science (LIS). These new approaches can be used to improve citation-only procedures to estimate the quality and impact of research. A European pilot to explore this potential was called “European Educational Research Quality Indicators” (EERQI, FP7 # 217549). An interdisciplinary consortium was involved from 2008 to 2011. Different types of indicators were developed to score 171 educational research documents. Extrinsic bibliometric and citation indicators were collected from the Internet for each document; intrinsic indicators reflecting content-based quality were developed and relevant data gathered by peer review. Exploratory and confirmatory factor analysis and structural modeling were used to explore statistical relationships among latent factors or concepts and their indicators. Three intrinsic and two extrinsic latent factors were found to be relevant. Moreover, the more a document was related to a reviewer’s own area of research, the higher the score the reviewer gave concerning 1) significance, originality, and consistency, and 2) methodological adequacy. The conclusions are that a prototype EERQI framework has been constructed: intrinsic quality indicators add specific information to extrinsic quality or impact indicators, and vice versa. Also, a problem of “objective” impact scores is that they are based on “subjective” or biased peer-review scores. Peer review, which is foundational to having a work cited, seems biased, and this bias should be controlled or reduced by more refined estimates of quality and impact of research. Some suggestions are given and limitations of the pilot are discussed. As the EERQI development approach, instruments, and tools are new, they should be developed further.

    The mf-index: A Citation-Based Multiple Factor Index to Evaluate and Compare the Output of Scientists

    Comparing the output of scientists as objectively as possible is an important factor for, e.g., the approval of research funds or the filling of open positions at universities. Numeric indices, which express the scientific output in the form of a concrete value, may not completely supersede an overall view of a researcher, but they provide helpful indications for the assessment. This work introduces the most important citation-based indices, analyzes their advantages and disadvantages, and provides an overview of the aspects they consider. On this basis, we identify the criteria that an advanced index should fulfill, and develop a new index, the mf-index. The objective of the mf-index is to combine the benefits of the existing indices while avoiding their drawbacks as far as possible, and to consider additional aspects. Finally, an evaluation based on data of real publications and citations compares the mf-index with existing indices and verifies that its theoretical advantages can also be observed in practice.
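    Among the citation-based indices such work surveys, the h-index is the canonical example: the largest h such that at least h of a researcher's papers each have at least h citations. A minimal implementation for reference (the mf-index itself is not specified in the abstract):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

h_index([10, 8, 5, 4, 3])  # → 4: four papers have at least 4 citations each
```

    The single-number convenience of such indices is exactly what the paper weighs against the aspects they ignore, such as co-authorship and field differences.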

    First-mover advantage explains gender disparities in physics citations

    Mounting evidence suggests that publications and citations of scholars in the STEM fields (Science, Technology, Engineering and Mathematics) suffer from gender biases. In this paper, we study the physics community, a core STEM field in which women are still largely underrepresented and where these gender disparities persist. To reveal such inequalities, we compare the citations received by papers led by men and women that cover the same topics in a comparable way. To do that, we devise a robust statistical measure of similarity between publications that enables us to detect pairs of similar papers. Our findings indicate that although papers written by women tend to have lower visibility in the citation network, pairs of similar papers written by men and women receive comparable attention when corrected for the time of publication. These analyses suggest that gender disparity is closely related to the first-mover and cumulative advantage that men have in physics, and is not an intentional act of discrimination towards women. (Comment: 21 pages, 8 tables, 10 figures)
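    The abstract does not spell out its similarity measure, but the idea of scoring how comparably two papers cover a topic can be illustrated with a common text-similarity baseline: cosine similarity between bag-of-words term-frequency vectors. This is a generic sketch, not the authors' measure.

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words term-frequency vectors.
    A standard baseline for comparing documents; real pipelines typically
    add TF-IDF weighting, stemming, and stop-word removal on top."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[term] * b[term] for term in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

    Identical texts score 1.0 and texts sharing no terms score 0.0; thresholding such a score over title-and-abstract pairs is one simple way to detect candidate pairs of similar papers.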

    Contributions towards understanding and building sustainable science

    This dissertation focuses on either understanding and detecting threats to the epistemology of science (chapters 1-6) or making practical advances to remedy epistemological threats (chapters 7-9). Chapter 1 reviews the literature on responsible conduct of research, questionable research practices, and research misconduct. Chapter 2 reanalyzes the claims of Head et al. (2015) about widespread p-hacking for robustness. Chapter 3 examines 258,050 test results across 30,710 articles from eight high impact journals to investigate the existence of a peculiar prevalence of p-values just below .05 (i.e., a bump) in the psychological literature, and a potential increase thereof over time. Chapter 4 examines evidence for false negatives in nonsignificant results throughout psychology, gender effects, and the Reproducibility Project: Psychology. Chapter 5 describes a dataset that is the result of content mining 167,318 published articles for statistical test results reported according to the standards prescribed by the American Psychological Association (APA). In Chapter 6, I test the validity of statistical methods to detect fabricated data in two studies. Chapter 7 tackles the issue of data extraction from figures in scholarly publications. In Chapter 8, I argue that "after-the-fact" research papers do not help alleviate issues of access, selective publication, and reproducibility, but actually cause some of these threats because the chronology of the research cycle is lost in a research paper. I propose to give up the academic paper in favor of a digitally native "as-you-go" alternative. In Chapter 9, I propose a technical design for this.