The assessment of science: the relative merits of post-publication review, the impact factor, and the number of citations
The assessment of scientific publications is an integral part of the scientific process. Here we investigate three methods of assessing the merit of a scientific paper: subjective post-publication peer review, the number of citations gained by a paper, and the impact factor of the journal in which the article was published. We investigate these methods using two datasets in which subjective post-publication assessments of scientific publications have been made by experts. We find that there are moderate, but statistically significant, correlations between assessor scores, when two assessors have rated the same paper, and between assessor score and the number of citations a paper accrues. However, we show that assessor score depends strongly on the journal in which the paper is published, and that assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlation between assessor scores and between assessor score and the number of citations is weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit. Finally, we argue that the impact factor is likely to be a poor measure of merit, since it depends on subjective assessment. We conclude that the three measures of scientific merit considered here are poor; in particular, subjective assessments are an error-prone, biased, and expensive method by which to assess merit. We argue that the impact factor may be the most satisfactory of the methods we have considered, since it is a form of pre-publication review. However, we emphasise that it is likely to be a very error-prone measure of merit that is qualitative, not quantitative.
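The correlations between paired assessor scores that this abstract reports can be computed with a standard Pearson correlation. A minimal self-contained sketch; the function and the example scores are my own illustration, not the paper's data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two paired score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example: two assessors scoring the same five papers on a 1-10 scale
assessor_a = [6, 8, 5, 9, 7]
assessor_b = [5, 7, 6, 9, 6]
r = pearson(assessor_a, assessor_b)
```

Controlling for journal, as the paper does, would require regressing scores on journal identity first; the sketch above shows only the raw correlation step.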
Would it be possible to increase the Hirsch-index, pi-index or CDS-index by increasing the number of publications or citations only by unity?
The aim of the study is to explore the effect on several impact indicators of increasing the number of publications or citations by a single journal paper or citation. The possible change of the h-index, A-index, R-index, pi-index, pi-rate, Journal Paper Citedness (JPC), and Citation Distribution Score (CDS) is traced with models. Particular attention is given to the increase of the indices caused by a single additional citation. The results obtained with the “Successively Built-up Indicator” model show that, with an increasing number of citations or self-citations, the indices may increase substantially.
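The single-citation effect the study examines is easy to see for the Hirsch index: one well-placed extra citation can raise h by one. A sketch of the standard h-index definition; the function and citation counts are my own illustration, not the paper's "Successively Built-up Indicator" model:

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
    return h

papers = [9, 7, 6, 3, 3, 2]   # citation counts per paper: h = 3
boosted = [9, 7, 6, 4, 3, 2]  # one extra citation to the fourth paper: h = 4
```

The fourth paper sits exactly at the threshold, so a single added citation moves h from 3 to 4; the same citation given to the first paper would leave h unchanged.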
Measuring co-authorship and networking-adjusted scientific impact
Appraisal of the scientific impact of researchers, teams and institutions
with productivity and citation metrics has major repercussions. Funding and
promotion of individuals and survival of teams and institutions depend on
publications and citations. In this competitive environment, the number of
authors per paper is increasing and apparently some co-authors don't satisfy
authorship criteria. Listing of individual contributions is still sporadic and
also open to manipulation. Metrics are needed to measure the networking
intensity for a single scientist or group of scientists accounting for patterns
of co-authorship. Here, I define I1 for a single scientist as the number of
authors who appear in at least I1 papers of the specific scientist. For a group
of scientists or institution, In is defined as the number of authors who appear
in at least In papers that bear the affiliation of the group or institution. I1
depends on the number of papers authored Np. The power exponent R of the
relationship between I1 and Np categorizes scientists as solitary (R>2.5),
nuclear (R=2.25-2.5), networked (R=2-2.25), extensively networked (R=1.75-2) or
collaborators (R<1.75). R may be used to adjust for co-authorship networking
the citation impact of a scientist. In similarly provides a simple measure of
the effective networking size to adjust the citation impact of groups or
institutions. Empirical data are provided for single scientists and
institutions for the proposed metrics. Cautious adoption of adjustments for
co-authorship and networking in scientific appraisals may offer incentives for
more accountable co-authorship behaviour in published articles.Comment: 25 pages, 5 figure
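The I1 metric defined in this abstract is an h-index-style fixed point over co-author frequencies: the largest n such that n authors each appear in at least n of the scientist's papers. A minimal sketch; the author names are invented, and I assume the focal scientist is excluded from each paper's author list, which the abstract does not specify:

```python
from collections import Counter

def i1(papers):
    """I1: largest n such that n co-authors each appear in at least n
    of the scientist's papers. `papers` is a list of co-author lists,
    one per paper, with the focal scientist removed (an assumption)."""
    counts = Counter(a for coauthors in papers for a in coauthors)
    n = 0
    for rank, freq in enumerate(sorted(counts.values(), reverse=True), start=1):
        if freq >= rank:
            n = rank
    return n

# Invented example: four papers and their co-author lists
example = [["A", "B"], ["A", "C"], ["A", "B", "D"], ["C"]]
```

Here A appears in 3 papers and B and C in 2 each, so two co-authors appear in at least 2 papers and I1 = 2. The group-level In would apply the same computation to all papers bearing an institution's affiliation.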
Persistence and Uncertainty in the Academic Career
Understanding how institutional changes within academia may affect the overall potential of science requires a better quantitative representation of how careers evolve over time. Since knowledge spillovers, cumulative advantage, competition, and collaboration are distinctive features of the academic profession, both the employment relationship and the procedures for assigning recognition and allocating funding should be designed to account for these factors. We study the annual production n_{i}(t) of a given scientist i by analyzing longitudinal career data for 200 leading scientists and 100 assistant professors from the physics community. We compare our results with 21,156 sports careers. Our empirical analysis of individual productivity dynamics shows that (i) there are increasing returns for the top individuals within the competitive cohort, and that (ii) the distribution of production growth is a leptokurtic "tent-shaped" distribution that is remarkably symmetric. Our methodology is general, and we speculate that similar features appear in other disciplines where academic publication is essential and collaboration is a key feature. We introduce a model of proportional growth which reproduces these two observations, and additionally accounts for the significantly right-skewed distributions of career longevity and achievement in science. Using this theoretical model, we show that short-term contracts can amplify the effects of competition and uncertainty, making careers more vulnerable to early termination, not necessarily due to lack of individual talent and persistence, but because of random negative production shocks. We show that fluctuations in scientific production are quantitatively related to a scientist's collaboration radius and team efficiency. Comment: 29 pages total: 8 main manuscript + 4 figs, 21 SI text + figs
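The proportional-growth mechanism the abstract describes, in which next year's output scales with accumulated past production plus random shocks, can be illustrated with a toy simulation. This is my own hypothetical sketch, not the authors' calibrated model; all parameter names and values are illustrative:

```python
import random

def simulate_career(years=30, base=2.0, alpha=0.5, shock_sd=0.3, seed=0):
    """Toy proportional-growth career trajectory: annual output grows
    with cumulative past production (cumulative advantage) plus a
    Gaussian shock. Negative shocks can stall a young career even when
    the growth mechanism itself is favourable."""
    rng = random.Random(seed)
    output = [base]
    for _ in range(years - 1):
        cumulative = sum(output)
        # growth term saturating in cumulative production (illustrative form)
        growth = alpha * output[-1] * (cumulative / (cumulative + 10))
        shock = rng.gauss(0, shock_sd)
        output.append(max(0.0, output[-1] + growth + shock))
    return output
```

Running many such trajectories with different seeds would show the right-skew in career achievement mentioned in the abstract: early negative shocks compound through the cumulative term, while early luck does the same in the other direction.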
How citation boosts promote scientific paradigm shifts and Nobel Prizes
Nobel Prizes are commonly seen to be among the most prestigious achievements of our times. Based on mining several million citations, we quantitatively analyze the processes driving paradigm shifts in science. We find that groundbreaking discoveries of Nobel Prize Laureates and other famous scientists are not only acknowledged by many citations of their landmark papers. Surprisingly, they also boost the citation rates of their previous publications. Given that innovations must outcompete the rich-get-richer effect for scientific citations, it turns out that they can make their way only through citation cascades. A quantitative analysis reveals how and why they happen. Science appears to behave like a self-organized critical system, in which citation cascades of all sizes occur, from continuous scientific progress all the way up to scientific revolutions, which change the way we see our world. Measuring the "boosting effect" of landmark papers, our analysis reveals how new ideas and new players can make their way and finally triumph in a world dominated by established paradigms. The underlying "boost factor" is also useful to discover scientific breakthroughs and talents much earlier than through classical citation analysis, which by now has become a widespread method to measure scientific excellence, influencing scientific careers and the distribution of research funds. Our findings reveal patterns of collective social behavior, which are also interesting from an attention economics perspective. Understanding the origin of scientific authority may therefore ultimately help to explain how social influence comes about and why the value of goods depends so strongly on the attention they attract. Comment: 6 pages, 6 figures
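One simple way to quantify the kind of "boosting effect" described above is to compare an earlier paper's citation rate before and after a landmark publication. This sketch is my own illustration of that idea, not the boost factor defined in the study, and the citation counts are invented:

```python
def boost_factor(yearly_citations, event_year):
    """Ratio of mean yearly citations after vs. before event_year.
    yearly_citations: dict mapping year -> citations received that year
    by one of the scientist's earlier papers. Returns None when the
    ratio is undefined (no data on one side, or zero citations before)."""
    before = [c for y, c in yearly_citations.items() if y < event_year]
    after = [c for y, c in yearly_citations.items() if y >= event_year]
    if not before or not after or sum(before) == 0:
        return None
    return (sum(after) / len(after)) / (sum(before) / len(before))

# Invented example: an earlier paper's yearly citations, with a
# hypothetical landmark paper by the same author appearing in 2008
cites = {2005: 2, 2006: 3, 2007: 2, 2008: 10, 2009: 12}
factor = boost_factor(cites, 2008)
```

A factor well above 1 flags the retroactive attention spike the abstract reports; distinguishing such boosts from background rich-get-richer growth would require comparison against a matched control set of papers.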
A review of the characteristics of 108 author-level bibliometric indicators
An increasing demand for bibliometric assessment of individuals has led to a growth of new bibliometric indicators as well as new variants or combinations of established ones. The aim of this review is to contribute objective facts about the usefulness of bibliometric indicators of the effects of publication activity at the individual level. This paper reviews 108 indicators that can potentially be used to measure performance at the individual author level, and examines the complexity of their calculations in relation to what they are supposed to reflect and ease of end-user application. Comment: to be published in Scientometrics, 201