383 research outputs found
A critical cluster analysis of 44 indicators of author-level performance
This paper explores the relationship between author-level bibliometric
indicators and the researchers they "measure", exemplified across five academic
seniorities and four disciplines. Using cluster methodology, the disciplinary
and seniority appropriateness of author-level indicators is examined.
Publication and citation data for 741 researchers across Astronomy,
Environmental Science, Philosophy and Public Health were collected from Web of
Science (WoS). Forty-four indicators of individual performance were computed
using the data. A two-step cluster analysis using IBM SPSS version 22 was
performed, followed by a risk analysis and ordinal logistic regression to
explore cluster membership. Indicator scores were contextualized using the
individual researcher's curriculum vitae. Four different clusters based on
indicator scores ranked researchers as low, middle, high and extremely high
performers. The results show that different indicators were appropriate in
demarcating ranked performance in different disciplines: the h2 indicator in
Astronomy, sum pp top prop in Environmental Science, Q2 in Philosophy and the
e-index in Public Health. The regression and odds analysis showed that
individual-level indicator scores were primarily dependent on the number of
years since the researcher's first publication registered in WoS, the number
of publications and the number of citations. Seniority classification was
secondary; therefore, no seniority-appropriate indicators were confidently
identified. Cluster methodology proved useful in identifying
discipline-appropriate indicators, provided the preliminary data preparation
was thorough, but it needed to be supplemented by other analyses to validate
the results. A general disconnection
between the performance of the researcher on their curriculum vitae and the
performance of the researcher based on bibliometric indicators was observed.
Comment: 28 pages, 7 tables, 2 figures, 2 appendices
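Two of the discipline-appropriate indicators named above, the h2 indicator and the e-index, can be sketched from their standard definitions in the bibliometric literature (Kosmulski's h(2) and Zhang's e-index). This is an illustrative sketch under those assumed definitions, not code from the paper:

```python
import math

def h2_index(citations):
    """Kosmulski's h(2): the largest n such that the author's n
    most-cited papers each have at least n**2 citations."""
    cits = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cits, start=1) if c >= i * i)

def e_index(citations):
    """Zhang's e-index: the square root of the 'excess' citations in
    the h-core, i.e. citations beyond the h**2 already credited by
    the h-index."""
    cits = sorted(citations, reverse=True)
    h = sum(1 for i, c in enumerate(cits, start=1) if c >= i)
    return math.sqrt(sum(cits[:h]) - h * h)

citations = [25, 9, 5, 4, 1]
print(h2_index(citations))  # 2 (two papers with >= 1 and >= 4 citations)
print(e_index(citations))   # sqrt(43 - 16) ~ 5.196, since h = 4 here
```

Both functions take a plain list of per-paper citation counts, which is also the input shape the other h-type indicators in the paper's set of 44 would share.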
A review of the literature on citation impact indicators
Citation impact indicators nowadays play an important role in research
evaluation, and consequently these indicators have received a lot of attention
in the bibliometric and scientometric literature. This paper provides an
in-depth review of the literature on citation impact indicators. First, an
overview is given of the literature on bibliographic databases that can be used
to calculate citation impact indicators (Web of Science, Scopus, and Google
Scholar). Next, selected topics in the literature on citation impact indicators
are reviewed in detail. The first topic is the selection of publications and
citations to be included in the calculation of citation impact indicators. The
second topic is the normalization of citation impact indicators, in particular
normalization for field differences. Counting methods for dealing with
co-authored publications are the third topic, and citation impact indicators
for journals are the last topic. The paper concludes by offering some
recommendations for future research.
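Two of the review's topics, normalization for field differences and counting methods for co-authored publications, can be illustrated with a minimal sketch. The function and dictionary names here are hypothetical; the normalization shown is the basic idea behind mean-normalized citation scores, and the counting method is simple fractional counting:

```python
def field_normalized_scores(pubs, field_means):
    """Divide each publication's citation count by the mean citation
    count of publications in the same field and year -- the core idea
    behind field-normalized indicators such as the MNCS."""
    return [p["citations"] / field_means[(p["field"], p["year"])]
            for p in pubs]

def fractional_count(pubs):
    """Fractional counting: credit each publication as 1/n to each of
    its n co-authors, one of the counting methods the review covers."""
    return sum(1 / p["n_authors"] for p in pubs)

pubs = [
    {"citations": 10, "field": "physics", "year": 2006, "n_authors": 2},
    {"citations": 4, "field": "philosophy", "year": 2006, "n_authors": 1},
]
field_means = {("physics", 2006): 20.0, ("philosophy", 2006): 2.0}
print(field_normalized_scores(pubs, field_means))  # [0.5, 2.0]
print(fractional_count(pubs))                      # 1.5
```

The example shows why normalization matters: the philosophy paper has fewer raw citations but scores higher once its low-citation field is taken into account.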
Google Scholar, Web of Science, and Scopus: A systematic comparison of citations in 252 subject categories
Despite citation counts from Google Scholar (GS), Web of Science (WoS), and Scopus being widely consulted by researchers and sometimes used in research evaluations, there is no recent or systematic evidence about the differences between them. In response, this paper investigates 2,448,055 citations to 2299 English-language highly-cited documents from 252 GS subject categories published in 2006, comparing GS, the WoS Core Collection, and Scopus. GS consistently found the largest percentage of citations across all areas (93%-96%), far ahead of Scopus (35%-77%) and WoS (27%-73%). GS found nearly all the WoS (95%) and Scopus (92%) citations. Most citations found only by GS were from non-journal sources (48%-65%), including theses, books, conference papers, and unpublished materials. Many were non-English (19%-38%), and they tended to be much less cited than citing sources that were also in Scopus or WoS. Despite the many unique GS citing sources, Spearman correlations between citation counts in GS and WoS or Scopus are high (0.78-0.99). They are lower in the Humanities, and lower between GS and WoS than between GS and Scopus. The results suggest that in all areas GS citation data is essentially a superset of WoS and Scopus, with substantial extra coverage.
Alberto Martín-Martín is funded by a four-year doctoral fellowship (FPU2013/05863) granted by the Ministerio de Educación, Cultura y Deportes (Spain). An international mobility grant from the Universidad de Granada and CEI BioTic Granada funded a research stay at the University of Wolverhampton.
Martín-Martín, A.; Orduña-Malea, E.; Thelwall, M.; Delgado López-Cózar, E. (2018). Google Scholar, Web of Science, and Scopus: A systematic comparison of citations in 252 subject categories. Journal of Informetrics, 12(4), 1160-1177. https://doi.org/10.1016/j.joi.2018.09.002
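The Spearman correlations reported above compare how the databases rank the same documents by citation count. As an illustration only (the study itself would have handled tied citation counts, which this no-ties formula does not), the classic rank-difference form of Spearman's rho can be sketched as:

```python
def spearman_rho(x, y):
    """Spearman rank correlation via the no-ties formula:
    rho = 1 - 6 * sum(d_i**2) / (n * (n**2 - 1)),
    where d_i is the difference between the two ranks of item i."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for position, i in enumerate(order, start=1):
            r[i] = position
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical citation counts for four documents in two databases:
gs_counts = [120, 45, 30, 10]
wos_counts = [40, 20, 15, 2]
print(spearman_rho(gs_counts, wos_counts))  # 1.0: identical rankings
```

A rho near 1, as in the 0.78-0.99 range the paper reports, means the databases largely agree on which documents are more cited even when the absolute counts differ greatly.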
Google Scholar, Scopus and the Web of Science: a longitudinal and cross-disciplinary comparison
This article aims to provide a systematic and comprehensive comparison of the coverage of the three major bibliometric databases: Google Scholar, Scopus and the Web of Science. Based on a sample of 146 senior academics in five broad disciplinary areas, we therefore provide both a longitudinal and a cross-disciplinary comparison of the three databases.
Our longitudinal comparison of eight data points between 2013 and 2015 shows a consistent and reasonably stable quarterly growth for both publications and citations across the three databases. This suggests that all three databases provide sufficient stability of coverage to be used for more detailed cross-disciplinary comparisons.
Our cross-disciplinary comparison of the three databases includes four key research metrics (publications, citations, h-index, and hI,annual, an annualised individual h-index) and five major disciplines (Humanities, Social Sciences, Engineering, Sciences and Life Sciences). We show that both the data source and the specific metrics used change the conclusions that can be drawn from cross-disciplinary comparisons.
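Two of the four metrics compared above can be sketched directly. The h-index is standard; for hI,annual this sketch assumes the Publish or Perish definition (an individual h-index computed on author-share citations, divided by career length in years), which may differ in detail from the article's exact formula:

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations."""
    cits = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cits, start=1) if c >= i)

def hi_annual(citations, n_authors, career_years):
    """Sketch of the annualised individual h-index (hI,annual):
    divide each paper's citations by its author count, take the
    h-index of those author shares, then divide by career length.
    Assumed definition, per Harzing's Publish or Perish."""
    shares = [c / a for c, a in zip(citations, n_authors)]
    return h_index(shares) / career_years

citations = [10, 8, 5, 4, 3]
authors = [2, 1, 5, 2, 3]
print(h_index(citations))               # 4
print(hi_annual(citations, authors, 10))  # 0.2
```

Dividing by career length is what makes hI,annual usable for the cross-disciplinary and cross-seniority comparisons the article performs, since raw h-indices grow with academic age.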
Can Amazon.com reviews help to assess the wider impacts of books?
University of Wolverhampton