12 research outputs found

    Revisiting h measured on UK LIS and IR academics

    Get PDF
    A brief communication appearing in this journal ranked UK LIS and (some) IR academics by their h-index using data derived from Web of Science. In this brief communication, the same academics were re-ranked using other popular citation databases. For academics who publish more in computer science forums, h was significantly different because of highly cited papers missed by Web of Science; consequently, their ranks changed substantially. The study was then widened to a broader set of UK LIS and IR academics, where the results showed similar statistically significant differences. A variant of h, hmx, was introduced that allows the academics to be ranked using all citation databases together.
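
    For context, the h-index underlying these rankings can be computed directly from an author's per-paper citation counts. The sketch below (Python, illustrative only; the hmx variant introduced in the paper is not reproduced here) shows the standard definition: the largest h such that h of the author's papers have at least h citations each.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Example: an author whose papers are cited 10, 8, 5, 4, 3 and 0 times has h = 4.
print(h_index([10, 8, 5, 4, 3, 0]))  # -> 4
```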

    Measuring research impact: A first approximation of the achievements of the iSchools in ISI's information and library science category – An exploratory study

    Get PDF
    In this paper, we analyze those publications of the home institutes of the iSchools that are indexed by Thomson Reuters (ISI) Web of Science in the information science and library science category, and were published between 2000 and 2009.

    Webometric analysis of departments of librarianship and information science: a follow-up study

    Get PDF
    This paper reports an analysis of the websites of UK departments of library and information science. Inlink counts of these websites revealed no statistically significant correlation with the quality of the research carried out by these departments, as quantified using departmental grades in the 2001 Research Assessment Exercise and citations in Google Scholar to publications submitted for that Exercise. Reasons for this lack of correlation include: difficulties in disambiguating departmental websites from larger institutional structures; the relatively small amount of research-related material in departmental websites; and limitations in the ways that current Web search engines process linkages to URLs. It is concluded that departmental-level webometric analyses do not at present provide an appropriate technique for evaluating academic research quality, and, more generally, that standards are needed for the formatting of URLs if inlinks are to become firmly established as a tool for website analysis.

    Ranking of library and information science researchers: Comparison of data sources for correlating citation data, and expert judgments

    Get PDF
    This paper studies the correlations between peer review and citation indicators when evaluating research quality in library and information science (LIS). Forty-two LIS experts provided judgments on a 5-point scale of the quality of research published by 101 scholars; the median rankings resulting from these judgments were then correlated with h-, g- and H-index values computed using three different sources of citation data: Web of Science (WoS), Scopus and Google Scholar (GS). The two variants of the basic h-index correlated more strongly with peer judgment than did the h-index itself; citation data from Scopus was more strongly correlated with the expert judgments than was data from GS, which in turn was more strongly correlated than data from WoS; correlations from a carefully cleaned version of the GS data differed little from those obtained using swiftly gathered GS data; the indices from the citation databases resulted in broadly similar rankings of the LIS academics; GS disadvantaged researchers in bibliometrics compared with the other two citation databases, while WoS disadvantaged researchers in the more technical aspects of information retrieval; and experts from the UK and other European countries rated UK academics more highly than did experts from the USA.
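
    As an illustration of the kind of comparison reported above, the sketch below computes the g-index (the largest g such that the top g papers together have at least g² citations) and correlates an indicator-based score with peer ratings using Spearman's rank correlation. The data are invented for illustration and do not come from the study; scipy is assumed to be available.

```python
from scipy.stats import spearmanr

def g_index(citations):
    """Largest g such that the g most cited papers have at least g**2 citations in total."""
    counts = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(counts, start=1):
        total += c
        if total >= rank ** 2:
            g = rank
    return g

# Hypothetical data: indicator scores and median expert ratings for five scholars.
indicator_scores = [12, 7, 22, 5, 15]        # e.g. h- or g-index values
expert_ratings = [3.5, 2.0, 4.5, 2.5, 4.0]   # median peer judgments on a 1-5 scale

rho, p = spearmanr(indicator_scores, expert_ratings)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```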

    The development of computer science research in the People's Republic of China 2000-2009: A bibliometric study

    No full text
    This paper reports a bibliometric study of the development of computer science research in the People's Republic of China in the 21st century, using data from the Web of Science, Journal Citation Reports and CORE databases. Focusing on the areas of data mining, operating systems and web design, it is shown that whilst the productivity of Chinese research has risen dramatically over the period under review, its impact is still low when compared with established scientific nations such as the USA, the UK and Japan. The publication and citation data for China are compared with corresponding data for the other three BRIC nations (Brazil, Russia and India). It is shown that China dominates the BRIC nations in terms of both publications and citations, but that Indian publications often have a greater individual impact.

    The Malaysian Journal of Computer Science 1996-2006: a bibliometric study

    No full text
    This paper analyses publication and citation patterns in the Malaysian Journal of Computer Science (MJCS) from 1996 to 2006. The articles in MJCS are mostly written by Malaysian academics, with only limited input from international sources. Comparisons are made with the companion Malaysian Journal of Library and Information Science in terms of the type, number of references, length and number of authors of individual papers. Searches of Google Scholar showed that 53 MJCS articles attracted a total of 86 citations, of which 43 were self-citations.

    A review of the characteristics of 108 author-level bibliometric indicators

    Get PDF
    An increasing demand for bibliometric assessment of individuals has led to a growth of new bibliometric indicators as well as new variants or combinations of established ones. The aim of this review is to contribute objective facts about the usefulness of bibliometric indicators of the effects of publication activity at the individual level. This paper reviews 108 indicators that can potentially be used to measure performance at the individual author level, and examines the complexity of their calculations in relation to what they are supposed to reflect and their ease of end-user application.
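
    To make the notion of an author-level indicator concrete, the sketch below computes a few of the simplest ones commonly discussed in such reviews: total citations, the h-index, the i10-index, and the m-quotient (h divided by years since first publication). This is an illustration with invented data, not a reproduction of the review's own calculations.

```python
def author_indicators(citations, years_active):
    """A handful of simple author-level bibliometric indicators."""
    counts = sorted(citations, reverse=True)
    # h-index: number of ranks at which the citation count still meets the rank.
    h = sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)
    return {
        "total_citations": sum(counts),
        "h_index": h,
        "i10_index": sum(1 for c in counts if c >= 10),
        "m_quotient": h / years_active if years_active else 0.0,
    }

# Hypothetical author: six papers, twelve years since first publication.
print(author_indicators([25, 14, 10, 6, 3, 1], years_active=12))
```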

    Counting research ⇒ directing research: the hazard of using simple metrics to evaluate scientific contributions: an EU experience

    Get PDF
    In many EU countries there is a requirement to count research, i.e., to measure and prove its value. These numbers, often produced automatically based on the impact of journals, are used to rank universities, to determine the distribution of funds, to evaluate research proposals, and to determine the scientific merit of each researcher. While the real value of research may be difficult to measure, this problem is avoided by counting papers and citations in well-known journals. That is, the measured impact of a paper (and its scientific contribution) is defined to be equal to the impact of the journal that publishes it. The journal's impact (and its scientific value) is in turn based on the references to papers in that journal. This ignores the fact that there may be huge differences between papers in the same journal; that there are significant discrepancies between impact values of different scientific areas; that research results may be offered outside the journals; and that citations may not be a good index of value. Since research is a collaborative activity, it may also be difficult to measure the contribution of each individual scientist. However, the real danger is not that contributions may be counted wrongly, but that the measuring systems will also have a strong influence on the way we perform research. Keywords: counting research, h-index, journal publications, JCR, ranking.
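
    The journal-level measure this abstract refers to is, in the JCR case, essentially a two-year citation average. A minimal sketch of that arithmetic, with invented figures, is shown below to make the critique concrete: every paper in the journal inherits the same number regardless of how often it is itself cited.

```python
# Two-year journal impact factor as published in the Journal Citation Reports:
# citations received in year Y to items published in years Y-1 and Y-2,
# divided by the number of citable items published in those two years.
# All figures below are invented for illustration.

citations_in_2023_to_2021_2022_items = 480
citable_items_2021 = 150
citable_items_2022 = 170

impact_factor_2023 = citations_in_2023_to_2021_2022_items / (
    citable_items_2021 + citable_items_2022
)
print(f"2023 impact factor: {impact_factor_2023:.2f}")  # -> 1.50
```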

    Convergent validity of bibliometric Google Scholar data in the field of chemistry: Citation counts for papers that were accepted by Angewandte Chemie International Edition or rejected but published elsewhere, using Google Scholar, Science Citation Index, Scopus, and Chemical Abstracts

    Get PDF
    Examining a comprehensive set of papers (n = 1837) that were accepted for publication by the journal Angewandte Chemie International Edition (one of the prime chemistry journals in the world) or rejected by the journal but then published elsewhere, this study tested the extent to which the use of the freely available database Google Scholar (GS) can be expected to yield valid citation counts in the field of chemistry. Analyses of citations for the set of papers returned by three fee-based databases – Science Citation Index, Scopus, and Chemical Abstracts – were compared to the analysis of citations found using GS data. Whereas the analyses using citations returned by the three fee-based databases show very similar results, the results of the analysis using GS citation data differed greatly from the findings using citations from the fee-based databases. Our study therefore supports, on the one hand, the convergent validity of citation analyses based on data from the fee-based databases and, on the other hand, the lack of convergent validity of the citation analysis based on the GS data.
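
    A minimal sketch of the kind of convergent-validity check described above: pairwise rank correlations between citation counts for the same papers drawn from different sources. The numbers are invented and the study's own analysis was considerably more elaborate; scipy is assumed to be available.

```python
from itertools import combinations
from scipy.stats import spearmanr

# Hypothetical citation counts for the same five papers in four databases.
counts = {
    "SCI":    [34, 21, 15, 8, 2],
    "Scopus": [36, 22, 14, 9, 3],
    "CA":     [33, 20, 16, 8, 2],
    "GS":     [80, 25, 40, 12, 30],
}

# Correlate every pair of sources; low values flag a lack of convergent validity.
for a, b in combinations(counts, 2):
    rho, _ = spearmanr(counts[a], counts[b])
    print(f"{a} vs {b}: rho = {rho:.2f}")
```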

    Google Scholar as a source for scholarly evaluation: a bibliographic review of database errors

    Get PDF
    Google Scholar (GS) is an academic search engine and discovery tool launched by Google (now Alphabet) in November 2004. The fact that GS provides the number of citations received by each article from all other indexed articles (regardless of their source) has led to its use in bibliometric analysis and academic assessment, especially in the social sciences and humanities. However, the existence of errors, sometimes of great magnitude, has provoked criticism from part of the academic community. The aim of this article is to carry out an exhaustive bibliographic review of all studies that provide either specific or incidental empirical evidence of the errors found in Google Scholar (and derived products such as Google Scholar Metrics and Google Scholar Citations). The results indicate that the bibliographic corpus dedicated to errors in Google Scholar is still very limited (n = 49), excessively fragmented, and diffuse; the findings have not been based on any systematic methodology or on units that are comparable to each other, so they cannot be quantified, nor their real impact characterised, with any precision. Certain limitations of the search engine itself (time required for data cleaning, limits on citations per record and results per query) may be the cause of this absence of empirical studies.