
    A new reference standard for citation analysis in chemistry and related fields based on the sections of Chemical Abstracts

    Citation analysis for evaluative purposes requires reference standards, as publication activity and citation habits differ considerably among fields. Reference standards based on journal classification schemes are fraught with problems in the case of multidisciplinary and general journals and are limited with respect to their resolution of fields. To overcome these shortcomings of journal classification schemes, we propose a new reference standard for chemistry and related fields that is based on the sections of the Chemical Abstracts database. As an example, we determined the values of the reference standard for research articles published in 2000 in the biochemistry sections of Chemical Abstracts. The results show that citation habits vary extensively not only between fields but also within fields. Overall, the sections of Chemical Abstracts seem to be a promising basis for reference standards in chemistry and related fields for four reasons: (1) the wider coverage of the pertinent literature, (2) the quality of indexing, (3) the assignment of papers published in multidisciplinary and general journals to their respective fields, and (4) the resolution of fields on a lower level (e.g., mammalian biochemistry) than in journal classification schemes (e.g., biochemistry & molecular biology).
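    The computation behind such a reference standard can be sketched briefly. This is a minimal illustration, assuming (as is common in citation normalization) that a field's reference-standard value is the mean citation rate of the papers assigned to it; the section labels and citation counts below are hypothetical stand-ins, not data from the study.

    ```python
    # Sketch: reference-standard values per field (hypothetical data).
    # Assumption: the reference-standard value of a field is the mean
    # citation rate of its papers, i.e. the expected citation count used
    # later when normalizing an individual paper's citations.
    from collections import defaultdict

    # Hypothetical (paper_id, section, citations) records; the section
    # names merely imitate Chemical Abstracts section labels.
    papers = [
        ("p1", "mammalian biochemistry", 12),
        ("p2", "mammalian biochemistry", 4),
        ("p3", "plant biochemistry", 3),
        ("p4", "plant biochemistry", 1),
    ]

    def reference_standards(papers):
        """Mean citations per field -- the expected citation rate."""
        totals = defaultdict(lambda: [0, 0])  # field -> [citation sum, paper count]
        for _, field, cites in papers:
            totals[field][0] += cites
            totals[field][1] += 1
        return {field: s / n for field, (s, n) in totals.items()}

    print(reference_standards(papers))
    # {'mammalian biochemistry': 8.0, 'plant biochemistry': 2.0}
    ```

    A paper assigned to several sections would contribute to each of them; how to weight such multiple assignments is a design choice the abstract leaves open.
    
    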

    A review of the literature on citation impact indicators

    Citation impact indicators nowadays play an important role in research evaluation, and consequently these indicators have received a lot of attention in the bibliometric and scientometric literature. This paper provides an in-depth review of the literature on citation impact indicators. First, an overview is given of the literature on bibliographic databases that can be used to calculate citation impact indicators (Web of Science, Scopus, and Google Scholar). Next, selected topics in the literature on citation impact indicators are reviewed in detail. The first topic is the selection of publications and citations to be included in the calculation of citation impact indicators. The second topic is the normalization of citation impact indicators, in particular normalization for field differences. Counting methods for dealing with co-authored publications are the third topic, and citation impact indicators for journals are the last topic. The paper concludes by offering some recommendations for future research.
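    The counting-method topic mentioned above can be illustrated concretely. A minimal sketch, with hypothetical authors and citation counts: full counting gives each co-author the publication's full credit, while fractional counting divides the credit by the number of authors.

    ```python
    # Sketch of counting methods for co-authored publications.
    # Full counting: each author receives the full citation credit.
    # Fractional counting: credit is split evenly among the authors.
    # Authors and citation counts below are hypothetical.
    from collections import defaultdict

    publications = [
        {"authors": ["A", "B"], "citations": 10},
        {"authors": ["A"], "citations": 6},
        {"authors": ["B", "C", "D"], "citations": 9},
    ]

    def credit(pubs, fractional=True):
        score = defaultdict(float)
        for pub in pubs:
            share = 1 / len(pub["authors"]) if fractional else 1.0
            for author in pub["authors"]:
                score[author] += pub["citations"] * share
        return dict(score)

    print(credit(publications))         # fractional: A=11.0, B=8.0, C=3.0, D=3.0
    print(credit(publications, False))  # full: A=16.0, B=19.0, C=9.0, D=9.0
    ```

    Note how the two methods can reverse a ranking: under full counting B (19.0) outscores A (16.0), while under fractional counting A (11.0) leads, which is why the choice of counting method is treated as a substantive topic in this literature.
    
    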

    Sección Bibliográfica (Bibliographic Section)


    Field-normalized citation impact indicators using algorithmically constructed classification systems of science

    We study the problem of normalizing citation impact indicators for differences in citation practices across scientific fields. Normalization of citation impact indicators is usually done based on a field classification system. In practice, the Web of Science journal subject categories are often used for this purpose. However, many of these subject categories have a quite broad scope and are not sufficiently homogeneous in terms of citation practices. As an alternative, we propose to work with algorithmically constructed classification systems. We construct these classification systems by performing a large-scale clustering of publications based on their citation relations. In our analysis, 12 classification systems are constructed, each at a different granularity level. The number of fields in these systems ranges from 390 at granularity level 1 to 73,205 at granularity level 12. This contrasts with the 236 subject categories in the WoS classification system. Based on an investigation of some key characteristics of the 12 classification systems, we argue that working with a few thousand fields may be an optimal choice. We then study the effect of the choice of a classification system on the citation impact of the 500 universities included in the 2013 edition of the CWTS Leiden Ranking. We consider both the MNCS and the PPtop 10% indicator. Globally, for all universities taken together, citation impact indicators turn out to be relatively insensitive to the choice of a classification system. Nevertheless, for individual universities, we sometimes observe substantial differences between indicators normalized based on the journal subject categories and indicators normalized based on an appropriately chosen algorithmically constructed classification system. Ruiz-Castillo also acknowledges financial help from the Spanish MEC through grant ECO2011-2976.
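    The MNCS indicator examined above can be sketched in a few lines. This is a minimal illustration under the standard definition of the MNCS (each paper's citation count divided by the mean citation rate of its field under the chosen classification system, averaged over a unit's papers); the field labels, means, and counts are hypothetical.

    ```python
    # Sketch of the MNCS (mean normalized citation score) under a given
    # classification system. Changing the classification system changes
    # the field assignments and field means, and hence the MNCS -- the
    # sensitivity studied in the paper. All data below are hypothetical.
    def mncs(unit_papers, field_means):
        """unit_papers: list of (field, citations); field_means: field -> mean."""
        if not unit_papers:
            return 0.0
        ratios = [cites / field_means[field] for field, cites in unit_papers]
        return sum(ratios) / len(ratios)

    field_means = {"f1": 4.0, "f2": 10.0}   # expected citation rates per field
    papers = [("f1", 8), ("f2", 5), ("f1", 4)]
    print(mncs(papers, field_means))  # (2.0 + 0.5 + 1.0) / 3 = 1.1666...
    ```

    A value above 1 means the unit's papers are cited more than the field average; the PPtop 10% indicator, by contrast, counts the share of a unit's papers among the 10% most cited of their field.
    
    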

    Design and implementation of a methodology for compiling core journal lists using bibliometric criteria to determine the level of support that scientific journals provide to research. Case study: Centro Cultural Biblioteca Luis Echavarría Villegas, Universidad EAFIT

    This work presents the design and application of a methodology for compiling core lists of scientific journals, covering the collections managed at the Centro Cultural Biblioteca Luis Echavarría Villegas of Universidad EAFIT. The goal is to establish monitoring and evaluation activities on the degree to which these collections support institutional research processes. The methodology is based on several instruments, bibliometric criteria, and a combination of manual and automated activities, including local citation counts, the impact factor, and local use (frequency of use), framed within a specific time period. The whole process is directly tied to the work carried out in the Collection Development area, which belongs to the Information Resources Coordination of the Centro Cultural Biblioteca Luis Echavarría Villegas of Universidad EAFIT in Medellín, Colombia.
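    The combination of criteria described above can be sketched as a weighted ranking. This is a hypothetical illustration only: the weights, the min-max normalization, and the journal data are assumptions for the sketch, not the methodology actually used at EAFIT.

    ```python
    # Sketch: ranking journals for a core list by combining the three
    # criteria named in the abstract (local citations, impact factor,
    # local use frequency). Weights and data are hypothetical.
    journals = [
        {"title": "J1", "local_citations": 120, "impact_factor": 2.1, "local_use": 300},
        {"title": "J2", "local_citations": 40, "impact_factor": 4.8, "local_use": 90},
        {"title": "J3", "local_citations": 15, "impact_factor": 0.9, "local_use": 20},
    ]

    def rank(journals, weights=(0.4, 0.3, 0.3)):
        # Min-max normalize each criterion so the weighted sum compares
        # quantities measured on very different scales.
        keys = ("local_citations", "impact_factor", "local_use")
        lo = {k: min(j[k] for j in journals) for k in keys}
        hi = {k: max(j[k] for j in journals) for k in keys}
        def score(j):
            return sum(w * (j[k] - lo[k]) / ((hi[k] - lo[k]) or 1)
                       for w, k in zip(weights, keys))
        return sorted(journals, key=score, reverse=True)

    print([j["title"] for j in rank(journals)])  # ['J1', 'J2', 'J3']
    ```

    The journals at the top of such a ranking would be candidates for the core list; the cut-off point and the weights are policy decisions for the collection-development team.
    
    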