
    National Scientific Facilities and Their Science Impact on Non-Biomedical Research

    The h-index, proposed by Hirsch, is a good indicator of the impact of a scientist's research. When evaluating departments, institutions, or labs, the value of the h-index can be further enhanced when properly calibrated for size. The issue is particularly acute for federally funded facilities, whose number of actively publishing scientists frequently dwarfs that of academic departments. Recently, Molinari and Molinari developed a methodology showing that the h-index has a universal growth rate for large numbers of papers, allowing meaningful comparisons between institutions. An additional challenge when comparing large institutions is that fields have distinct internal cultures, with different typical rates of publication and citation; biology is more highly cited than physics, which is more highly cited than engineering. For this reason, this study focuses on the physical sciences, engineering, and technology, and excludes biomedical research. Comparisons between individual disciplines are reported here to provide a contextual framework. Generally, the universal growth rate of Molinari and Molinari was found to hold well across all the categories considered, testifying to the robustness of both their growth law and our results. The overall goal is to set the highest standard of comparison for federal investment in science; comparisons are made with the nation's preeminent private and public institutions. We find that many of the national facilities compare favorably in research impact with the nation's leading universities. Comment: 22 pages, 7 figures
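
    A minimal Python sketch of the two quantities this abstract leans on: the h-index computed from a list of citation counts, and a size-calibrated value in the spirit of Molinari and Molinari, assuming the scaling h ≈ h_m · N^0.4 they report for large paper counts N. The exponent is taken from their paper; the citation counts below are invented for illustration.

    def h_index(citations):
        """h-index: the largest h such that h papers have >= h citations each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, c in enumerate(ranked, start=1):
            if c >= rank:
                h = rank
            else:
                break  # counts are non-increasing, so no later rank can qualify
        return h

    def molinari_hm(citations, beta=0.4):
        """Size-calibrated h_m = h / N**beta, with beta ~ 0.4 as reported by
        Molinari and Molinari; intended to compare differently sized groups."""
        n = len(citations)
        return h_index(citations) / n ** beta if n else 0.0

    # Invented citation counts for illustration:
    cites = [50, 40, 30, 20, 10, 5, 3, 1]
    print(h_index(cites))                # 5
    print(round(molinari_hm(cites), 2))  # 5 / 8**0.4 ≈ 2.18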

    Bibliometric analysis of scientific development in countries of the Union of South American Nations (Unasur)

    The Union of South American Nations (Unasur) can be considered a new emergent region in the world. Using advanced bibliometric methods, we explore the development of science and technology in Unasur. Based on data from the InCites tool of Thomson Reuters, which facilitates national comparisons across long time periods using publication output and normalized citation impact values, we explore how this region (particularly the most productive individual countries within it) is developing. The publication output results reveal an increase in scientific and technological activity in most of the Unasur countries (especially Brazil). Compared to the rest of the world, however, the citation impact trend is less favourable for all Unasur countries.
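
    The "normalized citation impact" reported by tools like InCites is, in essence, a mean normalized citation score: each paper's citation count is divided by the world average for its field and publication year, so a value above 1.0 means above-world-average impact. A sketch of that calculation in Python; the baselines and paper records are invented, and the exact InCites normalization may differ in detail.

    # Field/year baselines and paper records are invented for illustration;
    # real tools derive the baselines from the full citation database.
    world_baseline = {  # assumed mean citations per (field, year)
        ("biology", 2010): 25.0,
        ("physics", 2010): 12.0,
    }
    papers = [  # (field, year, citations)
        ("biology", 2010, 30),
        ("physics", 2010, 12),
        ("physics", 2010, 3),
    ]
    normalized = [c / world_baseline[(field, year)] for field, year, c in papers]
    mncs = sum(normalized) / len(normalized)
    print(round(mncs, 2))  # (30/25 + 12/12 + 3/12) / 3 ≈ 0.82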

    A systematic empirical comparison of different approaches for normalizing citation impact indicators

    We address the question of how citation-based bibliometric indicators can best be normalized to ensure fair comparisons between publications from different scientific fields and different years. In a systematic large-scale empirical analysis, we compare a traditional normalization approach based on a field classification system with three source normalization approaches. We pay special attention to the selection of the publications included in the analysis: publications in national scientific journals, popular scientific magazines, and trade magazines are excluded. Unlike earlier studies, we use algorithmically constructed classification systems to evaluate the different normalization approaches. Our analysis shows that a source normalization approach based on the recently introduced idea of fractional citation counting does not perform well. Two other source normalization approaches generally outperform the classification-system-based normalization approach that we study. Our analysis therefore offers considerable support for the use of source-normalized bibliometric indicators.
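
    The two normalization families compared here differ in where the correction is applied: classification-based normalization divides a paper's citations by the mean citation rate of its field (cited-side), while source normalization weights each incoming citation by properties of the citing publication (citing-side), so no field classification is needed. In the fractional citation counting variant, each citation counts 1/r, where r is the number of references in the citing publication. A minimal Python sketch with invented numbers:

    # (1) Classification-system-based: divide by the field's mean citation rate.
    def field_normalized(citations, field_mean):
        return citations / field_mean

    # (2) Source-normalized, fractional citation counting: each citation is
    #     weighted 1/r, where r is the citing publication's reference count,
    #     so citations from dense-referencing fields count for less.
    def fractionally_counted(citing_ref_counts):
        return sum(1.0 / r for r in citing_ref_counts)

    print(field_normalized(18, field_mean=12.0))   # 1.5
    # A paper cited by three publications with 10, 40, and 25 references:
    print(fractionally_counted([10, 40, 25]))      # 0.1 + 0.025 + 0.04 = 0.165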

    A review of the literature on citation impact indicators

    Citation impact indicators now play an important role in research evaluation, and consequently these indicators have received a lot of attention in the bibliometric and scientometric literature. This paper provides an in-depth review of the literature on citation impact indicators. First, an overview is given of the literature on bibliographic databases that can be used to calculate citation impact indicators (Web of Science, Scopus, and Google Scholar). Next, selected topics in the literature on citation impact indicators are reviewed in detail. The first topic is the selection of publications and citations to be included in the calculation of citation impact indicators. The second is the normalization of citation impact indicators, in particular normalization for field differences. Counting methods for dealing with co-authored publications are the third topic, and citation impact indicators for journals are the last. The paper concludes by offering some recommendations for future research.
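
    One of the topics reviewed, counting methods for co-authored publications, comes down to a simple choice: full counting credits each of the n co-authors with the whole publication, while fractional counting credits each with 1/n. A short Python sketch with invented author lists:

    from collections import defaultdict

    publications = [  # invented author lists
        ["Alice", "Bob"],
        ["Alice", "Bob", "Carol"],
        ["Alice"],
    ]

    full, fractional = defaultdict(float), defaultdict(float)
    for authors in publications:
        for a in authors:
            full[a] += 1.0                       # full counting: 1 credit each
            fractional[a] += 1.0 / len(authors)  # fractional counting: 1/n each

    print(dict(full))        # {'Alice': 3.0, 'Bob': 2.0, 'Carol': 1.0}
    print(dict(fractional))  # Alice ≈ 1.83, Bob ≈ 0.83, Carol ≈ 0.33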