69 research outputs found

    Contribution of Information and Communication Technology (ICT) in Country’s H-Index

    The aim of this study is to examine the effect of Information and Communication Technology (ICT) development on a country’s scientific ranking as measured by the H-index. Moreover, this study applies the ICT development sub-indices, namely ICT Use, ICT Access and ICT Skill, to identify the distinct effect of each sub-index on a country’s H-index. To this purpose, the required data for a panel of 14 Middle East countries over the period 1995 to 2009 are collected. The findings show that ICT development increases the H-index of the sample countries. The results also indicate that the ICT Use and ICT Skill sub-indices contribute positively to a higher H-index, whereas the effect of ICT Access on a country’s H-index is not clear.
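
    The panel design described above can be illustrated with a minimal sketch, assuming a country-year data set holding the H-index and the three ICT sub-indices; the file name and column names below (country, year, hindex, ict_use, ict_access, ict_skill) are hypothetical placeholders, and a two-way fixed-effects regression is only one plausible reading of the study's specification, not the authors' exact model.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical panel: one row per country-year with the national H-index
        # and the three ICT development sub-indices.
        df = pd.read_csv("ict_hindex_panel.csv")

        # Country and year dummies give a two-way fixed-effects specification; the
        # coefficients on the sub-indices indicate their separate association with
        # the H-index.
        model = smf.ols(
            "hindex ~ ict_use + ict_access + ict_skill + C(country) + C(year)",
            data=df,
        )
        print(model.fit().summary())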

    WskaĆșniki bibliometryczne w ocenie aktywnoƛci publikacyjnej pracowników naukowych (Bibliometric indicators in the evaluation of the publication activity of researchers)

    The scientific potential of institutions can be assessed using a variety of numerical indicators. In order to determine the scientific level of Polish institutions, one should apply the indicators and methods used in international science assessment. To this end, researchers frequently compare publishing activity, citation frequency and bibliometric indicators intended to illustrate the qualitative rather than merely quantitative state of science. The paper presents the most commonly used measures for evaluating publishing activity (the number of publications, the number of citations, the Impact Factor and the Hirsch Index). It also discusses the pros and cons of the Hirsch Index as a method of evaluating the scientific achievements of both individuals and institutions. In addition, the authors refer to an evaluation system based on the Impact Factor.
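
    Since the Hirsch Index recurs throughout this list, a short generic sketch of how it is computed from a set of citation counts may help; it is not tied to the paper's data.

        def h_index(citations):
            """Largest h such that at least h papers have at least h citations each."""
            counts = sorted(citations, reverse=True)
            h = 0
            for rank, cites in enumerate(counts, start=1):
                if cites >= rank:
                    h = rank
                else:
                    break
            return h

        # Five papers cited 10, 8, 5, 4 and 3 times give h = 4.
        print(h_index([10, 8, 5, 4, 3]))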

    The measurement of low- and high-impact in citation distributions: technical results

    This paper introduces a novel methodology for comparing the citation distributions of research units working in the same homogeneous field. Given a critical citation level (CCL), we suggest using two real valued indicators to describe the shape of any distribution: a high-impact and a low-impact measure defined over the set of articles with citations above or below the CCL. The key to this methodology is the identification of a citation distribution with an income distribution. Once this step is taken, it is easy to realize that the measurement of low-impact coincides with the measurement of economic poverty. In turn, it is equally natural to identify the measurement of high-impact with the measurement of a certain notion of economic affluence. On the other hand, it is seen that the ranking of citation distributions according to a family of low-impact measures, originally suggested by Foster et al. (1984) for the measurement of economic poverty, is essentially characterized by a number of desirable axioms. Appropriately redefined, these same axioms lead to the selection of an equally convenient class of decomposable high-impact measures. These two families are shown to satisfy other interesting properties that make them potentially useful in empirical applications, including the comparison of research units working in different fields.
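
    As a rough sketch of the construction, the low-impact measure follows the Foster et al. (1984) family applied to citation shortfalls below the CCL; the high-impact analogue below is one natural mirror image of it and is an assumption, not necessarily the exact index characterized in the paper.

        def low_impact(citations, ccl, alpha=2.0):
            """FGT-style low-impact index: mean of ((ccl - c) / ccl) ** alpha over all
            articles, counting only those cited below the critical citation level."""
            n = len(citations)
            return sum(((ccl - c) / ccl) ** alpha for c in citations if c < ccl) / n

        def high_impact(citations, ccl, alpha=1.0):
            """Assumed high-impact analogue: mean relative citation surplus
            ((c - ccl) / ccl) ** alpha over articles cited above the CCL."""
            n = len(citations)
            return sum(((c - ccl) / ccl) ** alpha for c in citations if c > ccl) / n

        cites = [0, 1, 2, 3, 5, 8, 13, 40]
        print(low_impact(cites, ccl=5), high_impact(cites, ccl=5))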

    The citation merit of scientific publications

    We propose a new method to assess the merit of any set of scientific papers in a given field based on the citations they receive. Given a citation indicator, such as the mean citation or the h-index, we identify the merit of a given set of n articles with the probability that a randomly drawn sample of n articles from a reference set of articles in that field presents a lower citation index. The method allows for comparisons between research units of different sizes and fields. Using a dataset acquired from Thomson Scientific that contains the articles published in the periodical literature in the period 1998-2007, we show that the novel approach yields rankings of research units different from those obtained by a direct application of the mean citation or the h-index. Keywords: Citation analysis, Citation merit, Mean citation, h-index.
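
    The sampling idea can be sketched with a small Monte Carlo routine: the merit of a unit's n articles is estimated as the share of random n-article samples from the reference set whose indicator value falls below the unit's. The function names and the synthetic reference set are illustrative only, not the paper's data or exact computation.

        import random

        def citation_merit(unit_citations, reference_citations, indicator,
                           draws=10000, seed=0):
            """Probability that a random same-size sample from the reference set has a
            lower value of the chosen citation indicator than the unit's articles."""
            rng = random.Random(seed)
            n = len(unit_citations)
            target = indicator(unit_citations)
            below = sum(indicator(rng.sample(reference_citations, n)) < target
                        for _ in range(draws))
            return below / draws

        mean_citation = lambda xs: sum(xs) / len(xs)

        # Synthetic reference set standing in for a field's articles.
        ref_rng = random.Random(1)
        reference = [ref_rng.randint(0, 50) for _ in range(5000)]
        unit = [3, 7, 12, 20, 33]
        print(citation_merit(unit, reference, mean_citation))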

    References made and citations received by scientific articles

    This paper studies massive evidence about references made and citations received after a five-year citation window by 3.7 million articles published in 1998-2002 in 22 scientific fields. We find that the distributions of references made and citations received share a number of basic features across sciences. Reference distributions are rather skewed to the right, while citation distributions are even more highly skewed: the mean is about 20 percentage points to the right of the median, and articles with a remarkable or outstanding number of citations represent about 9% of the total. Moreover, the existence of a power law representing the upper tail of citation distributions cannot be rejected in 17 fields whose articles represent 74.7% of the total. Contrary to the evidence in other contexts, the value of the scale parameter is above 3.5 in 13 of the 17 cases. Finally, the power-law tails are typically small in the number of articles they contain, but they capture a considerable proportion of the total citations received.
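
    Two of the quantities discussed above are easy to make concrete: the position of the mean within the distribution (its percentile rank, which sits well above the median in skewed data) and a maximum-likelihood estimate of the scale parameter of a power-law upper tail. The estimator below is the standard Hill-type MLE with a continuous approximation for discrete counts; it is a generic sketch, not the exact fitting and testing procedure used by the authors.

        import math

        def percentile_rank_of_mean(citations):
            """Share of articles (in percent) cited less than the mean; for skewed
            citation distributions this lies well above the 50% mark of the median."""
            m = sum(citations) / len(citations)
            return 100.0 * sum(c < m for c in citations) / len(citations)

        def powerlaw_alpha(citations, xmin):
            """Hill-type MLE of the power-law scale parameter for the upper tail,
            using counts at or above xmin (continuous approximation with the usual
            xmin - 0.5 correction; a goodness-of-fit test is still needed)."""
            tail = [c for c in citations if c >= xmin]
            return 1.0 + len(tail) / sum(math.log(c / (xmin - 0.5)) for c in tail)

        # Synthetic, highly skewed sample.
        sample = [1] * 500 + [3] * 200 + [10] * 50 + [40, 80, 150, 400]
        print(percentile_rank_of_mean(sample), powerlaw_alpha(sample, xmin=10))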

    A comparison of the scientific performance of the U.S. and the European Union at the turn of the XXI century.

    In this paper, scientific performance is identified with the impact journal articles achieve through the citations they receive. The empirical exercise refers to 3.6 million articles published in 1998-2002 in 22 scientific fields, and the more than 47 million citations they received in 1998-2007. The first finding is that a failure to exclude co-authorship among member countries of the EU (European Union) may lead to a serious upward bias in the assignment of articles to this geographical area. In the second place, standard indicators, such as normalized mean citation ratios, are silent about what takes place in different parts of the citation distribution. Consequently, this paper compares the publication shares of the U.S. and the EU at every percentile of the world citation distribution in each field. In 15 disciplines, as well as in all sciences as a whole, the EU share of total publications is greater than that of the U.S. But as soon as the citations received by these publications are taken into account, the picture is completely reversed. The mean citation rate in the U.S. is greater than in the EU in every one of the 22 fields. In seven fields, the initial gap between the U.S. and the EU widens as we advance towards the more highly cited articles, while in the remaining 15 fields (except for Agricultural Sciences) the U.S. always surpasses the EU where it counts, namely at the upper tail of the citation distribution. For all sciences as a whole, the U.S. publication share becomes greater than that of the EU for the top 50% of the most highly cited articles.
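
    The percentile comparison can be sketched as follows: for each percentile of the world citation distribution, compute each area's share of the articles cited at or above that threshold. The sketch ignores the co-authorship and article-assignment issues the paper raises, and the toy data and names are purely illustrative.

        import numpy as np

        def share_above_percentiles(world_citations, unit_mask,
                                    percentiles=(50, 75, 90, 95, 99)):
            """For each percentile of the world citation distribution, the unit's
            share of the articles cited at or above that threshold."""
            world = np.asarray(world_citations, dtype=float)
            mask = np.asarray(unit_mask, dtype=bool)   # True where the article belongs to the unit
            shares = {}
            for p in percentiles:
                top = world >= np.percentile(world, p)
                shares[p] = float(mask[top].mean())    # unit's fraction of the top articles
            return shares

        # Toy example: synthetic citation counts and a flag marking one area's articles.
        rng = np.random.default_rng(0)
        cites = rng.negative_binomial(1, 0.1, size=10000)
        is_unit = rng.random(10000) < 0.3
        print(share_above_percentiles(cites, is_unit))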
