
    Bias in the journal impact factor

    The ISI journal impact factor (JIF) is based on a sample that may represent half the whole-of-life citations to some journals, but a small fraction (<10%) of the citations accruing to other journals. This disproportionate sampling means that the JIF provides a misleading indication of the true impact of journals, biased in favour of journals that have a rapid rather than a prolonged impact. Many journals exhibit a consistent pattern of citation accrual from year to year, so it may be possible to adjust the JIF to provide a more reliable indication of a journal's impact. Comment: 9 pages, 8 figures; one reference corrected
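    The two-year window that drives this bias can be made concrete with a minimal sketch of the standard JIF formula; the journal figures below are invented for illustration.

```python
# Hypothetical sketch of the standard two-year JIF calculation.
# All counts are invented; real values come from the citation index.

def journal_impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year JIF: citations received in year Y to items published in
    years Y-1 and Y-2, divided by the citable items from those years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# A journal whose citations accrue slowly over decades is under-sampled
# by this two-year window, which is the bias the abstract describes.
jif = journal_impact_factor(citations_to_prev_two_years=420,
                            citable_items_prev_two_years=150)
print(round(jif, 2))  # 2.8
```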

    Studying the heterogeneity of European higher education institutions

    The heterogeneity of Higher Education (HE) Institutions is one of the main critical issues in the assessment of their performance. This paper adopts a multi-level and multi-dimensional perspective, combining national (macro) and institution (micro) level data, and measuring both research and teaching activity, using performance indicators derived from the European Tertiary Education Register, CWTS Leiden Ranking, and PATSTAT patent database. Clustering and efficiency analysis are combined to characterize the heterogeneity of national HE systems in European countries, and reveal the potential of using micro-level data to characterize national-level performance. Large differences are observed between the European countries, partly because they are in different phases of their scientific (and economic) development and of the restructuring of their HE systems. Evidence is found that universities specializing either in teaching or in research tend to have a higher efficiency than institutions balancing research and teaching. Trade-offs are observed between undergraduate and post-graduate activities, and a “Matthew cumulative effect” seems to be at work in the European institutions analysed: high-quality research attracts external funds that stimulate innovative and patenting activities, which in turn reinforce the scientific activities. The results reveal once more the limits and dangers of one-dimensional approaches to the performance of HEIs.
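    The specialization finding can be illustrated with a toy output-per-input efficiency ratio; the institution profiles and weights below are invented, and real analyses of this kind (e.g., data envelopment analysis) handle multiple inputs and outputs jointly rather than with a single weighted ratio.

```python
# Toy sketch of an efficiency comparison between specialized and balanced
# institutions. All figures and weights are hypothetical.
institutions = {
    "research-focused": {"staff": 100, "papers": 300, "graduates": 50},
    "teaching-focused": {"staff": 100, "papers": 40, "graduates": 400},
    "balanced":         {"staff": 100, "papers": 120, "graduates": 160},
}

def efficiency(inst, weights=(1.0, 0.5)):
    """Weighted outputs (papers, graduates) per staff member.

    The weights are arbitrary; a DEA-style analysis would instead let
    each institution be evaluated under its most favourable weights."""
    w_papers, w_grads = weights
    return (w_papers * inst["papers"] + w_grads * inst["graduates"]) / inst["staff"]

for name, inst in institutions.items():
    print(name, round(efficiency(inst), 2))
```

    With these made-up numbers, both specialized profiles score above the balanced one, mirroring the pattern the abstract reports.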

    Novel Approaches to the Development and Application of Informetric and Scientometric Tools Special Issue of Journal of Data and Information Science on ISSI2019 Conference-Part II

    This is the second part of the Journal of Data and Information Science (JDIS) Special Issue on ISSI 2019, the 17th International Conference on Scientometrics and Informetrics (ISSI2019), held in Rome on 2–5 September 2019. It includes an additional 10 selected posters presented during the conference and substantially expanded by the authors afterwards. The papers included in this volume are grouped into three broad themes: Indicators & Databases (4 papers); Social context, Innovation, and Policy (3 papers); and Application domains (3 papers).

    The 17th International Conference on Scientometrics and Informetrics

    [No abstract available]

    The success-index: an alternative approach to the h-index for evaluating an individual's research output

    Among the most recent bibliometric indicators for normalizing the differences among fields of science in terms of citation behaviour, Kosmulski (J Informetr 5(3):481-485, 2011) proposed the NSP (number of successful papers) index. According to the authors, NSP deserves much attention for its great simplicity and immediate meaning, equivalent to those of the h-index, while it has the disadvantage of being prone to manipulation and not very efficient in terms of statistical significance. In the first part of the paper, we introduce the success-index, aimed at reducing the NSP-index's limitations, although requiring more computing effort. Next, we present a detailed analysis of the success-index from the point of view of its operational properties and a comparison with those of the h-index. Particularly interesting is the examination of the success-index scale of measurement, which is much richer than that of the h-index. This makes the success-index much more versatile for different types of analysis—e.g., (cross-field) comparisons of the scientific output of (1) individual researchers, (2) researchers with different seniority, (3) research institutions of different size, (4) scientific journals, etc.
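    The contrast between the two families of indicators can be sketched in a few lines. The per-paper thresholds below are invented; in the success-index proposal each paper's comparison term is derived from comparable publications (same field and year), which this toy does not model.

```python
# Minimal sketch contrasting the h-index with a success-index-style count.
# Citation counts and thresholds are hypothetical.

def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
    return h

def success_index(citations, thresholds):
    """Count papers whose citations reach their own comparison threshold.

    Unlike the h-index, each paper is judged against a paper-specific
    term, which is what enables cross-field comparisons."""
    return sum(c >= t for c, t in zip(citations, thresholds))

cites = [25, 12, 8, 6, 3, 1]
print(h_index(cites))                               # 4
print(success_index(cites, [10, 10, 10, 5, 5, 5]))  # 3
```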

    Universality of Performance Indicators based on Citation and Reference Counts

    We find evidence for the universality of two relative bibliometric indicators of the quality of individual scientific publications taken from different data sets. One of these is a new index that considers both citation and reference counts. We demonstrate this universality for relatively well cited publications from a single institute, grouped by year of publication and by faculty or by department. We show similar behaviour in publications submitted to the arXiv e-print archive, grouped by year of submission and by sub-archive. We also find that for reasonably well cited papers this distribution is well fitted by a lognormal with a variance of around 1.3, which is consistent with the results of Radicchi, Fortunato, and Castellano (2008). Our work demonstrates that comparisons can be made between publications from different disciplines and publication dates, regardless of their citation count and without expensive access to the whole world-wide citation graph. Further, it shows that averages of the logarithm of such relative bibliometric indices deal with the issue of long tails and avoid the need for statistics based on lengthy ranking procedures. Comment: 15 pages, 14 figures, 11 pages of supplementary material. Submitted to Scientometrics
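    The simplest relative indicator of this family, from Radicchi, Fortunato, and Castellano (2008), divides each paper's citation count by the mean of its field-year group. The toy data below are invented, and the abstract's new index (which also uses reference counts) is not reproduced here; the sketch only shows why the rescaling makes fields comparable.

```python
# Sketch of the relative citation indicator c_f = c / <c>_field.
# The two "fields" below are hypothetical and differ only by a scale
# factor in their citation counts.

def relative_indicator(citations, field_mean):
    """Rescale raw citation counts by the field's mean citation rate."""
    return [c / field_mean for c in citations]

physics = [40, 10, 5, 25]       # hypothetical raw citation counts
biology = [120, 30, 15, 75]     # same profile, three times the field mean

rel_phys = relative_indicator(physics, sum(physics) / len(physics))
rel_bio = relative_indicator(biology, sum(biology) / len(biology))
print(rel_phys == rel_bio)  # True: rescaling removes the field-size effect
```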

    A New Approach to Analyzing Patterns of Collaboration in Co-authorship Networks - Mesoscopic Analysis and Interpretation

    This paper focuses on methods to study patterns of collaboration in co-authorship networks at the mesoscopic level. We combine qualitative methods (participant interviews) with quantitative methods (network analysis) and demonstrate the application and value of our approach in a case study comparing three research fields in chemistry. A mesoscopic level of analysis means that in addition to the basic analytic unit of the individual researcher as node in a co-author network, we base our analysis on the observed modular structure of co-author networks. We interpret the clustering of authors into groups as bibliometric footprints of the basic collective units of knowledge production in a research specialty. We find two types of co-author linking patterns between author clusters that we interpret as representing two different forms of cooperative behavior: transfer-type connections due to career migrations or one-off services rendered, and stronger, dedicated inter-group collaboration. Hence the generic co-author network of a research specialty can be understood as the overlay of two distinct types of cooperative networks between groups of authors publishing in a research specialty. We show how our analytic approach exposes field-specific differences in the social organization of research. Comment: An earlier version of the paper was presented at ISSI 2009, 14-17 July, Rio de Janeiro, Brazil. Revised version accepted on 2 April 2010 for publication in Scientometrics. Removed part on node-role connectivity profile analysis after finding error in calculation and deciding to postpone analysis
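    A dependency-free toy can illustrate the mesoscopic idea: group authors by repeated (strong) collaboration and then look at the weak ties left between groups. The edge list is invented, and the grouping here is plain connected components over strong edges; the literature typically uses modularity-based community detection instead.

```python
from collections import defaultdict

# Toy co-author edge list with joint-paper counts (all invented).
edges = [("A", "B", 5), ("B", "C", 4), ("C", "A", 3),  # group 1
         ("D", "E", 6), ("E", "F", 4),                  # group 2
         ("C", "D", 1)]                                 # weak inter-group tie

def strong_groups(edges, min_weight=2):
    """Cluster authors via edges with at least min_weight joint papers."""
    adj = defaultdict(set)
    for u, v, w in edges:
        if w >= min_weight:
            adj[u].add(v)
            adj[v].add(u)
    seen, groups = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n not in comp:
                comp.add(n)
                stack.extend(adj[n] - comp)
        seen |= comp
        groups.append(comp)
    return groups

groups = strong_groups(edges)
# Weak edges crossing group boundaries: candidates for "transfer-type" links.
transfer = [(u, v) for u, v, w in edges
            if w < 2 and any(u in g and v not in g for g in groups)]
print(len(groups), transfer)  # 2 [('C', 'D')]
```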

    Does the arXiv lead to higher citations and reduced publisher downloads for mathematics articles?

    An analysis of 2,765 articles published in four math journals from 1997 to 2005 indicates that articles deposited in the arXiv received 35% more citations on average than non-deposited articles (an advantage of about 1.1 citations per article), and that this difference was most pronounced for highly-cited articles. Open Access, Early View, and Quality Differential were examined as three non-exclusive postulates for explaining the citation advantage. There was little support for a universal Open Access explanation, and no empirical support for Early View. There was some inferential support for a Quality Differential brought about by more highly-citable articles being deposited in the arXiv. In spite of their citation advantage, arXiv-deposited articles received 23% fewer downloads from the publisher's website (about 10 fewer downloads per article) in all but the most recent two years after publication. The data suggest that arXiv and the publisher's website may be fulfilling distinct functional needs of the reader. Comment: Last updated May 02, 200
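    The two headline figures (35% relative advantage, ~1.1 citations absolute advantage) jointly imply a baseline mean for non-deposited articles; the back-of-the-envelope check below uses only the abstract's numbers, and the implied baseline is an inference, not a figure stated in the paper.

```python
# Back-of-the-envelope consistency check of the abstract's figures.
advantage_pct = 0.35   # deposited articles: 35% more citations (from abstract)
advantage_abs = 1.1    # about 1.1 extra citations per article (from abstract)

# If mean_deposited = (1 + 0.35) * mean_non, then the absolute gap is
# 0.35 * mean_non, so the implied non-deposited mean is:
implied_non_deposited_mean = advantage_abs / advantage_pct
print(round(implied_non_deposited_mean, 2))  # 3.14
```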