
    Information Metrics (iMetrics): A Research Specialty with a Socio-Cognitive Identity?

    "Bibliometrics", "scientometrics", "informetrics", and "webometrics" can all be considered as manifestations of a single research area with similar objectives and methods, which we call "information metrics" or iMetrics. This study explores the cognitive and social distinctness of iMetrics with respect to the general information science (IS), focusing on a core of researchers, shared vocabulary and literature/knowledge base. Our analysis investigates the similarities and differences between four document sets. The document sets are drawn from three core journals for iMetrics research (Scientometrics, Journal of the American Society for Information Science and Technology, and Journal of Informetrics). We split JASIST into document sets containing iMetrics and general IS articles. The volume of publications in this representation of the specialty has increased rapidly during the last decade. A core of researchers that predominantly focus on iMetrics topics can thus be identified. This core group has developed a shared vocabulary as exhibited in high similarity of title words and one that shares a knowledge base. The research front of this field moves faster than the research front of information science in general, bringing it closer to Price's dream.Comment: Accepted for publication in Scientometric

    Qualitative conditions of scientometrics: the new challenges

    While scientometrics is now an established field, there are challenges. A closer look at how scientometricians aggregate building blocks into artfully made products, and point-represent these (e.g. as the map of field X), allows one to overcome the dependence on judgements of scientists for validation, and to replace or complement these with intrinsic validation based on quality checks of the several steps. Such quality checks require qualitative analysis of the domains being studied. Qualitative analysis is also necessary when non-institutionalized domains and/or domains which do not emphasize texts are to be studied. A further challenge is to reflect on the effects of scientometrics on the development of science; indicators could lead to 'induced' aggregation. The availability of scientometric tools and insights might allow scientists and science to become more reflexive.

    A categorization of arguments for counting methods for publication and citation indicators

    Most publication and citation indicators are based on datasets with multi-authored publications, and thus a change in counting method will often change the value of an indicator. Therefore, it is important to know why a specific counting method has been applied. I have identified arguments for counting methods in a sample of 32 bibliometric studies published in 2016 and compared the results with discussions of arguments for counting methods in three older studies. Based on the underlying logics of the arguments, I have arranged the arguments in four groups. Group 1 focuses on arguments related to what an indicator measures, Group 2 on the additivity of a counting method, Group 3 on pragmatic reasons for the choice of counting method, and Group 4 on an indicator's influence on the research community or how it is perceived by researchers. This categorization can be used to describe and discuss how bibliometric studies with publication and citation indicators argue for counting methods.
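    To make concrete why a change in counting method changes the value of an indicator, here is a minimal sketch, not taken from the paper, contrasting full (whole) counting with fractional counting of multi-authored publications; the publication list is hypothetical.

        # Hypothetical publication list with multi-authored papers
        papers = [
            {"authors": ["A", "B"]},        # two co-authors
            {"authors": ["A", "C", "D"]},   # three co-authors
        ]

        def full_count(papers, author):
            # Whole counting: each paper adds 1 for every listed author.
            return sum(1 for p in papers if author in p["authors"])

        def fractional_count(papers, author):
            # Fractional counting: each paper is shared equally among its authors.
            return sum(1 / len(p["authors"]) for p in papers if author in p["authors"])

        print(full_count(papers, "A"))        # 2
        print(fractional_count(papers, "A"))  # 1/2 + 1/3 = 0.83...

    The same dataset thus yields a publication indicator of 2 or roughly 0.83 for author A, depending solely on the counting method.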

    Co-word maps of biotechnology: an example of cognitive scientometrics

    To analyse developments of scientific fields, scientometrics provides useful tools, provided one is prepared to take the content of scientific articles into account. Such cognitive scientometrics is illustrated by using as data a ten-year period of articles from a biotechnology core journal. After coding with key-words, the relations between articles are brought out by co-word analysis. Maps of the field are given, showing connections between areas and their change over time, and with respect to the institutions in which research is performed. In addition, other approaches are explored, including an indicator of 'theoretical level' of bodies of articles.
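    The abstract does not spell out the computation, but the core of co-word analysis is counting how often two keywords are assigned to the same article; a minimal sketch with hypothetical keyword codings follows.

        from collections import Counter
        from itertools import combinations

        # Hypothetical keyword codings of three biotechnology articles
        articles = [
            {"fermentation", "enzyme", "bioreactor"},
            {"enzyme", "protein engineering"},
            {"fermentation", "bioreactor"},
        ]

        co_occurrence = Counter()
        for keywords in articles:
            for pair in combinations(sorted(keywords), 2):
                co_occurrence[pair] += 1

        # Keyword pairs with high co-occurrence counts become the links of a co-word map.
        for pair, count in co_occurrence.most_common(3):
            print(pair, count)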

    Citation gaming induced by bibliometric evaluation: a country-level comparative analysis

    It is several years since national research evaluation systems around the globe started making use of quantitative indicators to measure the performance of researchers. Nevertheless, the effects of these systems on the behavior of the evaluated researchers are still largely unknown. We attempt to shed light on this topic by investigating how Italian researchers reacted to the introduction in 2011 of national regulations in which key passages of professional careers are governed by bibliometric indicators. A new inwardness measure, able to gauge the degree of scientific self-referentiality of a country, is defined as the proportion of citations coming from the country itself relative to the total number of citations gathered by the country. Compared to the trends of the other G10 countries in the period 2000-2016, Italy's inwardness shows a net increase after the introduction of the new evaluation rules. Indeed, globally and also for a large majority of the research fields, Italy became the European country with the highest inwardness. Possible explanations are proposed and discussed, concluding that the observed trends are strongly suggestive of a generalized strategic use of citations, both in the form of author self-citations and of citation clubs. We argue that the Italian case offers crucial insights on the constitutive effects of evaluation systems. As such, it could become a paradigmatic case in the debate about the use of indicators in science-policy contexts.
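    A minimal sketch of the inwardness measure as defined in the abstract, i.e. the share of a country's incoming citations that originate from the country itself (the figures below are hypothetical):

        def inwardness(domestic_citations, total_citations):
            """Proportion of citations to a country's papers that come from that same country."""
            return domestic_citations / total_citations if total_citations else 0.0

        # Hypothetical counts for one country in one year
        print(inwardness(domestic_citations=42_000, total_citations=150_000))  # 0.28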

    Contribution of Information and Communication Technology (ICT) in Country's H-Index

    The aim of this study is to examine the effect of Information and Communication Technology (ICT) development on a country's scientific ranking as measured by the H-index. Moreover, this study applies ICT development sub-indices, including ICT Use, ICT Access and ICT Skill, to find the distinct effects of these sub-indices on a country's H-index. To this purpose, the required data for a panel of 14 Middle East countries over the period 1995 to 2009 are collected. Findings of the current study show that ICT development increases the H-index of the sample countries. The results also indicate that the ICT Use and ICT Skill sub-indices positively contribute to a higher H-index, but the effect of ICT Access on a country's H-index is not clear.
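    Since the country-level H-index is the dependent variable here, a minimal sketch of how an H-index is computed from citation counts may help (the example numbers are hypothetical):

        def h_index(citation_counts):
            """Largest h such that h publications each have at least h citations."""
            counts = sorted(citation_counts, reverse=True)
            h = 0
            for rank, cites in enumerate(counts, start=1):
                if cites >= rank:
                    h = rank
                else:
                    break
            return h

        print(h_index([25, 8, 5, 4, 3, 1]))  # 4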

    An Integrated Impact Indicator (I3): A New Definition of "Impact" with Policy Relevance

    Allocation of research funding, as well as promotion and tenure decisions, is increasingly made using indicators and impact factors drawn from citations to published work. A debate among scientometricians about the proper normalization of citation counts has been resolved with the creation of an Integrated Impact Indicator (I3) that solves a number of problems found among previously used indicators. The I3 applies non-parametric statistics using percentiles, allowing highly cited papers to be weighted more than less cited ones. It further allows unbundling of venues (i.e., journals or databases) at the article level. Measures at the article level can be re-aggregated in terms of units of evaluation. At the venue level, the I3 creates a properly weighted alternative to the journal impact factor. I3 has the added advantage of enabling and quantifying classifications such as the six percentile rank classes used by the National Science Board's Science & Engineering Indicators. Comment: Research Evaluation (in press).
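    The abstract describes I3 as a percentile-based, non-parametric indicator; below is a minimal sketch of one plausible reading, summing each paper's citation percentile within a reference set. The six National Science Board percentile rank classes could be used as weights instead, and all numbers shown are hypothetical.

        def percentile(value, reference):
            """Share of the reference distribution at or below `value`, on a 0-100 scale."""
            return 100.0 * sum(1 for r in reference if r <= value) / len(reference)

        def i3(unit_citations, reference_citations):
            """Integrated Impact Indicator: sum of percentile scores of a unit's papers."""
            return sum(percentile(c, reference_citations) for c in unit_citations)

        reference = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]   # hypothetical field baseline
        print(i3([3, 21], reference))                    # two hypothetical papers -> 140.0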