
    A categorization of arguments for counting methods for publication and citation indicators

    Most publication and citation indicators are based on datasets with multi-authored publications, and thus a change in counting method will often change the value of an indicator. Therefore, it is important to know why a specific counting method has been applied. I have identified arguments for counting methods in a sample of 32 bibliometric studies published in 2016 and compared the result with discussions of arguments for counting methods in three older studies. Based on the underlying logic of the arguments, I have arranged them in four groups. Group 1 focuses on arguments related to what an indicator measures, Group 2 on the additivity of a counting method, Group 3 on pragmatic reasons for the choice of counting method, and Group 4 on an indicator's influence on the research community or how it is perceived by researchers. This categorization can be used to describe and discuss how bibliometric studies with publication and citation indicators argue for counting methods.
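    The abstract does not name specific counting methods, but the classic contrast is between full counting (each co-author receives credit 1) and fractional counting (the publication's credit is split among its co-authors). A minimal Python sketch, with invented publication data rather than anything from the study, shows how the choice changes per-author scores and why only fractional counting keeps the indicator additive, the concern behind Group 2.

```python
# Illustrative sketch with invented data: full vs. fractional counting of
# multi-authored publications per author.
from collections import defaultdict

publications = [
    {"id": "P1", "authors": ["A", "B"]},
    {"id": "P2", "authors": ["A", "B", "C"]},
    {"id": "P3", "authors": ["C"]},
]

def full_counting(pubs):
    """Each author of a publication receives a full credit of 1."""
    scores = defaultdict(float)
    for pub in pubs:
        for author in pub["authors"]:
            scores[author] += 1.0
    return dict(scores)

def fractional_counting(pubs):
    """Each publication's single credit is split equally among its authors."""
    scores = defaultdict(float)
    for pub in pubs:
        share = 1.0 / len(pub["authors"])
        for author in pub["authors"]:
            scores[author] += share
    return dict(scores)

print(full_counting(publications))        # {'A': 2.0, 'B': 2.0, 'C': 2.0}
print(fractional_counting(publications))  # {'A': 0.83, 'B': 0.83, 'C': 1.33} (rounded)
```

    With these three publications the fractional scores sum to 3.0 (the number of publications), while the full-counting scores sum to 6.0, which is exactly the kind of additivity argument the categorization records under Group 2.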

    Qualitative conditions of scientometrics: the new challenges

    While scientometrics is now an established field, there are challenges. A closer look at how scientometricians aggregate building blocks into artfully made products, and point-represent these (e.g. as the map of field X), allows one to overcome the dependence on judgements of scientists for validation, and to replace or complement these with intrinsic validation, based on quality checks of the several steps. Such quality checks require qualitative analysis of the domains being studied. Qualitative analysis is also necessary when non-institutionalized domains and/or domains which do not emphasize texts are to be studied. A further challenge is to reflect on the effects of scientometrics on the development of science; indicators could lead to 'induced' aggregation. The availability of scientometric tools and insights might allow scientists and science to become more reflexive.

    A webometric analysis of Australian Universities using staff and size dependent web impact factors (WIF)

    This study describes how search engines (SE) can be employed for automated, efficient data gathering for Webometric studies using predictable URLs. It then compares staff-related Web Impact Factors (WIFs) with size-related impact factors for a ranking of Australian universities, showing that rankings based on staff-related WIFs correlate much better with an established ranking from the Melbourne Institute than the commonly used size-dependent WIFs. In fact, size-dependent WIFs do not correlate with the Melbourne ranking at all. The study also draws on WIF data for Australian universities provided by Smith (1999) for a longitudinal comparison of WIFs over the last decade, showing that size-dependent WIF values declined for most Australian universities over the last ten years, while staff-dependent WIFs rose.
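    As a rough illustration of the two indicator variants (the inlink, page, and staff figures below are invented, not the study's data), a Web Impact Factor is the number of external inlinks divided by a size measure, either the number of indexed pages or the number of academic staff. The sketch computes both variants and rank-correlates each with a hypothetical ranking score, mirroring the comparison against the Melbourne Institute ranking.

```python
# Minimal sketch, assuming invented figures: size-based vs. staff-based
# Web Impact Factors, rank-correlated with an external ranking score.
from scipy.stats import spearmanr

universities = {  # hypothetical data, not taken from the study
    "Uni A": {"inlinks": 12000, "pages": 150000, "staff": 2000},
    "Uni B": {"inlinks": 8000,  "pages": 80000,  "staff": 1500},
    "Uni C": {"inlinks": 5000,  "pages": 40000,  "staff": 1200},
}
ranking = {"Uni A": 95.0, "Uni B": 80.0, "Uni C": 70.0}  # hypothetical scores

size_wif  = {u: d["inlinks"] / d["pages"] for u, d in universities.items()}
staff_wif = {u: d["inlinks"] / d["staff"] for u, d in universities.items()}

names = list(universities)
rho_size, _  = spearmanr([size_wif[n] for n in names],  [ranking[n] for n in names])
rho_staff, _ = spearmanr([staff_wif[n] for n in names], [ranking[n] for n in names])
print(f"size-based WIF vs ranking:  rho = {rho_size:.2f}")
print(f"staff-based WIF vs ranking: rho = {rho_staff:.2f}")
```

    With these invented numbers the staff-based WIF tracks the ranking while the size-based WIF runs against it, which matches the direction of the paper's finding; the real outcome depends, of course, on the actual link counts.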

    The use of S&T indicators in science policy: Dutch experiences and theoretical perspectives from policy analysis

    The relation between bibliometrics and science policy remains underdeveloped. Relevance of new methods to produce indicators is easily claimed, but often without real insight into the policy processes. Drawing on experiences with the use of S&T indicators in science policy in the Netherlands and on principal-agent theory, I develop an analytical perspective which enables an assessment of the role of S&T indicators in science policy. It is argued that the use of S&T indicators can only be understood well if one takes the socio-political context, with its specific dynamics and rationalities, into account.

    Co-word maps of biotechnology: an example of cognitive scientometrics

    To analyse developments of scientific fields, scientometrics provides useful tools, provided one is prepared to take the content of scientific articles into account. Such cognitive scientometrics is illustrated by using as data a ten-year period of articles from a biotechnology core journal. After coding with key-words, the relations between articles are brought out by co-word analysis. Maps of the field are given, showing connections between areas and their change over time, and with respect to the institutions in which research is performed. In addition, other approaches are explored, including an indicator of the 'theoretical level' of bodies of articles.
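    As a sketch of the underlying co-word step (the keyword lists below are invented, not drawn from the biotechnology journal analysed), co-word analysis counts how often pairs of key-words are assigned to the same article; the weighted pairs then serve as the links of a map.

```python
# Illustrative sketch with invented keyword lists: count keyword pairs that
# co-occur within the same article.
from collections import Counter
from itertools import combinations

articles = [
    ["fermentation", "enzyme", "bioreactor"],
    ["enzyme", "protein engineering"],
    ["fermentation", "bioreactor", "scale-up"],
]

cooccurrence = Counter()
for keywords in articles:
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccurrence[(a, b)] += 1

# Pairs with higher counts indicate stronger links between research areas.
for (a, b), weight in cooccurrence.most_common():
    print(f"{a} -- {b}: {weight}")
```

    Repeating the count per time window, or per institution, yields the kind of maps of change over time and of institutional differences described in the abstract.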

    Editorial for the First Workshop on Mining Scientific Papers: Computational Linguistics and Bibliometrics

    The workshop "Mining Scientific Papers: Computational Linguistics and Bibliometrics" (CLBib 2015), co-located with the 15th International Society of Scientometrics and Informetrics Conference (ISSI 2015), brought together researchers in Bibliometrics and Computational Linguistics in order to study the ways Bibliometrics can benefit from large-scale text analytics and sense mining of scientific papers, thus exploring the interdisciplinarity of Bibliometrics and Natural Language Processing (NLP). The goals of the workshop were to answer questions like: How can we enhance author network analysis and Bibliometrics using data obtained by text analytics? What insights can NLP provide on the structure of scientific writing, on citation networks, and on in-text citation analysis? This workshop is a first step in fostering reflection on this interdisciplinarity and on the benefits that the two disciplines, Bibliometrics and Natural Language Processing, can derive from it.