
    Trends in Russian research output indexed in Scopus and Web of Science

    Trends are analysed in the annual number of documents published by Russian institutions and indexed in Scopus and Web of Science, with special attention to the period starting in 2013, when Project 5-100 was launched by the Russian Government. Numbers are broken down by document type, publication language, type of source, research discipline, country and source. It is concluded that Russian publication counts depend strongly on the database used and on changes in database coverage, and that one should be cautious when using indicators derived from WoS, and especially from Scopus, as tools for measuring the research performance and international orientation of the Russian science system. Comment: Author copy of a manuscript accepted for publication in the journal Scientometrics, May 201

    The structure of the Arts & Humanities Citation Index: A mapping on the basis of aggregated citations among 1,157 journals

    Using the Arts & Humanities Citation Index (A&HCI) 2008, we apply mapping techniques previously developed for mapping journal structures in the Science and Social Science Citation Indices. Citation relations among the 110,718 records were aggregated at the level of the 1,157 journals specific to the A&HCI, and we examine whether a cognitive structure can be reconstructed and visualized from these journal structures. Both cosine normalization (bottom up) and factor analysis (top down) suggest a division into approximately twelve subsets. The relations among these subsets are explored using various visualization techniques. However, we were not able to retrieve this structure using the ISI Subject Categories, including the 25 categories which are specific to the A&HCI. We discuss options for validation, such as comparison against the categories of the Humanities Indicators of the American Academy of Arts and Sciences and the panel structure of the European Reference Index for the Humanities (ERIH), and we compare our results with the curriculum organization of the Humanities Section of the College of Letters and Sciences of UCLA as an example of institutional organization.
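The cosine-normalization step mentioned in the abstract can be sketched as follows: each row of the aggregated journal-journal citation matrix is treated as a vector, and journals are compared by the cosine of the angle between their citing profiles. This is a minimal sketch; the 4x4 matrix of citation counts is invented for illustration.

```python
import math

def cosine_matrix(m):
    """Return the matrix of cosine similarities between the rows of m."""
    def cos(a, b):
        num = sum(x * y for x, y in zip(a, b))
        den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return num / den if den else 0.0
    return [[cos(r, s) for s in m] for r in m]

# Rows = citing journal, columns = cited journal (toy counts).
citations = [
    [0, 12, 3, 0],
    [10, 0, 1, 0],
    [2, 1, 0, 8],
    [0, 0, 9, 0],
]
sim = cosine_matrix(citations)
```

In practice the resulting similarity matrix is then thresholded or fed to a clustering/visualization routine to obtain the journal groupings.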

    Evaluating a Department's Research: Testing the Leiden Methodology in Business and Management

    The Leiden methodology (LM), sometimes called the "crown indicator", is a quantitative method for evaluating the research quality of a research group or academic department based on the citations received by the group in comparison with averages for the field. There have been a number of applications, but these have mainly been in the hard sciences, where the citation data provided by the ISI Web of Science (WoS) are more reliable. In the social sciences, including business and management, many journals and books are not included in WoS, and so the LM has not been tested there. In this study the LM was applied to a dataset of over 3,000 research publications from three UK business schools. The results show that the LM does indeed discriminate between the schools and has a degree of concordance with other forms of evaluation, but that there are significant limitations and problems within this discipline.
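The crown indicator described above compares a group's mean citations per paper against field averages. A minimal sketch of one common form (mean citations per publication divided by the mean of the world-average field citation rates) is below; the citation counts and field averages are invented for illustration.

```python
def crown_indicator(paper_citations, field_averages):
    """paper_citations: citations received by each of the group's papers.
    field_averages: world-average citation rate for each paper's field,
    given in the same order as the papers."""
    cpp = sum(paper_citations) / len(paper_citations)    # citations per publication
    fcsm = sum(field_averages) / len(field_averages)     # mean field citation score
    return cpp / fcsm

# A group whose papers are cited exactly at the field average scores 1.0.
score = crown_indicator([4, 8, 0, 12], [6, 6, 6, 6])  # → 1.0
```

A score above 1.0 indicates citation impact above the world average for the group's fields; this ratio-of-means form is one variant, and later literature debates alternatives such as averaging per-paper ratios instead.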

    Bibliographic Control of Serial Publications

    An important problem with serials is bibliographic control. What good does it do for libraries to select, acquire, record, catalog, and bind large holdings of serial publications if the contents of those serials remain a mystery to all except the few who have the opportunity to examine selected journals of continuing personal interest and have discovered some magic way of retaining the gist of the contents? Bibliographic control is the indexing and abstracting of the contents or guts of what is included in the serials. It is this control, provided by secondary publishing services, which this article will discuss. Just as there are problems with serials in general, there are some easily identifiable problems connected with their bibliographic control, including volume, overlap, costs, elements and methods, and a few other miscellaneous considerations. Some history of bibliographic control will also put the current problems in a helpful perspective. Hereafter "bibliographic control" will be designated by the term "abstracting and indexing," one of these alone, or the shorter "a & i." (I do distinguish between abstracting and indexing, and believe that they are not listed in order of importance and difficulty.) Although a & i do provide bibliographic control, this paper will not discuss cataloging, tables of contents, back-of-the-book indexes, year-end indexes, cumulative indexes, lists of advertisers, or bibliographies. If there is to be control, there must always be indexing. Abstracting is a short cut, a convenience, and perhaps a bibliographic luxury which may be now, or is fast becoming, too rich, in light of other factors to be discussed, for library blood and for the users of libraries, especially for the users of indexes who may not depend upon the library interface. Abstracting, though, provides a desirable control, and one which will continue to be advocated.

    The journal coverage of Web of Science and Scopus : a comparative analysis

    Bibliometric methods are used in multiple fields for a variety of purposes, notably for research evaluation. Most bibliometric analyses have their data sources in common: Thomson Reuters' Web of Science (WoS) and Elsevier's Scopus. The objective of this research is to describe the journal coverage of those two databases and to assess whether some fields, publishing countries and languages are over- or underrepresented. To do this we compared the coverage of active scholarly journals in WoS (13,605 journals) and Scopus (20,346 journals) with Ulrich's extensive periodical directory (63,013 journals). Results indicate that the use of either WoS or Scopus for research evaluation may introduce biases that favor the Natural Sciences and Engineering as well as Biomedical Research, to the detriment of the Social Sciences and the Arts and Humanities. Similarly, English-language journals are overrepresented to the detriment of other languages. While both databases share these biases, their coverage differs substantially. As a consequence, the results of bibliometric analyses may vary depending on the database used. These results imply that, in the context of comparative research evaluation, WoS and Scopus should be used with caution, especially when comparing different fields, institutions, countries or languages. The bibliometric community should continue its efforts to develop methods and indicators that include scientific output not covered in WoS or Scopus, such as field-specific and national citation indexes.
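The coverage comparison described above amounts to simple set arithmetic over journal lists: each database's coverage is the share of an authoritative reference list (here Ulrich's) that it indexes, and the databases' mutual overlap can be summarized with a Jaccard ratio. A minimal sketch with invented journal identifiers:

```python
def coverage(database, reference):
    """Share of the reference journal list indexed by the database."""
    return len(database & reference) / len(reference)

# Toy journal lists; real analyses would match on ISSNs.
ulrichs = {"J1", "J2", "J3", "J4", "J5", "J6"}
wos     = {"J1", "J2", "J3"}
scopus  = {"J1", "J2", "J3", "J4"}

wos_share    = coverage(wos, ulrichs)                  # 0.5
scopus_share = coverage(scopus, ulrichs)
overlap      = len(wos & scopus) / len(wos | scopus)   # Jaccard overlap, 0.75
```

Per-field or per-language bias is measured the same way, by computing these shares within each subgroup of the reference list and comparing them.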

    A review of the literature on citation impact indicators

    Citation impact indicators now play an important role in research evaluation, and consequently these indicators have received a lot of attention in the bibliometric and scientometric literature. This paper provides an in-depth review of the literature on citation impact indicators. First, an overview is given of the literature on bibliographic databases that can be used to calculate citation impact indicators (Web of Science, Scopus, and Google Scholar). Next, selected topics in the literature on citation impact indicators are reviewed in detail. The first topic is the selection of publications and citations to be included in the calculation of citation impact indicators. The second topic is the normalization of citation impact indicators, in particular normalization for field differences. Counting methods for dealing with co-authored publications are the third topic, and citation impact indicators for journals are the last topic. The paper concludes by offering some recommendations for future research.
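One of the counting methods the review covers for co-authored publications is fractional counting: each paper contributes 1/n to each of its n contributing units, so the totals across units sum to the number of papers. A minimal sketch with invented author lists:

```python
from collections import defaultdict

def fractional_counts(papers):
    """papers: list of author lists; returns per-author fractional output."""
    counts = defaultdict(float)
    for authors in papers:
        share = 1.0 / len(authors)   # each co-author gets an equal fraction
        for a in authors:
            counts[a] += share
    return dict(counts)

papers = [["A", "B"], ["A", "B", "C"], ["C"]]
out = fractional_counts(papers)   # totals sum to 3.0, the number of papers
```

Full counting, by contrast, would credit each author with 1 per paper, which inflates totals for fields with long author lists; the choice between the two is one of the debates the review surveys.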

    Benchmarking citation measures among the Australian education professoriate

    Individual researchers and the organisations for which they work are interested in comparative measures of research performance for a variety of purposes. Such comparisons are facilitated by quantifiable measures that are easily obtained and offer convenience and a sense of objectivity. One popular measure is the Journal Impact Factor, which is based on citation rates but is intended for journals rather than individuals. Moreover, educational research publications are not well represented in the databases most widely used for the calculation of citation measures, leading to doubts about the usefulness of such measures in education. Newer measures and data sources offer alternatives that provide wider representation of education research. However, research has shown that citation rates vary according to discipline, and valid comparisons depend upon the availability of discipline-specific benchmarks. This study sought to provide such benchmarks for Australian educational researchers, based on an analysis of citation measures obtained for the Australian education professoriate.
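The two-year Journal Impact Factor mentioned above is a simple ratio: citations received in year Y to items the journal published in years Y-1 and Y-2, divided by the number of citable items in those two years. A minimal sketch with invented counts:

```python
def impact_factor(cites_to_prev_two_years, citable_items_prev_two_years):
    """Two-year Journal Impact Factor as a plain ratio."""
    return cites_to_prev_two_years / citable_items_prev_two_years

# 300 citations in 2024 to articles from 2022-2023; 150 citable items.
jif = impact_factor(300, 150)  # → 2.0
```

As the abstract notes, this is a journal-level average and says little about any individual researcher's papers, which is one reason discipline-specific benchmarks are needed instead.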