
    Peer assessment or promotion by numbers? A comparative study of different measures of researcher performance within the UK Library and Information Science research community

    Hirsch’s h-index, Egghe’s g-index, total citation and publication counts, and five proposed new metrics were correlated with one another using Spearman’s rank correlation for one hundred randomly selected academics and researchers working in UK Library and Information Science departments. Metrics were compared for individuals of different genders and at institutions awarded different RAE (2001) grades. Individuals’ metrics were rank-correlated against their academic ranks and the RAE (2001) grades of their employing departments. Metrics calculated using Web of Science and Google Scholar data were compared, and peer-ranked and h-index-ranked orders of researchers were rank-correlated. The citation behaviour of, and attitudes towards peer- and citation-based assessment among, 263 academics and researchers were investigated by factor analysis of responses to an online attitudinal survey. h increased curvilinearly with total citation and publication counts, suggesting that h was constrained by the level of activity in the field, which prevented individuals from producing enough heavily cited publications to raise their h-index scores. Most individuals therefore shared similar h-index scores, making interpersonal comparisons difficult. Total citation counts and Jin Bihui’s a-index scores distinguished between more individuals, though whether they could confidently identify differences between individuals is uncertain. Both databases arbitrarily omitted individuals and publications, systematically biasing the citation metrics calculated from them. In contrast to studies of larger fields, no citation metric correlated with RAE grade, academic rank, or direct peer assessment, suggesting that citation-based assessment is unsuitable for research fields with relatively little research activity. No gender bias was evident in academic rank, esteem, or citedness. At least nine independent factors influence citation behaviour, with Mertonian factors dominating; the independence of the factors suggests that different individuals have different combinations of non-Mertonian motivations. Citations were confirmed to function above all as signals of relevance and reward. Recommendations for future research include developing simple, robust methods to identify subfields and normalise citations across them, quantifying the impact of random bias and determining whether it varies across subfields, and, in particular, studying the rate at which citations accumulate and how citation distributions change for individuals (and departments) over time, to determine whether career age can be controlled for.
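
    As a concrete illustration of the core computations named above, the sketch below derives h-index scores from per-paper citation counts and rank-correlates two author-level metrics with Spearman’s rho; the citation data are invented for illustration, not drawn from the study.

```python
# Sketch: h-index from per-paper citation counts, then a Spearman
# rank correlation between two author-level metrics. Data invented.
from scipy.stats import spearmanr

def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

authors = {  # hypothetical per-paper citation counts
    "A": [10, 8, 5, 4, 3],
    "B": [25, 2, 1, 1],
    "C": [6, 6, 6, 6, 6, 6],
}

h_scores = [h_index(c) for c in authors.values()]
total_cites = [sum(c) for c in authors.values()]
rho, p_value = spearmanr(h_scores, total_cites)
print(h_scores, total_cites, rho)
```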

    Citation Analysis: A Comparison of Google Scholar, Scopus, and Web of Science

    When faculty members are evaluated, they are judged in part by the impact and quality of their scholarly publications. While all academic institutions look to publication counts and venues as well as the subjective opinions of peers, many hiring, tenure, and promotion committees also rely on citation analysis to obtain a more objective assessment of an author’s work. Consequently, faculty members try to identify as many citations to their published works as possible to provide a comprehensive assessment of their publication impact on the scholarly and professional communities. The Institute for Scientific Information’s (ISI) citation databases, which are widely used as a starting point if not the only source for locating citations, have several limitations that may leave gaps in the coverage of citations to an author’s work. This paper presents a case study comparing citations found in Scopus and Google Scholar with those found in Web of Science (the portal used to search the three ISI citation databases) for items published by two Library and Information Science full-time faculty members. In addition, the paper presents a brief overview of a prototype system called CiteSearch, which analyzes combined data from multiple citation databases to produce citation-based quality evaluation measures.
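
    The merging step a system like CiteSearch requires can be sketched as follows; the record fields and the title-based matching rule are assumptions for illustration, not the published CiteSearch design.

```python
# Hedged sketch: union citing records from several databases,
# de-duplicating rough matches by normalized title. Field names and
# the matching rule are assumptions, not the published CiteSearch design.
import re

def norm_title(title: str) -> str:
    """Lowercase and collapse punctuation/whitespace for rough matching."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def merged_citation_count(records_by_source: dict) -> int:
    seen = set()
    for records in records_by_source.values():
        for rec in records:
            seen.add(norm_title(rec["title"]))
    return len(seen)

print(merged_citation_count({
    "Web of Science": [{"title": "A Study of X"}, {"title": "On Y"}],
    "Scopus":         [{"title": "A study of X."}],  # duplicate of the WoS hit
    "Google Scholar": [{"title": "Preprint on Z"}],
}))  # -> 3 unique citing documents
```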

    Webometric analysis of departments of librarianship and information science: a follow-up study

    This paper reports an analysis of the websites of UK departments of library and information science. Inlink counts of these websites revealed no statistically significant correlation with the quality of the research carried out by these departments, as quantified using departmental grades in the 2001 Research Assessment Exercise and citations in Google Scholar to publications submitted for that Exercise. Reasons for this lack of correlation include: difficulties in disambiguating departmental websites from larger institutional structures; the relatively small amount of research-related material in departmental websites; and limitations in the ways that current Web search engines process linkages to URLs. It is concluded that departmental-level webometric analyses do not at present provide an appropriate technique for evaluating academic research quality and, more generally, that standards are needed for the formatting of URLs if inlinks are to become firmly established as a tool for website analysis.
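
    The URL-formatting problem raised in the conclusion can be made concrete with a small sketch: inlinks to the same departmental site arrive under many surface forms, so any inlink count must first canonicalise URLs. The normalisation rules and the domain below are illustrative assumptions, not a proposed standard.

```python
# Illustrative URL canonicalisation before counting inlinks; the
# rules below are assumptions, not a proposed standard. Real paths
# are case-sensitive, so lowercasing is a deliberate simplification.
from urllib.parse import urlparse

def canonical(url: str) -> str:
    parts = urlparse(url.lower())
    host = parts.netloc.removeprefix("www.")
    path = parts.path
    if path.endswith("/index.html"):
        path = path[: -len("index.html")]
    return host + path.rstrip("/")

inlinks = [  # three surface forms of the same (hypothetical) dept site
    "http://www.example.ac.uk/lis/",
    "http://example.ac.uk/lis",
    "https://WWW.example.ac.uk/lis/index.html",
]
print({canonical(u) for u in inlinks})  # one canonical form, not three
```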

    Detecting h-index manipulation through self-citation analysis

    The h-index has received enormous attention as an indicator of the quality of researchers and organizations. We investigate, with the help of a simulation, to what degree authors can inflate their h-index through strategic self-citations. We extended Burrell’s publication model with a procedure for placing self-citations, following three different strategies: random self-citations, recent self-citations, and h-manipulating self-citations. The results show that authors can considerably inflate their h-index through self-citations. We propose the q-index as an indicator of how strategically an author has placed self-citations, which serves as a tool to detect possible manipulation of the h-index. The results also show that the best strategy for a high h-index is publishing papers that are highly cited by others; productivity also has a positive effect on the h-index.
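
    A toy version of the manipulation idea is sketched below: starting from externally earned citation counts, an author spends a budget of self-citations on the papers at the current h threshold, and the h-index inflates. A simple uniform random model stands in for Burrell’s full publication model, so the numbers are illustrative only.

```python
# Toy simulation: papers earn external citations from a simple random
# model (not Burrell's), then strategic self-citations target the
# papers sitting at the current h threshold.
import random

def h_index(citations):
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, 1) if c >= rank)

random.seed(1)
papers = [random.randint(0, 20) for _ in range(40)]  # external citations
print("h before:", h_index(papers))

for _ in range(15):  # 15 strategic ("h-manipulating") self-citations
    h = h_index(papers)
    # cite the paper closest to, but not above, the current threshold
    target = max((i for i, c in enumerate(papers) if c <= h),
                 key=lambda i: papers[i], default=None)
    if target is None:
        break
    papers[target] += 1

print("h after: ", h_index(papers))
```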

    Ranking of library and information science researchers: Comparison of data sources for correlating citation data, and expert judgments

    This paper studies the correlations between peer review and citation indicators when evaluating research quality in library and information science (LIS). Forty-two LIS experts provided judgments, on a 5-point scale, of the quality of research published by 101 scholars; the median rankings resulting from these judgments were then correlated with h-, g-, and H-index values computed using three different sources of citation data: Web of Science (WoS), Scopus, and Google Scholar (GS). The two variants of the basic h-index correlated more strongly with peer judgment than did the h-index itself; citation data from Scopus correlated more strongly with the expert judgments than did data from GS, which in turn correlated more strongly than data from WoS; correlations from a carefully cleaned version of the GS data differed little from those obtained using swiftly gathered GS data; the indices from the three citation databases produced broadly similar rankings of the LIS academics; GS disadvantaged researchers in bibliometrics compared with the other two citation databases, while WoS disadvantaged researchers in the more technical aspects of information retrieval; and experts from the UK and other European countries rated UK academics more highly than did experts from the USA.
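
    For reference, the sketch below computes Egghe’s g-index (the largest g such that the g most-cited papers have at least g² citations in total) alongside the basic h-index; the citation counts are invented, and the study’s third variant is not reproduced here.

```python
# Egghe's g-index (largest g such that the g most-cited papers have
# at least g*g citations in total) next to the basic h-index; the
# citation counts below are invented.
from itertools import accumulate

def h_index(citations):
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, 1) if c >= rank)

def g_index(citations):
    cites = sorted(citations, reverse=True)
    return sum(1 for g, total in enumerate(accumulate(cites), 1)
               if total >= g * g)

papers = [30, 12, 7, 5, 2, 1, 0]
print(h_index(papers), g_index(papers))  # g >= h always holds
```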

    Towards a procedure for survey item selection in MIS

    Scholars in management information systems (MIS) can choose to develop survey items themselves or to reuse survey items from existing studies. When survey items are reused without justification, the connection between theory and measurement is not made explicit. We present a procedure that facilitates both selection and justification: it covers the selection of survey items when one can choose from a pool of existing items, and the justification of the selected set. We draw upon bibliometric theory and develop a rating that combines the relevance of the paper that presents the reused survey items with the relevance of the survey item sources. We demonstrate the procedure by operationalizing sub-constructs of information technology capability, and we address potential concerns about the procedure. Our study provides an initial step towards a procedure that helps scholars select and justify their survey items while giving peers insight into the connection between theory and measurement.
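
    One plausible reading of the rating is sketched below as a weighted combination of the two relevance signals; the weighting scheme, field names, and sample items are assumptions for illustration, not the published procedure.

```python
# Hypothetical rating: a weighted combination of the two relevance
# signals. The weights, field names, and sample items are invented;
# the paper's actual combination rule may differ.
from dataclasses import dataclass

@dataclass
class SurveyItem:
    text: str
    paper_relevance: float   # relevance of the reusing paper, in [0, 1]
    source_relevance: float  # relevance of the original source, in [0, 1]

def rating(item: SurveyItem, w_paper: float = 0.5) -> float:
    """Convex combination; w_paper is an assumed tuning parameter."""
    return w_paper * item.paper_relevance + (1 - w_paper) * item.source_relevance

pool = [
    SurveyItem("Our IT staff can quickly deploy new systems.", 0.9, 0.6),
    SurveyItem("We rarely experience unplanned downtime.", 0.4, 0.8),
]
best = max(pool, key=rating)
print(best.text, round(rating(best), 2))  # picks the higher-rated item
```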

    The Open Research Web: A Preview of the Optimal and the Inevitable

    The multiple online research impact metrics we are developing will allow the rich new database, the Research Web, to be navigated, analyzed, mined, and evaluated in powerful new ways that were not conceivable in the paper era, nor even in the online era until the database and the tools became openly accessible for online use by all: by researchers, research institutions, research funders, teachers, students, and even the general public that funds the research and for whose benefit it is being conducted. Which research is being used most? By whom? Which research is growing most quickly? In what direction? Under whose influence? Which research shows immediate short-term usefulness, which shows delayed, longer-term usefulness, and which has sustained, long-lasting impact? Which research and researchers are the most authoritative? Whose research most uses this authoritative research, and whose research is the authoritative research using? Which are the best pointers (“hubs”) to the authoritative research? Is there any way to predict what research will have later citation impact (based on its earlier download impact), so that junior researchers can be given resources before their work has had a chance to make itself felt through citations? Can research trends and directions be predicted from the online database? Can text content be used to find and compare related research for influence, overlap, and direction? Can a layman, unfamiliar with the specialized content of a field, be guided to the most relevant and important work? These are just a sample of the new online-age questions that the Open Research Web will begin to answer.
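
    The “hubs” and “authorities” questions above echo Kleinberg’s HITS algorithm; the sketch below runs HITS over a toy citation graph (edges point from citing to cited paper) to surface the most authoritative paper and the best pointer to it. The graph is invented for illustration.

```python
# Kleinberg's HITS on a toy citation graph: edges point from citing
# to cited paper. Authorities are heavily cited papers; hubs are the
# best pointers to them. The graph is invented for illustration.
def hits(edges, n_iter=50):
    nodes = {n for edge in edges for n in edge}
    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(n_iter):
        auth = {n: sum(hub[u] for u, v in edges if v == n) for n in nodes}
        scale = sum(a * a for a in auth.values()) ** 0.5 or 1.0
        auth = {n: a / scale for n, a in auth.items()}
        hub = {n: sum(auth[v] for u, v in edges if u == n) for n in nodes}
        scale = sum(h * h for h in hub.values()) ** 0.5 or 1.0
        hub = {n: h / scale for n, h in hub.items()}
    return hub, auth

edges = [("review", "p1"), ("review", "p2"), ("p3", "p1"), ("p4", "p1")]
hub, auth = hits(edges)
print(max(auth, key=auth.get), max(hub, key=hub.get))  # -> p1 review
```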