7 research outputs found

    Relationship among research collaboration, number of documents and number of citations. A case study in Spanish computer science production in 2000-2009.

    This paper analyzes the relationship among research collaboration, number of documents, and number of citations in computer science research activity. It examines how the number of documents and citations varies with the number of authors, and also analyzes them (according to author set cardinality) under different circumstances: when documents are written under different types of collaboration, published as different document types, published in different computer science subdisciplines, and, finally, published in journals from different impact factor quartiles. To investigate these relationships, the paper analyzes the publications listed in the Web of Science and produced between 2000 and 2009 by active Spanish university professors working in the computer science field. Analyzing all documents, we show that the highest percentage of documents is published by three authors, whereas single-authored documents account for the lowest percentage. In terms of citations, there is no positive association between author cardinality and citation impact. Statistical tests show that documents written by two authors receive more citations per document and year than documents published by more authors. In contrast, the results do not show statistically significant differences between documents published by two authors and by one author. The findings suggest that international collaboration results, on average, in publications with higher citation rates than national and institutional collaborations. We also find the expected differences in citation rates between journals and conferences, across computer science subdisciplines, and across journal quartiles. Finally, our impression is that the collaborative level (number of authors per document) will increase in the coming years, and documents published by three or four authors will be the trend in the computer science literature.
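
    As a rough illustration of the kind of comparison this abstract describes, the sketch below normalizes citation counts by document age and contrasts two author-count groups. The column names and the choice of a Mann-Whitney U test are assumptions for demonstration, not the paper's actual method.

    # Illustrative sketch: compares citations per document per year across
    # author-count groups. Column names and the Mann-Whitney U test are assumed.
    import pandas as pd
    from scipy.stats import mannwhitneyu

    COUNT_YEAR = 2010  # citations counted at the end of the 2000-2009 window

    def citations_per_year(df: pd.DataFrame) -> pd.Series:
        """Normalize raw citation counts by the number of years since publication."""
        age = (COUNT_YEAR - df["pub_year"]).clip(lower=1)
        return df["citations"] / age

    def compare_author_groups(df: pd.DataFrame, a: int, b: int) -> float:
        """p-value for citations/year of documents with a authors vs. b authors."""
        rate = citations_per_year(df)
        x = rate[df["n_authors"] == a]
        y = rate[df["n_authors"] == b]
        return mannwhitneyu(x, y, alternative="two-sided").pvalue

    # Usage with a hypothetical frame of one row per document:
    # docs = pd.DataFrame({"n_authors": [...], "citations": [...], "pub_year": [...]})
    # print(compare_author_groups(docs, 2, 3))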

    Peer-selected "best papers" - are they really that "good"?

    Background: Peer evaluation is the cornerstone of science evaluation. In this paper, we analyze whether or not a form of peer evaluation, the pre-publication selection of the best papers in Computer Science (CS) conferences, is better than random when considering the future citations received by the papers. Methods: Considering 12 conferences (over several years), we collected the citation counts from Scopus for both the best papers and the non-best papers. For a different set of 17 conferences, we collected the data from Google Scholar. For each data set, we computed the proportion of cases in which the best paper has more citations. We also compare this proportion for years before and after 2010 to evaluate whether there is a propaganda effect. Finally, we count the proportion of best papers that are among the top 10% and 20% most cited papers for each conference instance. Results: The probability that a best paper will receive more citations than a non-best paper is 0.72 (95% CI = 0.66, 0.77) for the Scopus data and 0.78 (95% CI = 0.74, 0.81) for the Scholar data. There are no significant changes in the probabilities across years. Also, 51% of the best papers are among the top 10% most cited papers in each conference/year, and 64% of them are among the top 20% most cited. Discussion: There is strong evidence that the selection of best papers in Computer Science conferences is better than a random selection, and that a significant number of the best papers are among the top cited papers in the conference.
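
    To make the "better than random" comparison concrete, here is a minimal sketch of one plausible way to compute the win proportion and its 95% Wilson interval. The abstract does not spell out the exact pairing scheme, so comparing each best paper against every non-best paper from the same conference instance is an assumption.

    # A minimal sketch, assuming each best paper is compared against every
    # non-best paper from the same conference instance; the papers' exact
    # pairing scheme is not given in the abstract.
    from math import sqrt
    from typing import Iterable, Tuple

    def win_proportion(best: Iterable[int], others: Iterable[int]) -> Tuple[int, int]:
        """Count (wins, comparisons) where a best paper has strictly more citations."""
        best, others = list(best), list(others)
        wins = sum(1 for b in best for o in others if b > o)
        return wins, len(best) * len(others)

    def wilson_ci(wins: int, n: int, z: float = 1.96) -> Tuple[float, float]:
        """95% Wilson score interval for the win proportion."""
        p = wins / n
        denom = 1 + z ** 2 / n
        centre = (p + z ** 2 / (2 * n)) / denom
        half = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
        return centre - half, centre + half

    # Example with hypothetical citation counts from one conference instance:
    wins, n = win_proportion(best=[40, 25], others=[30, 10, 5, 22])
    print(wins / n, wilson_ci(wins, n))  # 0.875 and its Wilson interval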

    The Rating Dilemma of Academic Management Journals: Attuning the Perceptions of Peer Rating

    The adoption of journal lists as proxies for scholarship quality has sparked an ongoing debate among academics over what is meant by quality, how it is perceived by reviewers, and the thresholds for rating, including, or excluding journals from these lists. Given the insufficient transparency of the journal quality evaluation processes behind such lists, this research explores the use of the revealed preference approach to attune the ratings in both the Australian Business Deans Council Journal Quality List and the Academic Journal Guide, and to approximate the rating of management journals if they were to be considered for inclusion in either of the two lists.

    White Paper: Measuring Research Outputs Through Bibliometrics

    The suggested citation for this white paper is: University of Waterloo Working Group on Bibliometrics, Winter 2016. White Paper: Measuring Research Outputs through Bibliometrics, Waterloo, Ontario: University of Waterloo. This White Paper provides a high-level review of issues relevant to understanding bibliometrics, together with practical recommendations for how to use these measures appropriately. It is not a policy paper; instead, it defines and summarizes evidence that addresses the appropriate use of bibliometric analysis at the University of Waterloo. The issues identified and the recommendations offered will generally apply to other academic institutions. Understanding the types of bibliometric measures and their limitations makes it possible to identify both appropriate uses and crucial limitations of bibliometric analysis. The recommendations at the end of this paper provide a range of opportunities for how researchers and administrators at Waterloo and beyond can integrate bibliometric analysis into their practice.

    Metrics for openness

    The characterization of scholarly communication is dominated by citation-based measures. In this paper we propose several metrics to describe different facets of open access and open research. We discuss measures to represent the public availability of articles along with their archival location, licenses, access costs, and supporting information. Calculations illustrating these new metrics are presented using the authors’ publications. We argue that explicit measurement of openness is necessary for a holistic description of research outputs.
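
    As a hedged illustration of what such openness metrics could look like in practice, the sketch below scores a publication record on five binary facets drawn from the abstract (availability, archival location, licence, access cost, supporting information) and averages them. The facet definitions and the unweighted mean are assumptions for demonstration, not the authors' actual formulas.

    # Illustrative sketch only: the facets and the unweighted mean below are
    # assumptions for demonstration, not the paper's actual metric definitions.
    from dataclasses import dataclass

    @dataclass
    class PublicationRecord:
        publicly_available: bool    # can the article be read without payment?
        in_open_archive: bool       # deposited in an open repository?
        open_license: bool          # carries a reuse licence such as CC BY?
        no_reader_cost: bool        # free of access charges to the reader?
        open_supporting_info: bool  # data/code/supplements openly available?

    def openness_score(rec: PublicationRecord) -> float:
        """Unweighted mean of the five binary facets, in [0, 1]."""
        facets = [
            rec.publicly_available,
            rec.in_open_archive,
            rec.open_license,
            rec.no_reader_cost,
            rec.open_supporting_info,
        ]
        return sum(facets) / len(facets)

    # Example: readable and archived, but without an open licence or open data.
    print(openness_score(PublicationRecord(True, True, False, True, False)))  # 0.6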

    Invisible work in standard bibliometric evaluation of computer science

    No full text
    Funding: Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq); Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP).