66 research outputs found

    An index to quantify an individual's scientific research output that takes into account the effect of multiple coauthorship

    I propose the index ℏ ("hbar"), defined as the number of papers of an individual that have a citation count larger than or equal to the ℏ of all coauthors of each paper, as a useful index to characterize the scientific output of a researcher that takes into account the effect of multiple coauthorship. The bar is higher for ℏ. Comment: A few minor changes from v1. To be published in Scientometrics.
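
    Because the ℏ values of a group of coauthors are defined in terms of each other, they have to be computed self-consistently. The sketch below is one possible reading of that definition, not the author's code; it assumes papers are given as (citation_count, [author_ids]) pairs, uses each author's ordinary h-index as the starting guess, and iterates until the values stop changing.

```python
from collections import defaultdict

def h_index(citation_counts):
    """Ordinary h-index: largest h such that h papers have at least h citations."""
    ranked = sorted(citation_counts, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def hbar_indices(papers, max_rounds=100):
    """papers: list of (citations, [author_ids]). Returns {author_id: hbar}."""
    by_author = defaultdict(list)
    for cites, authors in papers:
        for author in authors:
            by_author[author].append((cites, authors))

    # Start from the ordinary h-index of every author.
    hbar = {a: h_index([c for c, _ in ps]) for a, ps in by_author.items()}

    for _ in range(max_rounds):
        # A paper counts for an author only if its citation count is at least
        # the current hbar of every coauthor on that paper.
        new = {a: sum(1 for c, authors in ps if all(c >= hbar[co] for co in authors))
               for a, ps in by_author.items()}
        if new == hbar:          # self-consistent: done
            break
        hbar = new
    return hbar

papers = [(30, ["A", "B"]), (12, ["A"]), (9, ["B", "C"]), (5, ["A", "C"])]
print(hbar_indices(papers))      # e.g. {'A': 3, 'B': 2, 'C': 2}
```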

    A review of the literature on citation impact indicators

    Citation impact indicators nowadays play an important role in research evaluation, and consequently these indicators have received a lot of attention in the bibliometric and scientometric literature. This paper provides an in-depth review of the literature on citation impact indicators. First, an overview is given of the literature on bibliographic databases that can be used to calculate citation impact indicators (Web of Science, Scopus, and Google Scholar). Next, selected topics in the literature on citation impact indicators are reviewed in detail. The first topic is the selection of publications and citations to be included in the calculation of citation impact indicators. The second topic is the normalization of citation impact indicators, in particular normalization for field differences. Counting methods for dealing with co-authored publications are the third topic, and citation impact indicators for journals are the last topic. The paper concludes by offering some recommendations for future research.
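
    To make two of the reviewed notions concrete, the sketch below works through a simple field-normalized citation score (a paper's citations divided by the average citations in its field) and a fractionally counted output (each co-authored paper contributing 1/number-of-authors). The field labels, citation counts, and record layout are illustrative toy data, not from the review.

```python
from collections import defaultdict

# Toy records: field label, citation count, and number of authors per paper.
papers = [
    {"field": "physics", "citations": 40, "n_authors": 4},
    {"field": "physics", "citations": 10, "n_authors": 2},
    {"field": "history", "citations": 3,  "n_authors": 1},
    {"field": "history", "citations": 1,  "n_authors": 2},
]

# Field normalization: divide each paper's citations by its field's mean,
# so a score of 1.0 means "cited as often as the field average".
totals = defaultdict(lambda: [0, 0])        # field -> [citation sum, paper count]
for p in papers:
    totals[p["field"]][0] += p["citations"]
    totals[p["field"]][1] += 1
field_mean = {field: s / n for field, (s, n) in totals.items()}

for p in papers:
    score = p["citations"] / field_mean[p["field"]]
    print(p["field"], p["citations"], round(score, 2))

# Fractional counting: each co-authored paper adds 1/n_authors to an author's output.
fractional_output = sum(1 / p["n_authors"] for p in papers)
print("fractional output:", round(fractional_output, 2))
```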

    The urgent need for modification of scientific ranking indexes to facilitate scientific progress and diminish academic bullying

    Academic bullying occurs when senior scientists direct abusive behavior such as verbal insults, public shaming, isolation, and threats toward vulnerable junior colleagues such as postdocs, graduate students and lab members. We believe that one root cause of bullying behavior is the pressure felt by scientists to compete for rankings designed to measure their scientific worth. These ratings, such as the h-index, have several unintended consequences, one of which we believe is academic bullying. Under pressure to achieve higher and higher rankings, in exchange for positive evaluations, grants and recognition, senior scientists exert undue pressure on their junior staff in the form of bullying. Lab members have little or no recourse due to the lack of fair institutional protocols for investigating bullying, dependence on grant or institutional funding, fear of losing time and empirical work by changing labs, and vulnerability to visa cancellation threats among international students. We call for institutions to reconsider their dependence on these over-simplified surrogates for real scientific progress and to provide fair and just protocols that will protect targets of academic bullying from emotional and financial distress.

    Deep context of citations using machine‑learning models in scholarly full‑text articles

    Information retrieval systems for scholarly literature rely heavily not only on text matching but also on semantic- and context-based features. Readers nowadays are deeply interested in how important an article is, what its purpose is, and how influential it is in follow-up research. Numerous techniques that tap the power of machine learning and artificial intelligence have been developed to enhance retrieval of the most influential scientific literature. In this paper, we compare and improve on four existing state-of-the-art techniques designed to identify influential citations. We consider 450 citations from the Association for Computational Linguistics corpus, classified by experts as either important or unimportant, and further extract 64 features based on the methodology of the four state-of-the-art techniques. We apply the Extra-Trees classifier to select the 29 best features and apply the Random Forest and Support Vector Machine classifiers to all selected techniques. Using the Random Forest classifier, our supervised model improves on the state-of-the-art method by 11.25%, with an 89% Precision-Recall area under the curve. Finally, we present our deep-learning model, a Long Short-Term Memory network, that uses all 64 features to distinguish important from unimportant citations with 92.57% accuracy.
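
    The following sketch, which is not the authors' code, mirrors the pipeline described above using scikit-learn: an Extra-Trees model ranks features, the 29 highest-ranked ones are kept, and Random Forest and SVM classifiers are evaluated on them. The 450×64 feature matrix and labels here are random placeholders standing in for the features extracted from the ACL corpus.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(450, 64))      # placeholder for 450 citations x 64 features
y = rng.integers(0, 2, size=450)    # placeholder labels: 1 = important, 0 = unimportant

# Rank features with an Extra-Trees model and keep the 29 highest-ranked ones.
selector = SelectFromModel(
    ExtraTreesClassifier(n_estimators=200, random_state=0),
    threshold=-np.inf, max_features=29,
)
X_sel = selector.fit_transform(X, y)

# Evaluate Random Forest and SVM classifiers on the selected features.
for name, clf in [
    ("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("svm (rbf)", SVC(kernel="rbf")),
]:
    scores = cross_val_score(clf, X_sel, y, cv=5)
    print(name, round(scores.mean(), 3))
```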

    Quantitative and qualitative analysis of editor behavior through potentially coercive citations

    How much does the h-index of an editor of a well-ranked journal improve due to citations that occur after his/her appointment? Scientific recognition within academia is nowadays widely measured by the number of citations or the h-index. Our dataset is based on a sample of four editors from a well-ranked journal (impact factor, IF, greater than 2). The target group consists of two editors who appear to benefit from their position through an increased citation count (and subsequently h-index) within the journal; the total number of citations for the target group is greater than 600. The control group is formed by another two editors from the same journal whose citation records remain neutral with respect to their positions; the total number of citations for the control group is more than 1200. The citation patterns were studied over the period 1975-2015. Coercive citation for a journal's benefit (an increase of its IF) has previously been reported; to the best of our knowledge, this is a pioneering work on coercive citations for personal (editors') benefit. Editorial teams should be aware of this type of potentially unethical behavior and act accordingly.
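
    The comparison at the heart of this kind of study can be illustrated with a small sketch (toy data, not the authors' dataset): an editor's h-index computed from all citations versus the h-index computed only from citations received before a hypothetical appointment year.

```python
def h_index(citation_counts):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citation_counts, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# Toy data: for each paper, the years in which it received a citation.
papers = {
    "paper1": [2001, 2005, 2011, 2012, 2013],
    "paper2": [2008, 2012],
    "paper3": [2013, 2014, 2015],
}
appointment_year = 2010   # hypothetical year the author became an editor

h_all = h_index([len(years) for years in papers.values()])
h_before = h_index([sum(1 for y in years if y < appointment_year)
                    for years in papers.values()])

print("h-index, all citations:", h_all)
print("h-index, pre-appointment citations only:", h_before)
```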

    Using publication metrics to highlight academic productivity and research impact

    This article provides a broad overview of widely available measures of academic productivity and impact based on publication data and highlights uses of these metrics for various purposes. Metrics based on publication data include the number of publications, the number of citations, the journal impact factor, and the h-index, as well as emerging document-level metrics. Publication metrics can be used for a variety of purposes, including tenure and promotion, grant applications and renewal reports, benchmarking, recruiting efforts, and administrative purposes such as departmental or university performance reports. The authors also highlight practical applications of measuring and reporting academic productivity and impact to emphasize and promote individual investigators, grant applications, or departmental output.
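
    As a concrete example of one of these measures, the sketch below applies the standard two-year journal impact factor formula with made-up numbers: citations received in a given year to items the journal published in the two preceding years, divided by the number of citable items published in those two years.

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year impact factor: citations in year Y to items from Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 480 citations in 2023 to items published in 2021-2022,
# which together comprised 200 citable items.
print(impact_factor(480, 200))   # 2.4
```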

    " Thou shalt not work alone ": Individual Productivity in Research and Co-authorship in economics

    This paper focuses on the properties of the matching process that leads to scientific collaboration. In a first step, it proposes a simple theoretical model to describe the intertemporal choice of researchers facing successive opportunities to co-author papers. In a second step, the paper empirically assesses the properties of the model. The main empirical result is that the number and the productivity of a researcher's co-authors reflect that researcher's own productivity. This result is consistent with the assumption that co-authorship is motivated by a willingness to increase both the quality and the quantity of research output. As researchers with many influential publications may create links with a large number of influential co-authors, co-authoring with highly productive academics appears to act as a signal of a researcher's quality.