    Credit allocation based on journal impact factor and coauthorship contribution

    Some research institutions require researchers to distribute the income they earn from publishing papers among their researchers and/or co-authors. In this study, we use the Impact Factor-based journal ranking as a criterion for the correct distribution of this income. We also include an Authorship Credit factor for distributing the income among authors, using the geometric progression of Cantor's theory and the Harmonic Credit Index. Depending on the ranking of the journal, the proposed model produces a proper publication credit allocation among all authors. Moreover, our tool can be deployed in the evaluation of an institution for a funding program, as well as in calculating the amounts needed to incentivize research among personnel.
    Comment: 9 pages; 3 figures; 2 tables

    Credit Allocation Based on Journal Impact Factor and Co-authorship Contribution

    Abstract. Some research institutions require researchers to distribute the income they earn from publishing papers among their researchers and/or co-authors. In this study, we use the Impact Factor-based journal ranking as a criterion for the correct distribution of this income. We also include an Authorship Credit factor for distributing the income among authors, using the geometric progression of Cantor’s theory and the Harmonic Credit Index. Depending on the ranking of the journal, the proposed model produces a proper publication credit allocation among all authors. Moreover, our tool can be deployed in the evaluation of an institution for a funding program, as well as in calculating the amounts needed to incentivize research among personnel.
    Keywords. Co-author credit; Impact factor; Ranking; Cantor’s succession; Harmonic credit.
    JEL. A12, C02, C10
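
    As a rough illustration of the kind of allocation this paper describes, the sketch below splits a journal-rank-scaled bonus pool among co-authors by harmonic credit (the i-th of n authors receives (1/i)/H_n of the pool), with a geometric, Cantor-style split shown for comparison. The quartile scaling factors and the base amount are assumptions made for illustration, not values from the paper.

    # Hedged sketch: publication-income allocation by journal rank and author position.
    # The quartile scaling factors and the base amount are illustrative assumptions.

    def harmonic_credit(n_authors):
        """Share for the author in position i: (1/i) / H_n, with H_n the n-th harmonic number."""
        h_n = sum(1.0 / k for k in range(1, n_authors + 1))
        return [(1.0 / i) / h_n for i in range(1, n_authors + 1)]

    def geometric_credit(n_authors):
        """Geometric (Cantor-style) shares 1/2, 1/4, ..., renormalized to sum to 1."""
        raw = [0.5 ** i for i in range(1, n_authors + 1)]
        total = sum(raw)
        return [w / total for w in raw]

    def allocate(base_amount, journal_quartile, n_authors, scheme=harmonic_credit):
        """Scale the bonus pool by journal quartile, then split it among co-authors."""
        quartile_factor = {1: 1.0, 2: 0.75, 3: 0.5, 4: 0.25}  # assumed scaling
        pool = base_amount * quartile_factor[journal_quartile]
        return [round(pool * share, 2) for share in scheme(n_authors)]

    # Example: a Q1 paper with three authors and a 1000-unit base incentive.
    print(allocate(1000, 1, 3))                    # harmonic:  [545.45, 272.73, 181.82]
    print(allocate(1000, 1, 3, geometric_credit))  # geometric: [571.43, 285.71, 142.86]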

    Quantifying Success in Science: An Overview

    Quantifying success in science plays a key role in guiding funding allocations, recruitment decisions, and rewards. Recently, significant progress has been made towards quantifying success in science, yet the lack of a detailed analysis and summary of this work remains a practical issue. The literature reports the factors influencing scholarly impact, as well as evaluation methods and indices aimed at overcoming this weakness. We focus on categorizing and reviewing current developments in evaluation indices of scholarly impact, including paper impact, scholar impact, and journal impact. In addition, we summarize the issues with existing evaluation methods and indices, investigate open issues and challenges, and outline possible solutions, including patterns of collaboration impact, unified evaluation standards, implicit success factor mining, dynamic academic network embedding, and scholarly impact inflation. This paper should help researchers obtain a broader understanding of quantifying success in science and identify potential research directions.
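
    As a concrete instance of the scholar-impact indices such surveys categorize, the snippet below computes the h-index from a list of citation counts; this is a generic illustration of one common index, not code or a definition taken from this paper.

    # Hedged illustration: the h-index, a widely used scholar-impact index.

    def h_index(citations):
        """Largest h such that at least h papers have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each
    print(h_index([25, 8, 5, 3, 3]))  # 3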

    Technological cycles, Meta-Ranking and Open Access Performance

    This thesis consists of three essays, linked by innovation, classification, and change. In the first paper, I analyze a theoretical problem regarding the re-emergence and affirmation of a technological paradigm over others; in the second article, I propose a framework to aggregate journal rankings and classify academic journals; in the third essay, I analyze the performance of Open Access journals, considered an innovative form of publishing, with the aim of identifying the main features of top-rated ones.
    More specifically, the first essay deals with the technological life cycle, which explains how battles between competing technologies sooner or later end with the dominance of one over the others or, under certain conditions, with their coexistence. In practice, however, beaten technologies can sometimes re-emerge in the market. Firms facing technology investment decisions need to fully understand the dynamics of competing technologies, because the emergence of an alternative and potentially superior technology does not necessarily mean the failure of the incumbent, and different scenarios may unfold. Starting from an analysis of the microprocessor market and considering the relationships with complementary companies, I show how the battle for dominance between two rival technologies can be reopened with a new era of ferment. While the factors of dominance have been explored by a large body of literature, little has been said on this question. In particular, I find a non-conventional S-curve trend and seek to explicate its managerial implications.
    The second chapter deals with ranking academic journals, an issue that over the years has received several contributions from the Business and Management literature [DuBois and Reeb, 2000, Franke et al., 1990, Serenko and Bontis, 2004, Tüselmann et al., 2015, Werner, 2002]. Ranking journals is a longstanding problem and can be addressed quantitatively, qualitatively, or with a combination of both approaches. In recent decades, the Impact Factor (the best-known quantitative approach) has been widely questioned, and other indices have thus been developed and become popular. Previous studies have reported the strengths and weaknesses of each index and devised meta-indices to rank journals in a given field of study. However, the proposed meta-indices exhibit some intrinsic limitations: (i) the indices to be combined are not always chosen according to well-grounded principles; (ii) combination methods are usually unweighted; and (iii) some of the proposed meta-indices are parametric, which requires assuming a specific underlying data distribution. I propose a data-driven methodology that linearly combines an arbitrary number of indices to produce an aggregated ranking, using different learning techniques to estimate the combining weights. I also measure correlations and distances between indices and meta-indices in a vector space, to quantitatively evaluate their differences.
    The goal of the third essay is to identify the features of top-rated gold open access (OA) journals by testing seven main variables: language, country, years of activity and years in the DOAJ repository, publication fee, field of study, whether the journal was launched as OA or converted, and type of publisher. A sample of 1,910 gold OA journals was obtained by combining the SCImago Journal & Country Rank (SJR) 2012, the DOAJ, and data provided by previous studies [Solomon, 2013]. I divided the SJR index into quartiles within each journal subject area. First, I present descriptive statistics by combining the quartiles with the journals' features. Then, after converting the quartiles into a dummy variable, I use it as the dependent variable in a binary logistic regression. This work contributes empirically to a better understanding of gold OA efficacy, which may help improve journals' rankings in the areas where this is still a struggle. Significant results were found for all variables except the type of publisher and whether journals were born OA or converted.
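
    A minimal sketch of the kind of weighted linear combination the second essay describes: index scores are standardized and combined with weights fitted by least squares against a reference score. The toy data, the reference scores, and the choice of least squares as the learning technique are assumptions for illustration, not the thesis's actual indices or estimation method.

    # Hedged sketch: aggregate several journal indices into one meta-ranking by
    # learning linear combining weights. All numbers below are toy values.
    import numpy as np

    # Rows = journals, columns = indices (e.g. Impact Factor, SJR, h5-index),
    # z-scored so that no single index dominates by scale.
    raw = np.array([
        [4.2, 1.9, 35.0],
        [2.1, 0.8, 20.0],
        [6.5, 3.2, 50.0],
        [1.2, 0.4, 12.0],
    ])
    X = (raw - raw.mean(axis=0)) / raw.std(axis=0)

    # Reference scores the weights are fitted against (assumed, e.g. expert ratings).
    y = np.array([3.0, 1.5, 4.0, 1.0])
    y = y - y.mean()

    # Least-squares estimate of the combining weights.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)

    meta_score = X @ w                 # aggregated meta-index per journal
    ranking = np.argsort(-meta_score)  # journal row indices, best first
    print("learned weights:", np.round(w, 3))
    print("meta-ranking (row indices, best first):", ranking)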
