27,188 research outputs found

    Mathematical properties of weighted impact factors based on measures of prestige of the citing journals

    The final publication is available at Springer via http://dx.doi.org/10.1007/s11192-015-1741-0. An abstract construction for general weighted impact factors is introduced. We show that the classical weighted impact factors are particular cases of our model, but it can also be used to define new impact measuring tools for other sources of information, such as repositories of datasets, providing the mathematical support for a new family of altmetrics. Our aim is to show the main mathematical properties of this class of impact measuring tools, which hold as consequences of their mathematical structure and do not depend on the definition of any given index now in use. To show the power of our approach in a well-known setting, we apply our construction to analyze the stability of the ordering induced in a list of journals by the 2-year impact factor (IF2). We study how this ordering changes when the criterion defining it is the numerical value of a new weighted impact factor in which IF2 is used to define the weights. We prove that, if the weight associated to a citing journal increases with its IF2, then the ordering given in the list by the new weighted impact factor coincides with the order defined by the IF2, and we give a quantitative bound for the errors committed. We also show two examples of weighted impact factors defined by weights associated to the prestige of the citing journal for the fields of MATHEMATICS and MEDICINE, GENERAL AND INTERNAL, checking whether they satisfy the increasing behavior mentioned above. Ferrer Sapena, A.; Sánchez Pérez, E. A.; González, L. M.; Peset Mancebo, M. F.; Aleixandre Benavent, R. (2015). Mathematical properties of weighted impact factors based on measures of prestige of the citing journals. Scientometrics, 105(3), 2089–2108. https://doi.org/10.1007/s11192-015-1741-0
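The construction described in this abstract can be made concrete with a small sketch. This is not the paper's exact construction: the weight function, journal names and citation counts below are all hypothetical, chosen only to show a weighted IF2 in which each citing journal's contribution is scaled by a weight that increases with that journal's own IF2.

```python
# Toy weighted 2-year impact factor: each citing journal contributes its
# citations scaled by an increasing function of its own IF2.
# All journal names and numbers are hypothetical.

def weighted_if2(citations_from, if2_of_citing, articles,
                 weight=lambda x: 1.0 + x):
    """citations_from: {citing journal: citations to the target journal}
    if2_of_citing: {citing journal: that journal's IF2}
    articles: citable items of the target journal in the 2-year window."""
    total = sum(weight(if2_of_citing[j]) * c
                for j, c in citations_from.items())
    return total / articles

if2_of_citing = {"J1": 2.0, "J2": 1.0, "J3": 0.5}

# Two target journals, A and B, receiving citations from J1..J3.
A = weighted_if2({"J1": 10, "J2": 5, "J3": 5}, if2_of_citing, articles=10)
B = weighted_if2({"J1": 2, "J2": 3, "J3": 5}, if2_of_citing, articles=10)

plain_A = (10 + 5 + 5) / 10   # weight ≡ 1 recovers the classical IF2
plain_B = (2 + 3 + 5) / 10

# In this toy case the weighted factor preserves the IF2 ordering,
# illustrating (not proving) the stability result stated above.
assert (A > B) == (plain_A > plain_B)
```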

    Vector-valued impact measures and generation of specific indexes for research assessment

    A mathematical structure for defining multi-valued bibliometric indices is provided with the aim of measuring the impact of general sources of information other than articles and journals, for example, repositories of datasets. The aim of the model is to use several scalar indices at the same time to give a measure of the impact of a given source of information; that is, we construct vector-valued indices. We use the properties of these vector-valued indices to give a global answer to the problem of finding the optimal scalar index for measuring a particular aspect of the impact of an information source, depending on the criterion we want to fix for the evaluation of this impact. The main restrictions of our model are that (1) it uses finite sets of scalar impact indices (altmetrics), and (2) these indices are assumed to be additive. The optimization procedure for finding the best tool for a fixed criterion is also presented. In particular, we show how to create an impact measure completely adapted to the policy of a specific research institution. Calabuig, J. M.; Ferrer Sapena, A.; Sánchez Pérez, E. A. (2016). Vector-valued impact measures and generation of specific indexes for research assessment. Scientometrics, 108(3), 1425–1443. doi:10.1007/s11192-016-2039-6
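As a rough illustration of the idea, not the paper's model: one can represent each source by a vector of additive scalar indices and then search for the scalar combination best matching a criterion an institution has fixed. The source names, index values, target scores and the brute-force grid search below are all assumptions for this sketch.

```python
# Each information source carries a vector of scalar indices (altmetrics).
# A simple grid search finds the convex combination of the indices whose
# induced scalar index best matches an institution's target scores.
import itertools

sources = {          # (downloads index, citations index, mentions index)
    "dataset_repo": (0.9, 0.2, 0.7),
    "journal_X": (0.3, 0.8, 0.4),
    "preprints": (0.5, 0.5, 0.9),
}
target = {"dataset_repo": 0.8, "journal_X": 0.5, "preprints": 0.6}

def scalarize(vec, w):
    """Project the vector-valued index to a scalar via weights w."""
    return sum(wi * vi for wi, vi in zip(w, vec))

best_w, best_err = None, float("inf")
steps = [i / 10 for i in range(11)]
for w1, w2 in itertools.product(steps, steps):
    if w1 + w2 > 1:
        continue
    w = (w1, w2, 1 - w1 - w2)
    err = sum((scalarize(v, w) - target[s]) ** 2
              for s, v in sources.items())
    if err < best_err:
        best_w, best_err = w, err

assert abs(sum(best_w) - 1) < 1e-9  # a convex combination of the indices
```

A real optimization over additive indices would of course use a proper solver rather than a grid, but the sketch shows the shape of the problem: fix a criterion, then pick the scalar projection of the vector-valued index that best serves it.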

    Factors predicting the scientific wealth of nations

    It has been repeatedly demonstrated that economic affluence is one of the main predictors of the scientific wealth of nations. Yet the link is not as straightforward as is often presented. First, only a limited set of relatively affluent countries is usually studied. Second, there are differences between equally rich countries in their scientific success. The main aim of the present study is to find out which factors can enhance or suppress the effect of the economic wealth of countries on their scientific success, as measured by the High Quality Science Index (HQSI). The HQSI is a composite indicator of scientific wealth, which in equal parts considers the mean citation rate per paper and the percentage of papers that have reached the top 1% of citations in the Essential Science Indicators (ESI; Clarivate Analytics) database during the 11-year period from 2008 to 2018. Our results show that a high position in the ranking of countries on the HQSI can be achieved not only by increasing the number of high-quality papers but also by reducing the number of papers that are able to pass ESI thresholds but are of lower quality. The HQSI was positively and significantly correlated with the countries’ economic indicators (gross national income and Research and Development expenditure as a percentage of GDP), but these correlations became insignificant when other societal factors were controlled for. Overall, our findings indicate that it is small and well-governed countries with a long-standing democratic past that seem to be more efficient in translating economic wealth into high-quality science.
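A minimal sketch of an equal-parts composite indicator in the spirit of the HQSI described above. The min-max normalization step and all country figures are assumptions made for illustration, not the study's actual procedure.

```python
# Composite indicator: equal parts (i) mean citation rate per paper and
# (ii) share of papers in the top 1% by citations, each normalized to
# [0, 1] across countries before averaging. Country data hypothetical.

countries = {  # (mean citations per paper, % of papers in the top 1%)
    "A": (12.0, 1.8),
    "B": (8.0, 1.2),
    "C": (5.0, 0.4),
}

def normalize(values):
    """Min-max scale a list of values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

names = list(countries)
mean_cites = normalize([countries[c][0] for c in names])
top_share = normalize([countries[c][1] for c in names])

# Equal weights on the two components, as in the HQSI's description.
hqsi = {c: 0.5 * m + 0.5 * t for c, m, t in zip(names, mean_cites, top_share)}

ranking = sorted(names, key=hqsi.get, reverse=True)
```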

    A Rejoinder on Energy versus Impact Indicators

    Citation distributions are so skewed that using the mean or any other central-tendency measure is ill-advised. Unlike G. Prathap’s scalar measures (Energy, Exergy, and Entropy, or EEE), the Integrated Impact Indicator (I3) is based on non-parametric statistics using the (100) percentiles of the distribution. Observed values can be tested against expected ones; impact can be qualified at the article level and then aggregated. Comment: Scientometrics, in press.
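The percentile idea can be sketched as follows. This is a simplified stand-in for I3, which as published aggregates over percentile classes with weights; here each paper is simply scored by its percentile rank and the scores are summed. The citation counts are hypothetical.

```python
# Percentile-based impact scoring: qualify each paper by its percentile
# rank within the citation distribution, then aggregate by summation,
# avoiding any mean over a heavily skewed distribution.

def percentile_rank(c, all_cites):
    """Percentage of papers in the set cited strictly less than c."""
    below = sum(1 for x in all_cites if x < c)
    return 100.0 * below / len(all_cites)

papers = [0, 1, 1, 2, 3, 5, 8, 40, 120]  # skewed, as citation counts are

# The mean is dominated by the single highly cited paper ...
mean = sum(papers) / len(papers)   # 20.0, larger than 8 of the 9 counts

# ... while percentile scoring qualifies each paper before aggregating.
i3 = sum(percentile_rank(c, papers) for c in papers)
```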

    Culture in international business research: a bibliometric study in four top IB journals

    Purpose – The purpose of this paper is to conduct a study of the articles published in the four top international business (IB) journals to examine how four cultural models and concepts – Hofstede’s (1980), Hall’s (1976), Trompenaars’s (1993) and Project GLOBE’s (House et al., 2004) – have been used in the extant published IB research. National cultures and cultural differences provide a crucial component of the context of IB research. Design/methodology – This is a bibliometric study of the articles published in four IB journals over the period from 1976 to 2010, examining a sample of 517 articles using citation and co-citation matrices. Findings – Examining this sample revealed interesting patterns in the connections across the studies. Hofstede’s (1980) and House et al.’s (2004) research on the cultural dimensions is the most cited and holds ties to a large variety of IB research. These findings point to a number of research avenues to deepen the understanding of how firms may handle different national cultures in the geographies in which they operate. Research limitations – Two main limitations are faced, one associated with the bibliometric method of citation and co-citation analyses and the other with the delimitation of our sample to only four IB journals, albeit top-ranked ones. Originality/value – The paper focuses on the main cultural models used in IB research, permitting a better understanding of how culture has been used in IB research over an extended period.
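The co-citation counting behind a study like this can be sketched in a few lines: from each article's reference list, count how often two cited works appear together. The article identifiers and reference lists below are hypothetical, not the study's sample.

```python
# Build a co-citation count from per-article reference sets: two works
# are co-cited once for every article that cites them both.
from itertools import combinations
from collections import Counter

articles = {  # hypothetical articles and the cultural works they cite
    "art1": {"Hofstede1980", "Hall1976", "GLOBE2004"},
    "art2": {"Hofstede1980", "GLOBE2004"},
    "art3": {"Hofstede1980", "Trompenaars1993"},
}

cocitations = Counter()
for refs in articles.values():
    # sorted() gives each unordered pair one canonical key
    for pair in combinations(sorted(refs), 2):
        cocitations[pair] += 1

# Hofstede (1980) and Project GLOBE are co-cited in art1 and art2.
assert cocitations[("GLOBE2004", "Hofstede1980")] == 2
```

The resulting counts are exactly the entries of a co-citation matrix; a full study would then cluster or map them.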

    A Comparison between Two Main Academic Literature Collections: Web of Science and Scopus Databases

    Nowadays, the world’s scientific community publishes an enormous number of papers in different scientific fields. In such an environment, it is essential to know which databases are equally efficient and objective for literature searches. The two most extensive databases appear to be Web of Science and Scopus. Besides literature searching, these two databases are used to rank journals in terms of their productivity and the total citations received, as indications of the journals’ impact, prestige or influence. This article attempts to provide a comprehensive comparison of these databases to answer frequent questions that researchers ask, such as: How are Web of Science and Scopus different? In which aspects are these two databases similar? And if researchers are forced to choose one of them, which should they prefer? To answer these questions, the two databases are compared on their qualitative and quantitative characteristics. Cite as: Aghaei Chadegani, A., Salehi, H., Yunus, M. M., Farhadi, H., Fooladi, M., Farhadi, M., & Ale Ebrahim, N. (2013). A Comparison between Two Main Academic Literature Collections: Web of Science and Scopus Databases. Asian Social Science, 9(5), 18–26. doi: 10.5539/ass.v9n5p1