620 research outputs found

    Vector valued information measures and integration with respect to fuzzy vector capacities

    Integration with respect to vector-valued fuzzy measures is used to define and study information measuring tools. Motivated by some current developments in Information Science, we apply the integration of scalar functions with respect to vector-valued fuzzy measures, also called vector capacities. Bartle-Dunford-Schwartz integration (for the additive case) and Choquet-type integration (for the non-additive case) are considered, showing that these formalisms can be used to define and develop vector-valued impact measures. Examples related to existing bibliometric tools as well as to new measuring indices are given.

    The authors would like to thank both Prof. Dr. Olvido Delgado and the referee for their valuable comments and suggestions, which helped to prepare the manuscript. The first author gratefully acknowledges the support of the Ministerio de Economía, Industria y Competitividad (Spain) under project MTM2016-77054-C2-1-P.

    Sánchez Pérez, E. A., & Szwedek, R. (2019). Vector valued information measures and integration with respect to fuzzy vector capacities. Fuzzy Sets and Systems, 355, 1-25. https://doi.org/10.1016/j.fss.2018.05.004
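
    As a rough illustration of the non-additive formalism mentioned in this abstract, the Python sketch below computes a discrete Choquet integral of per-item scores with respect to a capacity, applied coordinatewise when the capacity is vector valued. It is a minimal toy, not the paper's construction: the item scores, the two-component capacity, and all names are assumptions made for the example.

        # Minimal sketch: discrete Choquet integral of non-negative scores with
        # respect to a (possibly vector-valued) capacity. All data are illustrative.
        import numpy as np

        def choquet(scores, capacity):
            """scores: dict item -> non-negative float (e.g. citations per paper).
            capacity: function on frozensets of items, monotone and 0 on the empty
            set; may return a float or a numpy vector (vector-valued capacity)."""
            items = sorted(scores, key=scores.get)      # order items by increasing score
            total, prev = 0.0, 0.0
            for i, item in enumerate(items):
                upper = frozenset(items[i:])            # items scoring at least as much
                total = total + (scores[item] - prev) * capacity(upper)
                prev = scores[item]
            return total

        PAPERS = {"p1": 2.0, "p2": 5.0, "p3": 1.0}      # hypothetical per-paper scores
        OPEN_ACCESS = {"p1", "p2"}                      # hypothetical attribute

        def toy_capacity(subset):
            # Hypothetical two-component capacity: normalised subset size and
            # normalised number of open-access papers in the subset.
            return np.array([len(subset) / 3.0, len(subset & OPEN_ACCESS) / 2.0])

        print(choquet(PAPERS, toy_capacity))            # one value per capacity component

    For an additive capacity the same loop collapses to an ordinary weighted sum, which is the situation handled by the Bartle-Dunford-Schwartz integral in the additive case.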

    Vector-valued impact measures and generation of specific indexes for research assessment

    A mathematical structure for defining multi-valued bibliometric indices is provided, with the aim of measuring the impact of general sources of information other than articles and journals, for example repositories of datasets. The aim of the model is to use several scalar indices simultaneously to measure the impact of a given source of information; that is, we construct vector-valued indices. We use the properties of these vector-valued indices to give a global answer to the problem of finding the optimal scalar index for measuring a particular aspect of the impact of an information source, depending on the criterion fixed for the evaluation of that impact. The main restrictions of the model are that (1) it uses finite sets of scalar impact indices (altmetrics), and (2) these indices are assumed to be additive. The optimization procedure for finding the best tool for a fixed criterion is also presented. In particular, we show how to create an impact measure completely adapted to the policy of a specific research institution.

    Calabuig, J. M., Ferrer Sapena, A., & Sánchez Pérez, E. A. (2016). Vector-valued impact measures and generation of specific indexes for research assessment. Scientometrics, 108(3), 1425-1443. https://doi.org/10.1007/s11192-016-2039-6
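
    To make the construction described above concrete, the following sketch stacks a few additive scalar indices into a vector-valued index and then scans convex weightings to find the scalar combination that best matches a fixed evaluation criterion (here, a target ranking chosen by a hypothetical institution). The indices, numbers, and the agreement measure are assumptions made for illustration; the paper's optimization procedure is more general.

        # Minimal sketch (not the paper's construction): a vector-valued impact index
        # built from finitely many additive scalar indices, plus a brute-force search
        # for the convex combination best matching a fixed evaluation criterion.
        import numpy as np

        # Rows = information sources, columns = scalar indices (e.g. citations,
        # downloads, mentions); the numbers are purely illustrative.
        SOURCES = ["repo_A", "repo_B", "journal_C"]
        INDEX_MATRIX = np.array([
            [120.0, 3500.0, 40.0],
            [ 80.0, 9000.0, 10.0],
            [200.0, 1500.0, 75.0],
        ])

        def vector_index(matrix):
            """The vector-valued index: each source is mapped to its row of scalar values."""
            return {s: matrix[i] for i, s in enumerate(SOURCES)}

        def best_weighting(matrix, target_ranking, grid=21):
            """Scan convex weightings w >= 0, sum(w) = 1 and return the one whose
            induced scalar index best reproduces the target ranking (the criterion)."""
            best, best_score = None, -1.0
            steps = np.linspace(0.0, 1.0, grid)
            for w0 in steps:
                for w1 in steps:
                    if w0 + w1 > 1.0:
                        continue
                    w = np.array([w0, w1, 1.0 - w0 - w1])
                    ranking = list(np.argsort(-(matrix @ w)))
                    agreement = sum(a == b for a, b in zip(ranking, target_ranking)) / len(SOURCES)
                    if agreement > best_score:
                        best, best_score = w, agreement
            return best, best_score

        # Hypothetical institutional criterion: the desired ordering of the sources.
        target = [2, 0, 1]                 # journal_C first, then repo_A, then repo_B
        w, agreement = best_weighting(INDEX_MATRIX, target)
        print("vector-valued index:", vector_index(INDEX_MATRIX))
        print("best weighting:", w, "agreement:", agreement)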

    The use of fuzzy relations in the assessment of information resources producers' performance

    The producers assessment problem has many important practical instances: it is an abstract model for intelligent systems evaluating, for example, the quality of computer software repositories, web resources, social networking services, and digital libraries. Each producer's performance is determined not only by the overall quality of the items he or she has produced, but also by the number of such items (which may differ between agents). Recent theoretical results indicate that the use of aggregation operators in the process of ranking and evaluating producers may not necessarily lead to fair and plausible outcomes. Therefore, to overcome some weaknesses of the most commonly applied approach, in this preliminary study we encourage the use of a fuzzy preference relation-based setting and indicate why it may provide better control over the assessment process.
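
    One illustrative way to set up such a fuzzy preference relation (not necessarily the authors' construction) is sketched below: the degree R(a, b) to which producer a is preferred to producer b is taken as the fraction of pairwise item comparisons that a wins, with ties counted as one half. The producers and quality scores are invented for the example.

        # Illustrative sketch: a fuzzy preference degree R(a, b) in [0, 1] between two
        # producers, taken as the probability that a randomly chosen item of producer a
        # has higher quality than a randomly chosen item of producer b (ties count 0.5).
        from itertools import product

        def preference_degree(items_a, items_b):
            """Fraction of pairwise comparisons won by producer a (ties count 0.5)."""
            wins = sum(1.0 if x > y else 0.5 if x == y else 0.0
                       for x, y in product(items_a, items_b))
            return wins / (len(items_a) * len(items_b))

        # Toy producers with item quality scores; note the different output counts.
        producers = {
            "alice": [9, 7, 8],
            "bob":   [6, 6, 7, 9, 5],
            "carol": [10, 4],
        }

        # Fuzzy preference relation over all ordered pairs; R(a, b) + R(b, a) = 1 here.
        relation = {(a, b): preference_degree(items_a, items_b)
                    for a, items_a in producers.items()
                    for b, items_b in producers.items() if a != b}

        for (a, b), degree in sorted(relation.items()):
            print(f"R({a}, {b}) = {degree:.2f}")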

    Universal Prediction

    In this thesis I investigate the theoretical possibility of a universal method of prediction. A prediction method is universal if it is always able to learn from data: if it is always able to extrapolate given data about past observations to maximally successful predictions about future observations. The context of this investigation is the broader philosophical question of the possibility of a formal specification of inductive or scientific reasoning, a question that also relates to modern-day speculation about a fully automatized data-driven science. I investigate, in particular, a proposed definition of a universal prediction method that goes back to Solomonoff (1964) and Levin (1970). This definition marks the birth of the theory of Kolmogorov complexity, and has a direct line to the information-theoretic approach in modern machine learning. Solomonoff's work was inspired by Carnap's program of inductive logic, and the more precise definition due to Levin can be seen as an explicit attempt to escape the diagonal argument that Putnam (1963) famously launched against the feasibility of Carnap's program. The Solomonoff-Levin definition essentially aims at a mixture of all possible prediction algorithms. An alternative interpretation is that the definition formalizes the idea that learning from data is equivalent to compressing data. In this guise, the definition is often presented as an implementation and even as a justification of Occam's razor, the principle that we should look for simple explanations. The conclusions of my investigation are negative. I show that the Solomonoff-Levin definition fails to unite two necessary conditions to count as a universal prediction method, as turns out to be entailed by Putnam's original argument after all; and I argue that this indeed shows that no definition can. Moreover, I show that the suggested justification of Occam's razor does not work, and I argue that the relevant notion of simplicity as compressibility is already problematic itself.
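
    The mixture idea mentioned in this abstract can be made concrete on a toy scale. The Solomonoff-Levin predictor mixes over all (semicomputable) prediction methods and is therefore incomputable; the sketch below is only a finite analogue, mixing three hand-picked binary predictors with prior weights that are updated by how much probability each assigned to the observed bit. The predictors and data are assumptions made for the illustration.

        # Toy finite analogue of the Solomonoff-Levin mixture predictor: a Bayesian
        # mixture over three simple binary predictors, reweighted by likelihood.

        def always_one(history):
            """Predicts P(next bit = 1) = 0.9 regardless of the history."""
            return 0.9

        def frequency(history):
            """Laplace rule of succession: (ones + 1) / (length + 2)."""
            return (sum(history) + 1) / (len(history) + 2)

        def alternator(history):
            """Predicts the opposite of the last bit (0.5 on an empty history)."""
            return 0.5 if not history else 1.0 - history[-1]

        PREDICTORS = [always_one, frequency, alternator]
        weights = [1 / 3, 1 / 3, 1 / 3]           # prior over the finite class

        def mixture_predict(history):
            """Mixture probability that the next bit is 1."""
            return sum(w * p(history) for w, p in zip(weights, PREDICTORS))

        def mixture_update(history, bit):
            """Bayesian update: reweight each predictor by how well it predicted `bit`."""
            global weights
            likelihoods = [p(history) if bit == 1 else 1.0 - p(history) for p in PREDICTORS]
            weights = [w * l for w, l in zip(weights, likelihoods)]
            total = sum(weights)
            weights = [w / total for w in weights]

        data = [0, 1, 0, 1, 0, 1, 0, 1]           # an alternating sequence
        history = []
        for bit in data:
            print(f"P(1 | {history}) = {mixture_predict(history):.3f}")
            mixture_update(history, bit)
            history.append(bit)
        print("posterior weights:", [round(w, 3) for w in weights])

    On this data the posterior weight concentrates on the predictor that tracks the alternating pattern, which is the behaviour the mixture construction is meant to guarantee in general.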

    Universal Prediction

    In this dissertation I investigate the theoretical possibility of a universal method of prediction. A prediction method is universal if it is always able to learn what there is to learn from data: if it is always able to extrapolate given data about past observations to maximally successful predictions about future observations. The context of this investigation is the broader philosophical question of the possibility of a formal specification of inductive or scientific reasoning, a question that also touches on modern-day speculation about a fully automatized data-driven science. I investigate, in particular, a specific mathematical definition of a universal prediction method that goes back to the early days of artificial intelligence and has a direct line to modern developments in machine learning. This definition essentially aims to combine all possible prediction algorithms. An alternative interpretation is that this definition formalizes the idea that learning from data is equivalent to compressing data. In this guise, the definition is often presented as an implementation and even as a justification of Occam's razor, the principle that we should look for simple explanations. The conclusions of my investigation are negative. I show that the proposed definition cannot be interpreted as a universal prediction method, as turns out to be exposed by a mathematical argument that it was actually intended to overcome. Moreover, I show that the suggested justification of Occam's razor does not work, and I argue that the relevant notion of simplicity as compressibility is problematic in itself.
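
    The "learning as compression" reading mentioned here rests on a standard correspondence: a predictor that assigns probability P to a data sequence can encode that sequence in roughly -log2(P) bits, so better sequential prediction means shorter codes. The sketch below illustrates this with two invented predictors on a biased bit sequence; neither the predictors nor the data come from the dissertation.

        # Small illustration of the prediction-compression correspondence: the ideal
        # code length of a sequence under a predictor is the sum of -log2 of the
        # probabilities it assigned to each observed bit (as in arithmetic coding).
        import math

        def uniform_predictor(history):
            """Ignores the data: always P(next bit = 1) = 0.5."""
            return 0.5

        def laplace_predictor(history):
            """Learns the bit frequency: (ones + 1) / (length + 2)."""
            return (sum(history) + 1) / (len(history) + 2)

        def code_length_bits(data, predictor):
            """Ideal code length of `data` under the predictor's sequential probabilities."""
            bits, history = 0.0, []
            for bit in data:
                p_one = predictor(history)
                p_bit = p_one if bit == 1 else 1.0 - p_one
                bits += -math.log2(p_bit)
                history.append(bit)
            return bits

        biased = [1] * 45 + [0] * 5        # a highly regular (compressible) sequence
        print("uniform :", round(code_length_bits(biased, uniform_predictor), 1), "bits")
        print("laplace :", round(code_length_bits(biased, laplace_predictor), 1), "bits")
        # The predictor that learns the regularity assigns the data higher probability
        # and therefore a shorter code; on a patternless sequence the two are comparable.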

    Universal Prediction: A Philosophical Investigation
