5 research outputs found
Evaluation of unique identifiers used as keys to match identical publications in Pure and SciVal: a case study from health science
Unique identifiers (UIDs) are seen as an effective key to match identical publications across databases or to identify duplicates within a database. The objective of the present study is to investigate how well UIDs work as match keys in the integration between Pure and SciVal, based on a case with publications from the health sciences. We evaluate the matching process based on information about coverage, precision, and characteristics of publications matched versus not matched with UIDs as the match keys, and we analyze this information to detect errors, if any, in the matching process. As an example, we also briefly discuss how publication sets formed by using UIDs as the match keys may affect the bibliometric indicators: number of publications, number of citations, and average number of citations per publication. The objective is addressed in a literature review and a case study. The literature review shows that only a few studies evaluate how well UIDs work as a match key. From the literature we identify four error types: duplicate digital object identifiers (DOIs), incorrect DOIs in reference lists and databases, DOIs not registered by the database where a bibliometric analysis is performed, and erroneous optical or special character recognition. The case study explores the use of UIDs in the integration between the databases Pure and SciVal; specifically, journal publications in English are matched between the two databases. We find all error types except erroneous optical or special character recognition in our publication sets. In particular, duplicate DOIs constitute a problem for the calculation of bibliometric indicators, as keeping the duplicates to improve the reliability of citation counts and deleting them to improve the reliability of publication counts will both distort the average number of citations per publication. The use of UIDs as a match key in citation linking is implemented in many settings, and the availability of UIDs may become critical for the inclusion of a publication or a database in a bibliometric analysis.
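The distortion described in the abstract can be made concrete with a small worked example. The sketch below is illustrative only: the DOIs and citation counts are invented rather than taken from the study; it simply shows, in Python, why neither keeping nor deleting duplicate DOI records yields a correct average number of citations per publication.

```python
# Illustrative only: invented DOIs and citation counts, not data from the study.
# The point: with duplicate DOI records, neither keeping nor deleting the
# duplicates reproduces the correct average number of citations per publication.

records = [
    {"doi": "10.1000/a1", "citations": 10},
    {"doi": "10.1000/a2", "citations": 4},
    {"doi": "10.1000/a3", "citations": 6},  # duplicate DOI, first record
    {"doi": "10.1000/a3", "citations": 2},  # duplicate DOI, second record
]

def indicators(pubs):
    """Return (number of publications, number of citations, average citations)."""
    n = len(pubs)
    c = sum(p["citations"] for p in pubs)
    return n, c, round(c / n, 2)

# Keeping duplicates: citation count is complete, publication count is inflated.
print("with duplicates:   ", indicators(records))    # (4, 22, 5.5)

# Deleting duplicates (keep the first record per DOI): publication count is
# correct, but the citations attached to the discarded record are lost.
seen, deduped = set(), []
for p in records:
    if p["doi"] not in seen:
        seen.add(p["doi"])
        deduped.append(p)
print("duplicates removed:", indicators(deduped))     # (3, 20, 6.67)

# The "true" values would be 3 publications and 22 citations, i.e. an average
# of about 7.33, which neither variant reproduces.
```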
I am charging my battery, now I am full of Energy. I am the Robot
Experiences with chatbot technology. Over the past year, the Royal Danish Library's central contact service "Spørg biblioteket" (Ask the Library) has been supplemented with a virtual assistant
Applying the Leiden Manifesto principles in practice: commonalities and differences in interpretation
The Leiden Manifesto (LM) is changing how we think about and use metrics [1]. Bibliometric evaluation is explained as a combination of quantitative and qualitative methods, allowing the use of different metrics, disciplinary knowledge and research performance strategies. Both bibliometricians and consumers of bibliometrics are encouraged to communicate and use the LM principles to acknowledge what they know and do not know, what is measured and what is not measured, thus legitimizing the use of the metrics.
However, in our previous study, we observed that it is unclear how the LM principles should be interpreted [2, 3]. We suspect that subjective interpretations of the principles do not correlate. To investigate the reliability and validity of the LM, the present study presents a systematic review of bibliometric reports that apply the LM principles. Reports are retrieved from the LM blog [4], Scopus, Web of Science, and Google Scholar. Each principle and its interpretation are coded in NVivo, after which we explore the degree of agreement in the interpretations across the reports.
We find that for some principles, e.g. principle 1, the interpretations are well aligned. For other principles, e.g. principle 3, the interpretations differ but may be seen as complementary. We also observe that interpretations can overlap, e.g. for principles 3 and 6, and thus the redundancy of the principles needs to be investigated further.
We conclude that, at least for some of the LM principles, the reliability appears weak, as the range of interpretations is wide, albeit complementary. Furthermore, some interpretations are applied to more than one principle, which may point to weak validity.
Further research on the reliability and the validity of the LM will be essential to establish guidance for implementing the LM in practice.
Evaluation of unique identifiers used for citation linking [version 1; referees: 1 approved, 2 approved with reservations]
Unique identifiers (UIDs) are seen as an effective tool to create links between identical publications in databases or to identify duplicates in a database. The purpose of the present study is to investigate how well UIDs work for citation linking. We have two objectives: to explore the coverage, precision, and characteristics of publications matched versus not matched with UIDs as the match key, and to illustrate how publication sets formed by using UIDs as the match key may affect the bibliometric indicators: number of publications, number of citations, and average number of citations per publication. The objectives are addressed in a literature review and a case study. The literature review shows that only a few studies evaluate how well UIDs work as a match key. From the literature we identify four error types: duplicate digital object identifiers (DOIs), incorrect DOIs in reference lists and databases, DOIs not registered by the database where a bibliometric analysis is performed, and erroneous optical or special character recognition. The case study explores the use of UIDs in the integration between the databases Pure and SciVal; specifically, journal publications in English are matched between the two databases. We find all error types except erroneous optical or special character recognition in our publication sets. In particular, duplicate DOIs constitute a problem for the calculation of bibliometric indicators, as keeping the duplicates to improve the reliability of citation counts and deleting them to improve the reliability of publication counts will both distort the average number of citations per publication. The use of UIDs as a match key in citation linking is implemented in many settings, and the availability of UIDs may become critical for the inclusion of a publication or a database in a bibliometric analysis.
Bottom-up implementation of Leiden Manifesto
Introduction
The Leiden Manifesto (LM) is ten principles to guide quantitative research evaluation [1]. Universities have launched policies on responsible metrics inspired by the LM [2]. As a supplement or alternative to these top-down implementations, we explore a bottom-up approach.

Objective
A sound bibliometric analysis that meets the requests from clients, e.g. management and researchers at a university, can be a demanding task. The foci of bibliometricians and clients do not always overlap [3, 4]. We test whether the LM can be used as a "consumer label" on a bibliometric analysis, providing clients with information about the contents and facilitating responsible use. The goal is to improve how applied bibliometrics is conducted and used.

Methodology
We select two typical cases from Copenhagen University Library Bibliometric Service. The first is a bibliometric analysis of the research from a department at the university. The second is a calculation of an h-index for a research proposal (see the sketch after the references). The LM is used as a "consumer label" for both analyses.

Selected results
Based on the two cases, we see the following potentials in using the ten LM principles as a "consumer label" on a bibliometric analysis:
- Evaluation of own practice, e.g. bibliometricians could do more to ensure that those evaluated verify and thus legitimize the analysis (Principle 5).
- Advising clients on the use of the analysis, e.g. reminding clients that not all research activities and publications are covered and how this can affect the results and use of the analysis (Principle 3).
- Emphasizing the division of responsibilities, e.g. it is the responsibility of the client to supply the research mission and of the bibliometrician to select appropriate indicators (Principle 2) [3].

References
1. Hicks, D., Wouters, P., Waltman, L., de Rijcke, S. & Rafols, I. Bibliometrics: The Leiden Manifesto for research metrics. Nature 520, 429–431 (2015).
2. Hicks, D., Wouters, P., Waltman, L., de Rijcke, S. & Rafols, I. Leiden Manifesto for research metrics: Blog. Available at: http://www.leidenmanifesto.org/blog (Accessed: 6th August 2017).
3. Wildgaard, L., Andersen, J. P., Larsen, K. S., Price, A. & Gauffriau, M. Can we implement the Leiden Manifesto principles in our daily work with research indicators? (2016).
4. Hammarfelt, B. & Rushforth, A. D. Indicators as judgment devices: Citizen bibliometrics in biomedicine and economics. (2016).
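As a side note to the second case in the methodology above, the h-index itself is a simple computation. The following is a minimal sketch of the standard definition (the largest h such that at least h publications each have at least h citations); the citation counts are invented for illustration, and the code is not a reconstruction of the library's actual procedure.

```python
def h_index(citations):
    """Standard h-index: the largest h such that at least h publications
    each have at least h citations."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Invented citation counts, for illustration only.
print(h_index([12, 9, 7, 5, 4, 1, 0]))  # 4 -> four papers with >= 4 citations each
```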