5,794 research outputs found

    The metric tide: report of the independent review of the role of metrics in research assessment and management

    This report presents the findings and recommendations of the Independent Review of the Role of Metrics in Research Assessment and Management. The review was chaired by Professor James Wilsdon and supported by an independent and multidisciplinary group of experts in scientometrics, research funding, research policy, publishing, university management and administration. This review has gone beyond earlier studies to take a deeper look at potential uses and limitations of research metrics and indicators. It has explored the use of metrics across different disciplines, and assessed their potential contribution to the development of research excellence and impact. It has analysed their role in processes of research assessment, including the next cycle of the Research Excellence Framework (REF). It has considered the changing ways in which universities are using quantitative indicators in their management systems, and the growing power of league tables and rankings. And it has considered the negative or unintended effects of metrics on various aspects of research culture. The report starts by tracing the history of metrics in research management and assessment, in the UK and internationally. It looks at the applicability of metrics within different research cultures, compares the peer review system with metric-based alternatives, and considers what balance might be struck between the two. It charts the development of research management systems within institutions, and examines the effects of the growing use of quantitative indicators on different aspects of research culture, including performance management, equality, diversity, interdisciplinarity, and the ‘gaming’ of assessment systems. The review looks at how different funders are using quantitative indicators, and considers their potential role in research and innovation policy. Finally, it examines the role that metrics played in REF2014, and outlines scenarios for their contribution to future exercises.

    The insoluble problems of books: What does Altmetric.com have to offer?

    Purpose: To analyze the capabilities, functionalities and appropriateness of Altmetric.com as a data source for the bibliometric analysis of books, in comparison with PlumX. Methodology: We perform an exploratory analysis of the metrics the Altmetric Explorer for Institutions platform offers for books, using two distinct datasets. On the one hand, we analyze the Book Collection included in Altmetric.com; on the other, we use Clarivate’s Master Book List to analyze Altmetric.com’s capability to download and merge data with external databases. Finally, we compare our findings with those obtained in a previous study of PlumX. Findings: Altmetric.com tracks an ordered set of data sources, combined via DOI identifiers, to retrieve book metadata, with Google Books as its main provider. It also retrieves information from commercial publishers and from some Open Access initiatives, including those led by university libraries such as Harvard Library. We find issues with the linkage between records and mentions, as well as ISBN discrepancies. Furthermore, we find that automated bots greatly affect Wikipedia mentions of books. Our comparison with PlumX suggests that neither tool provides a complete picture of the social attention generated by books; they are complementary rather than comparable tools. Practical implications: This study targets several audiences that can benefit from our findings. First, bibliometricians and researchers who seek alternative sources for bibliometric analyses of books, with a special focus on the Social Sciences and Humanities. Second, librarians and research managers, who are the main clients to whom these tools are directed. Third, Altmetric.com itself, as well as other altmetric providers, who might gain a better understanding of the limitations users encounter and improve this promising tool. Originality/value: This is the first study to analyze Altmetric.com’s functionalities and capabilities for providing metric data for books, and to compare results from this platform with those obtained via PlumX.
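
    The record merging described above has to cope with ISBN discrepancies: the same book often carries an ISBN-10 in one database and an ISBN-13 in another. Below is a minimal Python sketch of ISBN normalization for such a join; the field names and matching logic are illustrative assumptions, not Altmetric.com's actual implementation.

```python
# Hypothetical illustration of ISBN-based record linkage between an
# Altmetric.com export and an external book list (e.g. Clarivate's
# Master Book List). The "isbn" field name is an assumption.

def normalize_isbn(raw: str) -> str:
    """Strip hyphens/spaces and upgrade ISBN-10 to ISBN-13."""
    isbn = raw.replace("-", "").replace(" ", "").upper()
    if len(isbn) == 13:
        return isbn
    if len(isbn) == 10:
        core = "978" + isbn[:9]  # drop the ISBN-10 check digit
        # ISBN-13 check digit: alternating weights 1 and 3 over 12 digits.
        total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(core))
        return core + str((10 - total % 10) % 10)
    raise ValueError(f"unrecognized ISBN: {raw!r}")

def merge_by_isbn(altmetric_rows, master_rows):
    """Pair each Altmetric record with its Master Book List match, if any."""
    index = {normalize_isbn(row["isbn"]): row for row in master_rows}
    return [(row, index.get(normalize_isbn(row["isbn"]))) for row in altmetric_rows]
```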

    DataCite as a novel bibliometric source: Coverage, strengths and limitations

    This paper explores the characteristics of DataCite to determine its possibilities and potential as a new bibliometric data source for analyzing the scholarly production of open data. Open science, and the increasing data-sharing requirements from governments, funding bodies, institutions and scientific journals, have led to a pressing demand for the development of data metrics. As a first step towards reliable data metrics, we need to better understand the limitations and caveats of the information provided by sources of open data. In this paper, we critically examine records downloaded from DataCite's OAI API and elaborate a series of recommendations regarding the use of this source for bibliometric analyses of open data. We highlight issues related to metadata incompleteness, lack of standardization, and ambiguous definitions of several fields. Despite these limitations, we emphasize DataCite's value and potential to become one of the main sources for the development of data metrics.
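
    Since the analysis rests on records pulled from DataCite's OAI API, a minimal harvesting sketch may help make the setup concrete. The endpoint URL and the oai_dc metadata format below are assumptions about DataCite's public OAI-PMH interface, not a reproduction of the authors' pipeline.

```python
# Hedged sketch: page through OAI-PMH ListRecords responses and pull
# out Dublin Core identifiers and titles. The endpoint is an assumption.
import requests
import xml.etree.ElementTree as ET

ENDPOINT = "https://oai.datacite.org/oai"  # assumed public endpoint
NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def harvest(max_pages: int = 2):
    """Yield sparse Dublin Core records, following resumption tokens."""
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    for _ in range(max_pages):
        response = requests.get(ENDPOINT, params=params, timeout=60)
        root = ET.fromstring(response.content)
        for record in root.iterfind(".//oai:record", NS):
            # Fields are frequently missing or ambiguous -- the very
            # caveats the paper documents.
            yield {
                "identifiers": [e.text for e in record.iterfind(".//dc:identifier", NS)],
                "titles": [e.text for e in record.iterfind(".//dc:title", NS)],
            }
        token = root.find(".//oai:resumptionToken", NS)
        if token is None or not token.text:
            break
        params = {"verb": "ListRecords", "resumptionToken": token.text}

for rec in harvest():
    print(rec["identifiers"], rec["titles"])
```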

    Scholarometer: A Social Framework for Analyzing Impact across Disciplines

    The use of quantitative metrics to gauge the impact of scholarly publications, authors, and disciplines is predicated on the availability of reliable usage and annotation data. Citation and download counts are widely available from digital libraries. However, current annotation systems rely on proprietary labels, refer to journals but not articles or authors, and are manually curated. To address these limitations, we propose a social framework based on crowdsourced annotations of scholars, designed to keep up with the rapidly evolving disciplinary and interdisciplinary landscape. We describe a system called Scholarometer, which provides a service to scholars by computing citation-based impact measures. This creates an incentive for users to provide disciplinary annotations of authors, which in turn can be used to compute disciplinary metrics. We first present the system architecture and several heuristics for dealing with noisy bibliographic and annotation data, and report on the data sharing and interactive visualization services enabled by Scholarometer. Usage statistics, illustrating the data collected and shared through the framework, suggest that the proposed crowdsourcing approach can be successful. Second, we illustrate how the disciplinary bibliometric indicators elicited by Scholarometer allow us to implement, for the first time, a universal impact measure proposed in the literature. Our evaluation suggests that this metric provides an effective means of comparing scholarly impact across disciplinary boundaries.
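
    To make "citation-based impact measures" concrete, here is a minimal sketch of the h-index alongside a simple discipline-normalized variant (an author's h divided by the average h in their discipline). This captures the spirit of a universal, cross-discipline measure; the exact definition Scholarometer implements may differ.

```python
# Hedged sketch of two impact measures a Scholarometer-style service
# might compute from citation counts and crowdsourced discipline labels.
from statistics import mean

def h_index(citations: list[int]) -> int:
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

def normalized_h(h: int, discipline_hs: list[int]) -> float:
    """Rescale h by the average h of scholars annotated with the same discipline."""
    return h / mean(discipline_hs)

# Six papers with these citation counts yield h = 3.
h = h_index([10, 8, 5, 3, 2, 1])
print(h, normalized_h(h, discipline_hs=[2, 3, 4, 5]))  # 3 0.857...
```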

    How Librarians Can Help Improve Law Journal Publishing

    Librarians are well positioned to improve law journal publishing and help it evolve in the ever-changing digital environment. They can provide student editors with advice on a variety of issues such as copyright, data preservation, and version control. Librarians can also help journals adopt technical standards and improve the discoverability and usability of journal content. While few libraries will be able to adopt all these suggestions, a checklist of ideas is provided to help librarians select those that are most suitable to their libraries and journals.

    "I updated the <ref>": The evolution of references in the English Wikipedia and the implications for altmetrics

    With this work, we present a publicly available data set of the history of all the references (more than 55 million) ever used in the English Wikipedia until June 2019. We have applied a new method for identifying and monitoring references in Wikipedia, so that for each reference we can provide data about associated actions: creation, modifications, deletions, and reinsertions. The high accuracy of this method and the resulting data set was confirmed via a comprehensive crowdworker labeling campaign. We use the data set to study the temporal evolution of Wikipedia references as well as users’ editing behavior. We find evidence of a mostly productive and continuous effort to improve the quality of references: There is a persistent increase of reference and document identifiers (DOI, PubMedID, PMC, ISBN, ISSN, ArXiv ID), and most of the reference curation work is done by registered humans (not bots or anonymous editors). We conclude that the evolution of Wikipedia references, including the dynamics of the community processes that tend to them, should be leveraged in the design of relevance indexes for altmetrics, and our data set can be pivotal for such an effort.
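
    As a rough illustration of the identifiers tracked (DOI, PubMedID, PMC, ISBN, ISSN, ArXiv ID), the sketch below pulls identifier parameters out of {{cite ...}} template wikitext with a regular expression. This heuristic is far cruder than the authors' reference-identification method; the parameter names are assumptions based on common Wikipedia citation-template conventions.

```python
# Hedged sketch: extract scholarly identifiers from one reference's
# wikitext. Real references are messier than this regex assumes.
import re

ID_PATTERN = re.compile(
    r"\|\s*(doi|pmid|pmc|isbn|issn|arxiv)\s*=\s*([^|}\s]+)",
    re.IGNORECASE,
)

def extract_identifiers(wikitext: str) -> dict[str, str]:
    """Return {identifier_type: value} for identifiers found in the wikitext."""
    return {key.lower(): value for key, value in ID_PATTERN.findall(wikitext)}

ref = "{{cite journal |title=Example |doi=10.1000/xyz123 |pmid=12345678}}"
print(extract_identifiers(ref))  # {'doi': '10.1000/xyz123', 'pmid': '12345678'}
```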