    Assessing the Benefits of Public Research Within an Economic Framework: The Case of USDA's Agricultural Research Service

    Evaluation of publicly funded research can help provide accountability and prioritize programs. In addition, Federal intramural research planning generally involves an institutional assessment of the appropriate Federal role, if any, and whether the research should be left to others, such as universities or the private sector. Many methods of evaluation are available, peer review—used primarily for establishing scientific merit—being the most common. Economic analysis focuses on quantifying ultimate research outcomes, whether measured in goods with market prices or in nonmarket goods such as environmental quality or human health. However, standard economic techniques may not be amenable for evaluating some important public research priorities or for institutional assessments. This report reviews quantitative methods and applies qualitative economic reasoning and stakeholder interviewing methods to the evaluation of economic benefits of Federal intramural research using three case studies of research conducted by USDA’s Agricultural Research Service (ARS). Differences among the case studies highlight the need to select suitable assessment techniques from available methodologies, the limited scope for comparing assessment results across programs, and the inherent difficulty in quantifying benefits in some research areas. When measurement and attribution issues make it difficult to quantify these benefits, the report discusses how qualitative insights based on economic concepts can help research prioritization.
    Keywords: Agricultural Research Service, Federal intramural research, publicly funded research, Environmental Economics and Policy, Food Consumption/Nutrition/Food Safety, Livestock Production/Industries, Productivity Analysis

    The metric tide: report of the independent review of the role of metrics in research assessment and management

    This report presents the findings and recommendations of the Independent Review of the Role of Metrics in Research Assessment and Management. The review was chaired by Professor James Wilsdon, supported by an independent and multidisciplinary group of experts in scientometrics, research funding, research policy, publishing, university management and administration. This review has gone beyond earlier studies to take a deeper look at potential uses and limitations of research metrics and indicators. It has explored the use of metrics across different disciplines, and assessed their potential contribution to the development of research excellence and impact. It has analysed their role in processes of research assessment, including the next cycle of the Research Excellence Framework (REF). It has considered the changing ways in which universities are using quantitative indicators in their management systems, and the growing power of league tables and rankings. And it has considered the negative or unintended effects of metrics on various aspects of research culture. The report starts by tracing the history of metrics in research management and assessment, in the UK and internationally. It looks at the applicability of metrics within different research cultures, compares the peer review system with metric-based alternatives, and considers what balance might be struck between the two. It charts the development of research management systems within institutions, and examines the effects of the growing use of quantitative indicators on different aspects of research culture, including performance management, equality, diversity, interdisciplinarity, and the ‘gaming’ of assessment systems. The review looks at how different funders are using quantitative indicators, and considers their potential role in research and innovation policy. Finally, it examines the role that metrics played in REF2014, and outlines scenarios for their contribution to future exercises.

    A Review of Theory and Practice in Scientometrics

    Scientometrics is the study of the quantitative aspects of the process of science as a communication system. It is centrally, but not only, concerned with the analysis of citations in the academic literature. In recent years it has come to play a major role in the measurement and evaluation of research performance. In this review we consider: the historical development of scientometrics, sources of citation data, citation metrics and the “laws” of scientometrics, normalisation, journal impact factors and other journal metrics, visualising and mapping science, evaluation and policy, and future developments.
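    Among the citation metrics such reviews typically cover is the h-index, defined as the largest h such that an author has h papers each cited at least h times. A minimal sketch of its computation (the function name is our own, not from any of the works listed here):

    ```python
    def h_index(citations):
        """Return the largest h such that at least h of the given
        papers have h or more citations each."""
        cites = sorted(citations, reverse=True)  # most-cited first
        h = 0
        for rank, c in enumerate(cites, start=1):
            if c >= rank:  # the rank-th paper still has >= rank citations
                h = rank
            else:
                break
        return h

    # A researcher with citation counts [10, 8, 5, 4, 3] has h = 4:
    # four papers are cited at least four times, but not five at least five.
    print(h_index([10, 8, 5, 4, 3]))  # 4
    ```

    The same one-pass ranking idea underlies several related indicators (e.g. the g-index), which differ mainly in the condition tested at each rank.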

    Throwing Out the Baby with the Bathwater: The Undesirable Effects of National Research Assessment Exercises on Research

    The evaluation of the quality of research at a national level has become increasingly common. The UK has been at the forefront of this trend, having undertaken many assessments since 1986, the latest being the “Research Excellence Framework” in 2014. The argument of this paper is that, whatever the intended results in terms of evaluating and improving research, there have been many, presumably unintended, results that are highly undesirable for research and the university community more generally. We situate our analysis using Bourdieu’s theory of cultural reproduction and then focus on the peculiarities of the 2008 RAE and the 2014 REF, the rules of which allowed for, and indeed encouraged, significant game-playing on the part of striving universities. We conclude with practical recommendations to maintain the general intention of research assessment without the undesirable side-effects.

    Applied Evaluative Informetrics: Part 1

    Full text link
    This manuscript is a preprint version of Part 1 (General Introduction and Synopsis) of the book Applied Evaluative Informetrics, to be published by Springer in the summer of 2017. This book presents an introduction to the field of applied evaluative informetrics, and is written for interested scholars and students from all domains of science and scholarship. It sketches the field's history, recent achievements, and its potential and limits. It explains the notion of multi-dimensional research performance, and discusses the pros and cons of 28 citation-, patent-, reputation- and altmetrics-based indicators. In addition, it presents quantitative research assessment as an evaluation science, and focuses on the role of extra-informetric factors in the development of indicators, and on the policy context of their application. It also discusses the way forward, both for users and for developers of informetric tools.