
    Measuring co-authorship and networking-adjusted scientific impact

    Appraisal of the scientific impact of researchers, teams and institutions with productivity and citation metrics has major repercussions. Funding and promotion of individuals and survival of teams and institutions depend on publications and citations. In this competitive environment, the number of authors per paper is increasing and apparently some co-authors don't satisfy authorship criteria. Listing of individual contributions is still sporadic and also open to manipulation. Metrics are needed to measure the networking intensity for a single scientist or group of scientists accounting for patterns of co-authorship. Here, I define I1 for a single scientist as the number of authors who appear in at least I1 papers of the specific scientist. For a group of scientists or institution, In is defined as the number of authors who appear in at least In papers that bear the affiliation of the group or institution. I1 depends on the number of papers authored, Np. The power exponent R of the relationship between I1 and Np categorizes scientists as solitary (R>2.5), nuclear (R=2.25-2.5), networked (R=2-2.25), extensively networked (R=1.75-2) or collaborators (R<1.75). R may be used to adjust the citation impact of a scientist for co-authorship networking. In similarly provides a simple measure of the effective networking size to adjust the citation impact of groups or institutions. Empirical data are provided for single scientists and institutions for the proposed metrics. Cautious adoption of adjustments for co-authorship and networking in scientific appraisals may offer incentives for more accountable co-authorship behaviour in published articles. Comment: 25 pages, 5 figures.
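    The I1 definition above is an h-index-style fixed point over co-author appearance counts, and the R thresholds give a direct categorization rule. A minimal sketch in Python (assuming, since the abstract does not specify, that the focal scientist is excluded from the count and that boundary R values fall into the lower-R category):

```python
from collections import Counter

def i1_index(papers, focal_author):
    """I1: the number of co-authors who each appear in at least I1 of the
    focal scientist's papers (an h-index-style fixed point).

    `papers` is a list of author-name lists. The focal author is excluded
    from the counts -- an assumption, as the abstract does not say whether
    the scientist counts him/herself.
    """
    counts = Counter(a for p in papers for a in p if a != focal_author)
    freqs = sorted(counts.values(), reverse=True)
    i1 = 0
    for rank, c in enumerate(freqs, start=1):
        if c >= rank:
            i1 = rank   # at least `rank` co-authors appear in >= `rank` papers
        else:
            break
    return i1

def networking_category(r):
    """Map the power exponent R of the I1-vs-Np relationship to the
    categories proposed in the abstract. Boundary values (e.g. R = 2.25)
    are assigned to the lower-R category -- an assumption, since the
    abstract's ranges overlap at their endpoints."""
    if r > 2.5:
        return "solitary"
    if r > 2.25:
        return "nuclear"
    if r > 2:
        return "networked"
    if r > 1.75:
        return "extensively networked"
    return "collaborator"
```

    The In metric for a group or institution is computed the same way, treating all papers bearing the group's affiliation as the paper set. Note that i1_index scans co-author frequencies in descending order exactly as one computes an h-index over citation counts.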

    The metric tide: report of the independent review of the role of metrics in research assessment and management

    This report presents the findings and recommendations of the Independent Review of the Role of Metrics in Research Assessment and Management. The review was chaired by Professor James Wilsdon, supported by an independent and multidisciplinary group of experts in scientometrics, research funding, research policy, publishing, university management and administration. This review has gone beyond earlier studies to take a deeper look at potential uses and limitations of research metrics and indicators. It has explored the use of metrics across different disciplines, and assessed their potential contribution to the development of research excellence and impact. It has analysed their role in processes of research assessment, including the next cycle of the Research Excellence Framework (REF). It has considered the changing ways in which universities are using quantitative indicators in their management systems, and the growing power of league tables and rankings. And it has considered the negative or unintended effects of metrics on various aspects of research culture. The report starts by tracing the history of metrics in research management and assessment, in the UK and internationally. It looks at the applicability of metrics within different research cultures, compares the peer review system with metric-based alternatives, and considers what balance might be struck between the two. It charts the development of research management systems within institutions, and examines the effects of the growing use of quantitative indicators on different aspects of research culture, including performance management, equality, diversity, interdisciplinarity, and the ‘gaming’ of assessment systems. The review looks at how different funders are using quantitative indicators, and considers their potential role in research and innovation policy. Finally, it examines the role that metrics played in REF2014, and outlines scenarios for their contribution to future exercises.

    Prescriptions for Excellence in Health Care Summer 2012


    Rehabilitation medicine summit: building research capacity Executive Summary

    The general objective of the "Rehabilitation Medicine Summit: Building Research Capacity" was to advance and promote research in medical rehabilitation by making recommendations to expand research capacity. The five elements of research capacity that guided the discussions were: 1) researchers; 2) research culture, environment, and infrastructure; 3) funding; 4) partnerships; and 5) metrics. The 100 participants included representatives of professional organizations, consumer groups, academic departments, researchers, governmental funding agencies, and the private sector. The small group discussions and plenary sessions generated an array of problems, possible solutions, and recommended actions. A post-Summit, multi-organizational initiative was called for to pursue the agendas outlined in this report (see Additional File 1).

    Just how difficult can it be counting up R&D funding for emerging technologies (and is tech mining with proxy measures going to be any better?)

    Decision makers considering policy or strategy related to the development of emerging technologies expect high-quality data on the support for different technological options. A natural starting point would be R&D funding statistics. This paper explores the limitations of such aggregated data in relation to the substance and quantification of funding for emerging technologies. Using biotechnology as an illustrative case, we test the utility of a novel taxonomy to demonstrate the endemic weaknesses in the availability and quality of data from public and private sources. Using the same taxonomy, we consider the extent to which tech-mining presents an alternative, or potentially complementary, way to determine support for emerging technologies using proxy measures such as patents and scientific publications.

    How journal rankings can suppress interdisciplinary research. A comparison between Innovation Studies and Business & Management

    This study provides quantitative evidence on how the use of journal rankings can disadvantage interdisciplinary research in research evaluations. Using publication and citation data, it compares the degree of interdisciplinarity and the research performance of a number of Innovation Studies units with that of leading Business & Management schools in the UK. On the basis of various mappings and metrics, this study shows that: (i) Innovation Studies units are consistently more interdisciplinary in their research than Business & Management schools; (ii) the top journals in the Association of Business Schools' rankings span a less diverse set of disciplines than lower-ranked journals; (iii) this results in a more favourable assessment of the performance of Business & Management schools, which are more disciplinary-focused. This citation-based analysis challenges the journal ranking-based assessment. In short, the investigation illustrates how ostensibly 'excellence-based' journal rankings exhibit a systematic bias in favour of mono-disciplinary research. The paper concludes with a discussion of implications of these phenomena, in particular how the bias is likely to negatively affect the evaluation and associated financial resourcing of interdisciplinary research organisations, and may result in researchers becoming more compliant with disciplinary authority over time. Comment: 41 pages, 10 figures.

    Improving clinical research and cancer care delivery in community settings: evaluating the NCI community cancer centers program

    Background: In this article, we describe the National Cancer Institute (NCI) Community Cancer Centers Program (NCCCP) pilot and the evaluation designed to assess its role, function, and relevance to the NCI's research mission. In doing so, we describe the evolution of and rationale for the NCCCP concept, participating sites' characteristics, its multi-faceted aims to enhance clinical research and quality of care in community settings, and the role of strategic partnerships, both within and outside of the NCCCP network, in achieving program objectives. Discussion: The evaluation of the NCCCP is conceptualized as a mixed-method, multi-layered assessment of organizational innovation and performance, which includes mapping the evolution of site development as a means of understanding the inter- and intra-organizational change in the pilot, and the application of specific evaluation metrics for assessing the implementation, operations, and performance of the NCCCP pilot. The assessment of the cost of the pilot as an additional means of informing the longer-term feasibility and sustainability of the program is also discussed. Summary: The NCCCP is a major systems-level set of organizational innovations to enhance clinical research and care delivery in diverse communities across the United States. Assessment of the extent to which the program achieves its aims will depend on a full understanding of how individual, organizational, and environmental factors align (or fail to align) to achieve these improvements, and at what cost.

    From Scientific Discovery to Cures: Bright Stars within a Galaxy

    We propose that data mining and network analysis utilizing public databases can identify and quantify relationships between scientific discoveries and major advances in medicine (cures). Further development of such approaches could help to increase public understanding and governmental support for life science research and could enhance decision making in the quest for cures.