The Global Risks Report 2016, 11th Edition
Now in its 11th edition, The Global Risks Report 2016 draws attention to ways that global risks could evolve and interact in the next decade. The year 2016 marks a forceful departure from past findings: the risks about which the Report has been warning over the past decade are starting to manifest themselves in new, sometimes unexpected ways and to harm people, institutions and economies. A warming climate is likely to raise this year's temperature to 1° Celsius above the pre-industrial era; 60 million people, equivalent to the world's 24th-largest country and the largest number in recent history, are forcibly displaced; and crimes in cyberspace cost the global economy an estimated US$445 billion, more than many economies' national incomes. In this context, the Report calls for action to build resilience – the "resilience imperative" – and identifies practical examples of how this could be done. The Report also steps back and explores how emerging global risks and major trends, such as climate change, the rise of cyber dependence, and income and wealth disparity, are impacting already-strained societies by highlighting three clusters of risks as Risks in Focus. As resilience building is helped by the ability to analyse global risks from the perspective of specific stakeholders, the Report also analyses the significance of global risks to the business community at a regional and country level.
A review of the characteristics of 108 author-level bibliometric indicators
An increasing demand for bibliometric assessment of individuals has led to a
growth of new bibliometric indicators as well as new variants or combinations
of established ones. The aim of this review is to contribute with objective
facts about the usefulness of bibliometric indicators of the effects of
publication activity at the individual level. This paper reviews 108 indicators
that can potentially be used to measure performance on the individual author
level, and examines the complexity of their calculations in relation to what
they are supposed to reflect and their ease of end-user application.
Comment: to be published in Scientometrics, 201
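One of the most widely used author-level indicators covered by reviews of this kind is the h-index. As a minimal, illustrative sketch (the function name and sample citation counts are assumptions, not taken from the paper):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h publications with at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: an author with five papers cited 10, 8, 5, 4 and 3 times
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

The review's point about calculation complexity versus ease of application is visible even here: the h-index is trivial to compute, yet what it "reflects" (productivity, impact, or both) is contested.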
A review of the literature on citation impact indicators
Citation impact indicators nowadays play an important role in research
evaluation, and consequently these indicators have received a lot of attention
in the bibliometric and scientometric literature. This paper provides an
in-depth review of the literature on citation impact indicators. First, an
overview is given of the literature on bibliographic databases that can be used
to calculate citation impact indicators (Web of Science, Scopus, and Google
Scholar). Next, selected topics in the literature on citation impact indicators
are reviewed in detail. The first topic is the selection of publications and
citations to be included in the calculation of citation impact indicators. The
second topic is the normalization of citation impact indicators, in particular
normalization for field differences. Counting methods for dealing with
co-authored publications are the third topic, and citation impact indicators
for journals are the last topic. The paper concludes by offering some
recommendations for future research.
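Normalization for field differences, the review's second topic, is commonly done by dividing a paper's citation count by the mean citation count of papers in its field (an MNCS-style normalization). A minimal sketch, assuming a flat list of (field, citations) pairs; the function name and data are hypothetical:

```python
from collections import defaultdict

def field_normalized_scores(papers):
    """papers: list of (field, citations) pairs. Returns each paper's
    citation count divided by the mean citation count of its field."""
    totals, counts = defaultdict(int), defaultdict(int)
    for field, cites in papers:
        totals[field] += cites
        counts[field] += 1
    means = {f: totals[f] / counts[f] for f in totals}
    return [cites / means[field] for field, cites in papers]

papers = [("biology", 30), ("biology", 10), ("math", 6), ("math", 2)]
print(field_normalized_scores(papers))  # → [1.5, 0.5, 1.5, 0.5]
```

A score above 1 means the paper is cited more than the average paper in its field, which makes papers from high-citation fields (e.g. biology) and low-citation fields (e.g. mathematics) comparable.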
Field-normalized citation impact indicators and the choice of an appropriate counting method
Bibliometric studies often rely on field-normalized citation impact
indicators in order to make comparisons between scientific fields. We discuss
the connection between field normalization and the choice of a counting method
for handling publications with multiple co-authors. Our focus is on the choice
between full counting and fractional counting. Based on an extensive
theoretical and empirical analysis, we argue that properly field-normalized
results cannot be obtained when full counting is used. Fractional counting does
provide results that are properly field normalized. We therefore recommend the
use of fractional counting in bibliometric studies that require field
normalization, especially in studies at the level of countries and research
organizations. We also compare different variants of fractional counting. In
general, it seems best to use either the author-level or the address-level
variant of fractional counting.
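The contrast between full and author-level fractional counting can be sketched as follows (function and author names are illustrative, not from the paper):

```python
from collections import defaultdict

def counting_credits(publications, fractional=True):
    """publications: list of author lists. Under full counting every
    author receives credit 1 per paper; under author-level fractional
    counting each of the k co-authors receives 1/k."""
    credit = defaultdict(float)
    for authors in publications:
        share = 1.0 / len(authors) if fractional else 1.0
        for author in authors:
            credit[author] += share
    return dict(credit)

pubs = [["A", "B"], ["A", "B", "C", "D"], ["A"]]
print(counting_credits(pubs, fractional=True))
# A: 0.5 + 0.25 + 1 = 1.75; B: 0.75; C: 0.25; D: 0.25
print(counting_credits(pubs, fractional=False))
# A: 3, B: 2, C: 1, D: 1
```

Note that fractional credits always sum to the number of publications (here 3), whereas full-counting credits sum to the number of authorships (here 7). This conservation property is what allows properly field-normalized aggregates under fractional counting.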
Measuring What’s Valued Or Valuing What’s Measured? Knowledge Production and the Research Assessment Exercise
Power is everywhere. But what is it and how does it infuse personal and institutional relationships in higher education? Power, Knowledge and the Academy: The Institutional is Political takes a close-up and critical look at both the elusive and blatant workings and consequences of power in a range of everyday sites in universities. Authors work with multi-layered conceptions of power to disturb the idea of the academy as a haven of detached reason and instead reveal the ways in which power shapes personal and institutional relationships, the production of knowledge and the construction of academic careers. Chapters focus on, among other areas, student-supervisor relationships, personal PhD journeys, power in research teams, networking, the Research Assessment Exercise in the UK, and the power to construct knowledge in literature reviews.
This chapter does not address which mechanism of research assessment provides a more truthful account of the value of a set of ‘research outputs’. Instead, it focuses on the power of any such mechanism to reinforce particular values and to inscribe hierarchies regarding knowledge. Regardless of what replaces it, the UK's RAE will have been productive, not just reflective of academic values. Some of the negative consequences of the RAE for UK academic life are considered, focusing on the operation of power through processes of knowledge production.
Quantifying Success in Science: An Overview
Quantifying success in science plays a key role in guiding funding
allocations, recruitment decisions, and rewards. Recently, significant
progress has been made towards quantifying success in science, but the lack
of a detailed analysis and summary remains a practical issue. The literature
reports the factors influencing scholarly impact, along with evaluation
methods and indices aimed at overcoming this crucial weakness. We focus on
categorizing and reviewing current developments in evaluation indices of
scholarly impact, including paper impact, scholar impact, and journal
impact. In addition, we summarize the issues of existing evaluation methods
and indices, investigate the open issues and challenges, and provide
possible solutions, covering the pattern of collaboration impact, unified
evaluation standards, implicit success factor mining, dynamic academic
network embedding, and scholarly impact inflation. This paper should help
researchers obtain a broader understanding of quantifying success in science
and identify some potential research directions.
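Among the journal-impact indices such overviews survey, the classic two-year journal impact factor is the simplest. A minimal sketch with hypothetical figures (the function name and numbers are assumptions for illustration only):

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year journal impact factor: citations received in year Y to
    items the journal published in years Y-1 and Y-2, divided by the
    number of citable items it published in those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 320 citations in 2015 to its 2013-2014 papers,
# of which there were 160 citable items.
print(impact_factor(320, 160))  # → 2.0
```

The "scholarly impact inflation" issue mentioned above arises precisely because such ratios drift upward over time as overall citation volumes grow, making raw index values hard to compare across years.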