Statistical Significance and Effect Sizes of Differences among Research Universities at the Level of Nations and Worldwide based on the Leiden Rankings
The Leiden Rankings can be used for grouping research universities by considering universities which are not significantly different as a homogeneous set. The groups and intergroup relations can be analyzed and visualized using tools from network analysis. Using the so-called "excellence indicator" PPtop-10% (the proportion of the top-10% most-highly-cited papers assigned to a university), we pursue a classification using (i) overlapping stability intervals, (ii) statistical-significance tests, and (iii) effect sizes of differences among 902 universities in 54 countries; we focus on the UK, Germany, Brazil, and the USA as national examples. Although the groupings remain largely the same using different statistical-significance levels or overlapping stability intervals, the resulting classifications are uncorrelated with those based on effect sizes. Effect sizes for the differences between universities are small (w < .2). The more detailed analysis of universities at the country level suggests that distinctions beyond three or perhaps four groups of universities (high, middle, low) may not be meaningful. Given similar institutional incentives, isomorphism within each eco-system of universities should not be underestimated. For practical purposes, our results suggest that networks based on overlapping stability intervals can provide a first impression of the relevant groupings among universities.
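The effect size reported above (w < .2) is Cohen's w, which can be derived from a chi-square statistic on the counts behind two universities' PPtop-10% shares. A minimal sketch, with hypothetical counts that are not taken from the paper:

```python
import math

def effect_size_w(top1, total1, top2, total2):
    """Cohen's w for the difference between two universities' PPtop-10%
    shares, computed from a 2x2 contingency table (top-10% vs. rest)."""
    obs = [[top1, total1 - top1], [top2, total2 - top2]]
    n = total1 + total2
    row = [total1, total2]
    col = [top1 + top2, n - (top1 + top2)]
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            exp = row[i] * col[j] / n  # expected count under independence
            chi2 += (obs[i][j] - exp) ** 2 / exp
    return math.sqrt(chi2 / n)  # Cohen's w = sqrt(chi2 / N)

# Hypothetical: 240 of 2,000 vs. 180 of 2,000 papers in the top 10%
w = effect_size_w(240, 2000, 180, 2000)
```

Even a visible gap in shares (12% vs. 9%) yields w of about 0.05 here, illustrating how differences that are statistically significant at these sample sizes can still be small by effect-size conventions.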
Factors Influencing Cities' Publishing Efficiency
Recently, a vast number of scientific publications have been produced in
cities in emerging countries. It has long been observed that the publication
output of Beijing has exceeded that of any other city in the world, including
such leading centres of science as Boston, New York, London, Paris, and Tokyo.
Researchers have suggested that, instead of focusing on cities' total
publication output, the quality of the output in terms of the number of highly
cited papers should be examined. However, in the period from 2014 to 2016,
Beijing produced as many highly cited papers as Boston, London, or New York. In
this paper, I propose another method to measure cities' publishing performance;
I focus on cities' publishing efficiency (i.e., the ratio of highly cited
articles to all articles produced in that city). First, I rank 554 cities based
on their publishing efficiency, then I reveal some general factors influencing
cities' publishing efficiency. The general factors examined in this paper are
as follows: the linguistic environment, cities' economic development level, the
location of excellent organisations, cities' international collaboration
patterns, and the productivity of scientific disciplines.
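The efficiency measure defined above is a simple ratio, which makes the ranking step straightforward. A minimal sketch, using hypothetical city figures rather than the paper's data:

```python
def publishing_efficiency(highly_cited, total):
    """Publishing efficiency: share of a city's articles that are highly cited."""
    return highly_cited / total

# Hypothetical (highly cited, total) article counts for two cities
cities = {"City A": (950, 12000), "City B": (400, 3200)}

# Rank cities by efficiency, highest first
ranked = sorted(cities, key=lambda c: publishing_efficiency(*cities[c]), reverse=True)
```

Note that the smaller producer (City B) ranks first here: efficiency deliberately rewards the share of highly cited work, not total output.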
A heuristic approach based on Leiden rankings to identify outliers: evidence from Italian universities in the European landscape
We propose an innovative use of the Leiden Rankings (LR) in institutional management. Although LR only consider the research output of major universities reported in Web of Science (WOS) and share the limitations of other existing rankings, we show that they can be used as the basis of a heuristic approach to identify "outlying" institutions that perform significantly below or above expectations. Our approach is a non-rigorous intuitive method ("heuristic") because it is affected by all the biases due to the technical choices and incompleteness that affect the LR, but it offers the possibility to discover interesting findings to be systematically verified later. We propose to use LR as a departure point on which to apply statistical analysis and network mapping to identify "outlier" institutions to be analyzed in detail as case studies. Outliers can inform and guide science policies about alternative options. Analyzing the publications of the Politecnico di Bari in more detail, we observe that "small teams" led by young and promising scholars can push the performance of a university up to the top of the LR. As argued by Moed (Applied evaluative informetrics. Springer International Publishing, Berlin, 2017a), supporting "emerging teams" can provide an alternative to research-support policies adopted to encourage virtuous behaviours and best practices in research. The results obtained by this heuristic approach need further verification and systematic analysis but may stimulate further studies and insights on the topics of university rankings policy, institutional management, dynamics of teams, good research practice and alternative funding methods.
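The paper's actual screen combines LR stability intervals with network mapping; purely as an illustrative stand-in, a simple z-score check can flag institutions performing far above or below their national group. All names, scores, and the threshold below are hypothetical:

```python
from statistics import mean, stdev

def flag_outliers(pp_top10, z_thresh=2.0):
    """Flag institutions whose PPtop-10% lies more than z_thresh sample
    standard deviations from the group mean. A crude stand-in for the
    stability-interval approach, for illustration only."""
    mu = mean(pp_top10.values())
    sigma = stdev(pp_top10.values())
    return {name: (v - mu) / sigma            # z-score of each flagged case
            for name, v in pp_top10.items()
            if abs(v - mu) > z_thresh * sigma}

# Hypothetical national set of PPtop-10% scores; U11 performs far above peers
scores = {"U1": 0.08, "U2": 0.09, "U3": 0.09, "U4": 0.10, "U5": 0.10,
          "U6": 0.10, "U7": 0.11, "U8": 0.11, "U9": 0.12, "U10": 0.12,
          "U11": 0.30}
outliers = flag_outliers(scores)
```

Flagged cases would then be examined qualitatively as case studies, as the abstract proposes, rather than treated as a verdict in themselves.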
The influence of higher education ranking systems: an institutional leadership perspective
Abstract: Competition between universities has intensified with the rise and expansion of Higher Education Ranking Systems (HERS). Many researchers agree that HERS, and the publication of annual rankings, have influenced all participating institutions to some extent (Espeland & Sauder, 2015; Hazelkorn & Ryan, 2013; Rauvargers, 2013). This study was designed to investigate these influences as perceived by institutional leaders. The objectives of the study were to identify the various influences HERS exert on universities, and to compare the extent to which institutional leaders from South Africa, South East Asia, Australia and the Arab Gulf experience these influences. The literature review includes discussions on the flow of international higher education, global phenomena such as internationalisation, marketisation and an increased demand for higher education, and how these contributed to the development of HERS. The literature review contains an in-depth analysis of the big-three rankings (QS WUR, THE WUR and the Shanghai Ranking ARWU), and a discussion on the economic, cultural and political push and pull of the global knowledge economy. To identify and compare the influences of HERS on universities, the researcher employed a sequential mixed-methods study design, opting to conduct a qualitative exploration prior to a quantitative examination. The qualitative phase involved interviews with 25 institutional leaders to identify the numerous ranking-related influences on universities. The researcher employed two cycles of emergent coding to uncover the themes and categories within the interviews. In the second phase of the study, the themes and categories informed the development of a 65-item questionnaire to test the emergent aspects on a wider audience (86 international respondents). The questionnaire results confirmed the majority of the items underpinning the themes and categories.
The third phase employed a mixture of quantitative and qualitative information to compare experiences of institutional leaders in South Africa, the Arabian Gulf, Australia and South East Asia. The outcomes were presented in four exemplar case studies, featuring the results of nonparametric statistical analyses (Kruskal-Wallis and Dunn-Bonferroni), region-specific comments and contextual literature... Ph.D. (Education Leadership and Management)
White Paper: Measuring Research Outputs Through Bibliometrics
The suggested citation for this white paper is:
University of Waterloo Working Group on Bibliometrics, Winter 2016. White Paper: Measuring Research Outputs through Bibliometrics, Waterloo, Ontario: University of Waterloo.

This White Paper provides a high-level review of issues relevant to understanding bibliometrics, and practical recommendations for how to use these measures appropriately. This is not a policy paper; instead, it defines and summarizes evidence that addresses the appropriate use of bibliometric analysis at the University of Waterloo. The issues identified and recommendations offered will generally apply to other academic institutions as well. Understanding the types of bibliometric measures and their limitations makes it possible to identify both appropriate uses and crucial limitations of bibliometric analysis. Recommendations offered at the end of this paper provide a range of opportunities for how researchers and administrators at Waterloo and beyond can integrate bibliometric analysis into their practice.
Congress UPV Proceedings of the 21st International Conference on Science and Technology Indicators
This is the book of proceedings of the 21st Science and Technology Indicators Conference that took place
in València (Spain) from 14th to 16th of September 2016.
The conference theme for this year, "Peripheries, frontiers and beyond", aimed to study the development and
use of Science, Technology and Innovation indicators in spaces that have not been the focus of current indicator
development, for example, in the Global South, or the Social Sciences and Humanities.
The exploration of the margins and beyond proposed by the theme has brought to the STI Conference an
interesting array of new contributors from a variety of fields and geographies.
This year's conference had a record 382 registered participants from 40 different countries: 23
European, 9 American, 4 Asia-Pacific, and 4 from Africa and the Near East. About 26% of participants came from outside
of Europe.
There were also many participants (17%) from organisations outside academia including governments (8%),
businesses (5%), foundations (2%) and international organisations (2%). This is particularly important in a
field that is practice-oriented.
The chapters of the proceedings attest to the breadth of issues discussed: infrastructure, benchmarking
and use of innovation indicators, societal impact and mission-oriented research, mobility and careers, social
sciences and the humanities, participation and culture, gender, and altmetrics, among others.
We hope that the diversity of this Conference has fostered productive dialogues and synergistic ideas and
made a contribution, small as it may be, to the development and use of indicators that, being more inclusive,
will foster a fairer and more inclusive world.