
    Technological research in the EU is less efficient than the world average. EU research policy risks Europeans' future

    We studied the efficiency of research in the EU with a percentile-based citation approach that analyzes the distribution of a country's papers among world papers. Going up the citation scale, the frequency of papers from efficient countries increases while the frequency from inefficient countries decreases. In the percentile-based approach, this trend, which holds at every citation level, is measured by the ep index, defined as the Ptop 1%/Ptop 10% ratio. Using the ep index, we demonstrate that EU research on fast-evolving technological topics is less efficient than the world average and that the EU is far from being able to compete with the most advanced countries. The ep index also shows that the USA is well ahead of the EU in both fast- and slow-evolving technologies, which suggests that the innovation advantage of the USA over the EU is due to low research efficiency in the EU. In accord with some previous studies, our results show that the European Commission's ongoing claims about the excellence of EU research rest on a wrong diagnosis. The EU must focus its research policy on improving its inefficient research; otherwise, the future of Europeans is at risk.

    Comment: 30 pages, 3 figures, 7 tables, in one single file. Version accepted in Journal of Informetrics
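    The ep index described above is simply the ratio of a set's papers in the global top 1% to its papers in the global top 10%. A minimal sketch, with hypothetical citation counts and thresholds (the study's actual data and thresholds are not reproduced here):

```python
# Sketch of the ep index: the share of a country's top-10% papers that also
# reach the global top 1%. Thresholds and citation counts are illustrative
# assumptions, not data from the study.

def ep_index(citations, top1_threshold, top10_threshold):
    """citations: citation counts of one country's papers.
    top1_threshold / top10_threshold: global citation counts needed
    to enter the world top 1% / top 10%."""
    p_top1 = sum(1 for c in citations if c >= top1_threshold)
    p_top10 = sum(1 for c in citations if c >= top10_threshold)
    return p_top1 / p_top10 if p_top10 else 0.0

# Hypothetical country: 5 papers in the global top 10%, 1 of them in the top 1%.
citations = [120, 95, 60, 40, 30, 12, 8, 5]
print(ep_index(citations, top1_threshold=100, top10_threshold=30))  # -> 0.2
```

    A more efficient research system keeps a larger fraction of its top-10% papers inside the top 1%, so its ep index is closer to the ideal 0.1 world-average baseline or above it.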

    Common bibliometric approaches fail to assess correctly the number of important scientific advances for most countries and institutions

    Although not explicitly declared, most research rankings of countries and institutions are supposed to reveal their contribution to the advancement of knowledge. However, such advances rest on very highly cited publications of very low frequency, which can only exceptionally be counted with statistical reliability. Percentile indicators enable calculation of the probability or frequency of such rare publications from counts of much more frequent publications; the general rule is that rankings based on the number of top 10% or top 1% cited publications (Ptop 10%, Ptop 1%) will also be valid for the rare publications that push the boundaries of knowledge. Japan and its universities are exceptions, as their frequent Nobel Prizes contradict their low Ptop 10% and Ptop 1%. We explain that this occurs because, in single research fields, the validity of percentile indicators holds only for research groups that are homogeneous in their aims and efficiency. Correct calculations for ranking countries and institutions should add up the results of their homogeneous groups instead of treating all publications as a single set. Although based on Japan, our findings have a general character: common predictions of scientific advances based on Ptop 10% might be severalfold lower than correct calculations.

    Comment: 30 pages, tables and figures embedded in a single pdf file

    Uncertain research country rankings. Should we continue producing uncertain rankings?

    Citation-based country rankings consistently categorize Japan as a developing country, even those from the most reputable institutions. This categorization challenges the credibility of such rankings, given Japan's elevated scientific standing. In most cases, these rankings use percentile indicators and are accurate if country citations fit an ideal model of distribution, but they can be misleading when citations deviate from it. The ideal model implies a lognormal citation distribution and a power-law citation-based double rank in the global and country lists. This report conducts a systematic examination of deviations from the ideal model and their impact on evaluations. The study examines six selected countries across three scientifically relevant topics and utilizes Leiden Ranking assessments of over 300 universities. The findings reveal three types of deviations from the lognormal citation distribution: (i) deviations in the extreme upper tail; (ii) inflated lower tails; and (iii) deflated lower parts of the distributions. These deviations stem from structural differences among research systems; they are prevalent and have the potential to mislead evaluations at all research levels. Consequently, reliable evaluations must account for these deviations. Otherwise, while some countries and institutions will be correctly evaluated, failure to identify deviations in each specific country or institution will render evaluations uncertain. For reliable assessments, future research evaluations of countries and institutions must identify deviations from the ideal model.

    Comment: 29 pages, 6 figures, 5 tables
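    The ideal model's lognormal assumption can be checked numerically: draw citations from a lognormal distribution and compare the observed upper-tail frequency with the lognormal prediction; a country whose real data departed from the prediction would exhibit one of the deviations listed above. All parameters below are illustrative assumptions:

```python
# Minimal check of the ideal (lognormal) citation model: compare the observed
# fraction of papers above a citation threshold with the lognormal survival
# function. mu, sigma, n and the threshold are assumptions for illustration.
import math
import random

random.seed(0)
mu, sigma, n = 1.0, 1.2, 10_000
citations = [random.lognormvariate(mu, sigma) for _ in range(n)]

def lognorm_sf(x, mu, sigma):
    """Survival function P(X > x) of a lognormal distribution."""
    return 0.5 * math.erfc((math.log(x) - mu) / (sigma * math.sqrt(2)))

threshold = 40.0
observed = sum(c > threshold for c in citations) / n
expected = lognorm_sf(threshold, mu, sigma)
print(f"observed tail {observed:.4f} vs lognormal prediction {expected:.4f}")
```

    For synthetic lognormal data the two numbers agree; in the report's terms, a systematic excess or deficit at such thresholds would signal an upper-tail deviation or an inflated/deflated lower part.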

    Like-for-like bibliometric substitutes for peer review: Advantages and limits of indicators calculated from the e(p) index

    The use of bibliometric indicators would simplify research assessments. The 2014 Research Excellence Framework (REF) is a peer-review assessment of UK universities whose results can be taken as benchmarks for bibliometric indicators. In this study, we use the REF results to investigate whether the e(p) index and a top percentile of most-cited papers could substitute for peer review. The probability that a randomly selected paper from a university reaches a certain top percentile of the global distribution of papers is a power of the e(p) index, which can be calculated from the citation-based distribution of the university's papers across global top percentiles. Using the e(p) index of each university and research area, we calculated the ratios between the percentage of 4-star-rated outputs in the REF and the percentages of papers in global top percentiles. We then fixed the assessment percentile so that the mean ratio between these two indicators across universities is 1.0. This method was applied to four units of assessment in the REF: Chemistry, Economics and Econometrics joined to Business and Management Studies, and Physics. Some relevant deviations from the 1.0 ratio could be explained by the REF evaluation procedure or by the characteristics of the research field; other deviations need specific studies by experts in the research area. These results indicate that in many research areas the substitution of a top percentile indicator for peer review is possible. However, this substitution cannot be made straightforwardly; more research is needed to establish the conditions of the bibliometric assessment.
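    The power relation mentioned above can be sketched under a simplifying assumption: if each tenfold narrowing of the top percentile (top 10% to top 1% to top 0.1%, ...) multiplies a university's paper count by a constant factor e(p), rare-tail frequencies can be projected from the much more frequent top-10% count. The numbers below are hypothetical, not taken from the REF study:

```python
# Hedged sketch of percentile extrapolation: assume each tenfold narrowing of
# the top percentile multiplies the paper count by a constant e_p.
# The university figures below are hypothetical.
import math

def project_top_percentile(p_top10, ep, percentile):
    """Project the expected number of a university's papers in a given
    global top percentile, assuming a constant ratio e_p between counts
    in successive tenfold-narrower percentiles."""
    steps = math.log10(10.0 / percentile)  # tenfold steps below the top 10%
    return p_top10 * ep ** steps

# Hypothetical university: 200 papers in the global top 10%, e_p = 0.25.
print(project_top_percentile(200, 0.25, 1))    # top 1%   -> 50.0
print(project_top_percentile(200, 0.25, 0.1))  # top 0.1% -> 12.5
```

    This is why a top-percentile count can stand in for events too rare to count directly: the frequent counts fix e(p), and the power law does the rest, provided the constant-ratio assumption actually holds for the institution.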

    The link between countries' economic and scientific wealth has a complex dependence on technological activity and research policy

    We studied the research performance of 69 countries by considering two different types of new knowledge: incremental (normal) and fundamental (radical). In principle, these two types should be assessed at two very different citation levels, but we demonstrate that a simpler assessment can be based on the total number of papers (P) and the ratio of the number of papers in the global top 10% of most-cited papers to the total number of papers (Ptop 10%/P). P represents quantity, whereas the Ptop 10%/P ratio represents efficiency. In ideal countries, P and the Ptop 10%/P ratio are linked to the gross domestic product (GDP) and GDP per capita, respectively. Only countries with high Ptop 10%/P ratios participate actively in the creation of fundamental new knowledge and have Nobel laureates. In real countries, the link between economic and scientific wealth can be modified by technological activity and research policy. We discuss how technological activity may decrease the Ptop 10%/P ratio while only slightly affecting the capacity to create fundamental new knowledge; in such countries, many papers may report incremental innovations that do not drive the advancement of knowledge. Japan is the clearest example of this, although there are many less extreme examples. Independently of technological activity, research policy has a strong influence on the Ptop 10%/P ratio, which may be higher or lower than expected from GDP per capita depending on the success of the research policy.
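    The quantity/efficiency split above is just two numbers per country: total output P and the share Ptop 10%/P. A minimal sketch with hypothetical figures (not the study's data) shows how a larger producer can still be the less efficient one:

```python
# Quantity vs. efficiency, as described above: P measures output volume,
# Ptop10%/P measures efficiency. All figures are hypothetical.
countries = {
    "A": {"P": 50_000, "P_top10": 7_500},  # smaller but efficient
    "B": {"P": 80_000, "P_top10": 6_400},  # larger but less efficient
}

ratios = {name: d["P_top10"] / d["P"] for name, d in countries.items()}
for name, r in ratios.items():
    print(f"{name}: quantity P={countries[name]['P']}, efficiency Ptop10%/P={r:.2f}")
```

    Country B publishes more papers overall, yet A's higher ratio marks it as the system more likely to contribute fundamental new knowledge, which is the abstract's point about Japan-like technologically active countries.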

    Spatial Mobility in Elite Academic Institutions in Economics: The Case of Spain

    Using a dataset of 3,540 economists working in 2007 in 125 of the best academic centers in 22 countries, this paper presents evidence on spatial mobility patterns in Spain and other countries, conditional on personal, department, and country characteristics. There are productivity and other reasons for designing a scientific policy aimed at attracting foreign talent (brain gain), minimizing the elite brain drain, and recovering nationals who have earned a Ph.D. or have spent some time abroad (brain circulation). Our main result is that Spain has more brain gain, more brain circulation, and less brain drain than comparable large continental European countries, i.e., Germany, France, and Italy, where economists have similar opportunities for publishing their research in English or in their own languages. We suggest that these results can mostly be explained by the governance changes introduced in a number of Spanish institutions in 1975-1990 by a sizable contingent of Spanish economists coming back home after attending graduate school abroad. These initiatives were also favored by the availability of resources to finance certain research-related activities, including international Ph.D. programs.

    This is the fourth version of a Working Paper in this series with the title “Governance, brain drain, and brain gain in elite academic institutions in economics. The case of Spain”, published in December 2017. Carrasco and Ruiz-Castillo acknowledge financial support from the Spanish MEC (Ministerio de Economía y Competitividad) through grants No. ECO2015-65204-P and ECO2014-55953-P, respectively, as well as grants MDM 2014-0431 from the MEC and MadEco-CM (S2015/HUM-3444) from the Comunidad Autónoma de Madrid to their economics department.