
    Research assessment in the humanities: problems and challenges

    Research assessment is set to play a new role in the governance of universities and research institutions. The evaluation of results is evolving from a simple tool for resource allocation into an instrument of policy design. In this respect, "measuring" implies a different approach to quantitative aspects, as well as the estimation of qualitative criteria that are difficult to define. Bibliometrics became popular, in spite of its limits, precisely because it offers a simple solution to complex problems. The theory behind it is not especially robust, but the available results confirm the method as a reasonable trade-off between costs and benefits. Still, there are fields of science where quantitative indicators are very difficult to apply owing to the lack of databases and data, in short, to the limited credibility of the existing information. The humanities and social sciences (HSS) need a coherent methodology for assessing research outputs, but current projects are not very convincing. Creating a shared ranking of journals by the value of their contents, whether at institutional, national, or European level, is not enough: it reproduces the same biases as in the hard sciences, and it does not solve the problem of the varied types of outputs and their different, much longer times of creation and dissemination. The web (and web 2.0) represents a revolution in the communication of research results, especially in the HSS, and their evaluation has to take this change into account. Furthermore, the growth of open access initiatives (the green and gold roads) offers a large quantity of transparent, verifiable data structured according to international standards, which allows comparability beyond national borders and, above all, is independent of commercial agents. The pilot scheme carried out at the University of Milan for the Faculty of Humanities demonstrated that it is possible to build quantitative, on average more robust, indicators that can serve as a proxy of research production and productivity even in the HSS.

    The Open Research Web: A Preview of the Optimal and the Inevitable

    The multiple online research impact metrics we are developing will allow the rich new database, the Research Web, to be navigated, analyzed, mined and evaluated in powerful new ways that were not even conceivable in the paper era, nor even in the online era, until the database and the tools became openly accessible for online use by all: by researchers, research institutions, research funders, teachers, students, and even the general public that funds the research and for whose benefit it is being conducted. Which research is being used most? By whom? Which research is growing most quickly? In what direction? Under whose influence? Which research shows immediate, short-term usefulness, which shows delayed, longer-term usefulness, and which has sustained, long-lasting impact? Which research and researchers are the most authoritative? Whose research most uses this authoritative research, and whose research is the authoritative research using? Which are the best pointers ("hubs") to the authoritative research? Is there any way to predict which research will have later citation impact (based on its earlier download impact), so that junior researchers can be given resources before their work has had a chance to make itself felt through citations? Can research trends and directions be predicted from the online database? Can text content be used to find and compare related research for influence, overlap, and direction? Can a layperson, unfamiliar with the specialized content of a field, be guided to the most relevant and important work? These are just a sample of the new online-age questions that the Open Research Web will begin to answer.
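
    The questions about "hubs" and authoritative work echo link-analysis measures such as Kleinberg's HITS algorithm, which the abstract's vocabulary suggests without spelling out. Below is a minimal sketch, assuming a toy citation graph with invented paper identifiers (nothing here is from the paper itself), of computing hub and authority scores with networkx:

```python
import networkx as nx

# Directed edge u -> v means "paper u cites paper v"; identifiers are invented.
citations = [
    ("review_A", "method_X"), ("review_A", "method_Y"),
    ("review_B", "method_X"), ("review_B", "dataset_Z"),
    ("followup_C", "method_X"),
]
G = nx.DiGraph(citations)

# HITS: good hubs point to good authorities, and good authorities are pointed
# to by good hubs; both scores are computed by iterating to a fixed point.
hubs, authorities = nx.hits(G, max_iter=500, normalized=True)

print("top authority:", max(authorities, key=authorities.get))
print("top hub:      ", max(hubs, key=hubs.get))
```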

    Development of Computer Science Disciplines - A Social Network Analysis Approach

    In contrast to many other scientific disciplines, computer science attaches considerable weight to conference publications. Conferences have the advantage of providing fast publication and of bringing researchers together to present and discuss their work with peers. Previous work on knowledge mapping focused on mapping all of science, or a particular domain, based on the ISI-published JCR (Journal Citation Report). Although these data cover most important journals, they lack computer science conference and workshop proceedings, which results in an imprecise and incomplete analysis of computer science knowledge. This paper presents an analysis of the computer science knowledge network constructed from all types of publications, aiming to provide a complete view of computer science research. Based on the combination of two important digital libraries (DBLP and CiteSeerX), we study the knowledge network created at the journal/conference level using citation linkage in order to identify the development of sub-disciplines. We investigate the collaborative and citation behavior of journals and conferences by analyzing the properties of their co-authorship and citation subgraphs. The paper draws several important conclusions. First, conferences constitute social structures that shape computer science knowledge. Second, computer science is becoming more interdisciplinary. Third, experts are the key success factor for the sustainability of journals and conferences.
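
    To make the venue-level construction concrete, here is a minimal sketch, with invented paper identifiers and venues standing in for actual DBLP/CiteSeerX records, of aggregating paper-to-paper citations into a weighted journal/conference citation graph whose structural properties can then be analyzed:

```python
import networkx as nx

# Paper-level inputs with placeholder identifiers and venues (the real study
# merges DBLP and CiteSeerX records; these names are illustrative only).
paper_venue = {"p1": "VLDB", "p2": "SIGMOD", "p3": "KDD", "p4": "VLDB"}
paper_citations = [("p1", "p2"), ("p3", "p1"), ("p4", "p2"), ("p3", "p4")]

# Aggregate paper-to-paper citations up to the venue level, weighting each
# venue pair by how many citations flow between them.
venue_graph = nx.DiGraph()
for citing, cited in paper_citations:
    u, v = paper_venue[citing], paper_venue[cited]
    if u == v:
        continue  # skip within-venue citations to focus on cross-venue links
    weight = venue_graph.get_edge_data(u, v, default={"weight": 0})["weight"]
    venue_graph.add_edge(u, v, weight=weight + 1)

# Simple structural properties of the venue-level knowledge network.
print("density:", nx.density(venue_graph))
print("weighted degrees:", dict(venue_graph.degree(weight="weight")))
```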

    How journal rankings can suppress interdisciplinary research. A comparison between Innovation Studies and Business & Management

    This study provides quantitative evidence on how the use of journal rankings can disadvantage interdisciplinary research in research evaluations. Using publication and citation data, it compares the degree of interdisciplinarity and the research performance of a number of Innovation Studies units with those of leading Business & Management schools in the UK. On the basis of various mappings and metrics, the study shows that: (i) Innovation Studies units are consistently more interdisciplinary in their research than Business & Management schools; (ii) the top journals in the Association of Business Schools' rankings span a less diverse set of disciplines than lower-ranked journals; and (iii) this results in a more favourable assessment of the performance of Business & Management schools, which are more disciplinary-focused. This citation-based analysis challenges the journal ranking-based assessment. In short, the investigation illustrates how ostensibly 'excellence-based' journal rankings exhibit a systematic bias in favour of mono-disciplinary research. The paper concludes with a discussion of the implications of these phenomena, in particular how the bias is likely to negatively affect the evaluation, and the associated financial resourcing, of interdisciplinary research organisations, and may lead researchers to become more compliant with disciplinary authority over time.
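
    A widely used way to quantify the "degree of interdisciplinarity" in this literature is the Rao-Stirling diversity index; the paper's exact metrics may differ, so the sketch below is illustrative only, with invented reference shares and discipline distances:

```python
# Rao-Stirling diversity: D = sum over discipline pairs (i, j) of
# p_i * p_j * d_ij, where p_i is the share of a unit's references falling in
# discipline i and d_ij is a dissimilarity between disciplines i and j.
from itertools import combinations

# Share of a hypothetical unit's references by discipline (invented numbers).
ref_shares = {"economics": 0.5, "sociology": 0.3, "engineering": 0.2}

# Pairwise cognitive distances between disciplines (1 = unrelated), invented.
distance = {
    frozenset(("economics", "sociology")): 0.4,
    frozenset(("economics", "engineering")): 0.8,
    frozenset(("sociology", "engineering")): 0.9,
}

def rao_stirling(shares, dist):
    """Higher values mean citations spread over more, and more distant, fields."""
    return sum(
        2 * shares[i] * shares[j] * dist[frozenset((i, j))]
        for i, j in combinations(shares, 2)
    )

print(round(rao_stirling(ref_shares, distance), 3))  # 0.388 for these inputs
```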

    Crossing the hurdle: the determinants of individual scientific performance

    An original cross-sectional dataset from a medium-sized Italian university is used to analyze the determinants of scientific research production at the individual level. The dataset covers 942 permanent researchers across various scientific sectors over a three-year span (2008-2010). Three different indicators, based on the number of publications or citations, are considered as response variables. The corresponding distributions are highly skewed and display an excess of zero-valued observations. In this setting, the goodness of fit of several Poisson mixture regression models is explored using an extensive set of explanatory variables. As to the personal observable characteristics of the researchers, the results emphasize the age effect and the gender productivity gap previously documented in existing studies. Analogously, the analysis confirms that productivity is strongly affected by the publication and citation practices adopted in different scientific disciplines. The empirical evidence on the connection between teaching and research activities suggests that no univocal substitution or complementarity thesis can be claimed: a heavier teaching load neither affects the odds of being a non-active researcher nor significantly reduces the number of publications of active researchers. In addition, new evidence emerges on the effect of researchers' administrative tasks, which appear to be negatively related to productivity, and on the composition of departments: researchers' productivity is apparently enhanced by working in departments with more administrative and technical staff, and it is not significantly affected by the department's mix of senior and junior researchers.
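
    The Poisson mixture regressions mentioned here commonly include the zero-inflated Poisson model, which pairs a logistic "always zero" process with a Poisson count process. A minimal sketch, not the authors' code, fitting such a model to simulated data with statsmodels; every covariate and coefficient below is invented:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(0)
n = 942  # matches the sample size reported in the abstract

# Invented covariates standing in for the study's explanatory variables.
age = rng.uniform(30, 65, n)
teaching_load = rng.uniform(0, 200, n)
X = sm.add_constant(np.column_stack([age, teaching_load]))

# Simulate counts: some researchers are "non-active" (structural zeros),
# the rest publish according to a Poisson process whose rate varies with age.
inactive = rng.random(n) < 0.3
lam = np.exp(0.5 + 0.01 * (age - 45))
pubs = np.where(inactive, 0, rng.poisson(lam))

# Fit a zero-inflated Poisson: a logit model for the excess zeros plus a
# Poisson regression for the counts, both using the same covariates here.
model = ZeroInflatedPoisson(pubs, X, exog_infl=X, inflation="logit")
result = model.fit(maxiter=200, disp=False)
print(result.summary())
```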

    “Excellence R Us”: university research and the fetishisation of excellence

    The rhetoric of "excellence" is pervasive across the academy. It is used to refer to research outputs as well as researchers, to theory and education, to individuals and organisations, from art history to zoology. But does "excellence" actually mean anything? Does this pervasive narrative of "excellence" do any good? Drawing on a range of sources, we interrogate "excellence" as a concept and find that it has no intrinsic meaning in academia. Rather, it functions as a linguistic interchange mechanism. To investigate whether this linguistic function is useful, we examine how the rhetoric of excellence combines with narratives of scarcity and competition, and we show that the hypercompetition that arises from the performance of "excellence" is completely at odds with the qualities of good research. We trace the roots of issues in reproducibility, fraud, and homophily to this rhetoric. But we also show that this rhetoric is an internal, and not primarily an external, imposition. We conclude by proposing an alternative rhetoric based on soundness and capacity-building. In the final analysis, it turns out that "excellence" is not excellent. Used in its current unqualified form, it is a pernicious and dangerous rhetoric that undermines the very foundations of good research and scholarship.

    How to Create an Innovation Accelerator

    Too many policy failures are fundamentally failures of knowledge. This has become particularly apparent during the recent financial and economic crisis, which is calling into question the validity of mainstream scholarly paradigms. We propose to pursue a multi-disciplinary approach and to establish new institutional settings that remove or reduce the obstacles impeding efficient knowledge creation. We provide suggestions on (i) how to modernize and improve the academic publication system, and (ii) how to support scientific coordination, communication, and co-creation in large-scale multi-disciplinary projects. Both constitute important elements of what we envision to be a novel ICT infrastructure called the "Innovation Accelerator" or "Knowledge Accelerator".