2,713 research outputs found

    UCL (University College London) Libraries Masterplan: Library Report to Estates Management Committee January 2008

    This document is a report from UCL Library Services to UCL on master-planning activities and outputs undertaken to quantify the use and development of the estate occupied by UCL Library Services. Prioritised options have been identified for the UCL Main and Science Libraries, and for a new central-site option. The work has also addressed UCL's need for long-term offsite storage, concluding that UCL should retain its facility at Wickford for at least the next ten years.

    Improving quality assessment of composite indicators in university rankings: a case study of French and German universities of excellence

    Composite indicators play an essential role in benchmarking higher education institutions. One of the main sources of uncertainty in building composite indicators, and undoubtedly the most debated problem, is the choice of weighting scheme (assigning weights to the simple indicators or sub-indicators) together with the aggregation scheme (the final composite-indicator formula). Except in the ideal situation where weights are provided by theory, there is a clear need to improve the quality assessment of the final rank associated with a fixed vector of weights. We propose using simulation techniques to generate random perturbations around any initial vector of weights, yielding robust and reliable ranks that place each university within a range bracket. The proposed methodology is general enough to be applied regardless of the weighting scheme used for the composite indicator. The immediate benefit is a reduction in the uncertainty associated with assigning a specific rank that is not representative of a university's real performance, and an improvement in the quality assessment of the composite indicators used for ranking. To illustrate the proposed methodology, we rank the French and German universities involved in their respective 2008 Excellence Initiatives.
    Keywords: composite indicators, rankings, benchmarking, higher education institutions, weighting schemes, simulation techniques
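    The perturbation idea described in this abstract can be sketched in a few lines: start from a fixed weight vector, repeatedly add random noise, renormalise, and record the best and worst rank each institution attains. The university names, indicator values, and noise model below are purely illustrative assumptions, not data from the study.

    ```python
    import random

    # Hypothetical sub-indicator scores per university (illustrative only).
    indicators = {
        "Uni A": [0.90, 0.60, 0.75],
        "Uni B": [0.70, 0.85, 0.80],
        "Uni C": [0.65, 0.70, 0.95],
    }
    base_weights = [0.5, 0.3, 0.2]  # some fixed initial weighting scheme

    def ranks(weights):
        """Rank universities (1 = best) by weighted composite score."""
        scores = {u: sum(w * x for w, x in zip(weights, xs))
                  for u, xs in indicators.items()}
        ordered = sorted(scores, key=scores.get, reverse=True)
        return {u: i + 1 for i, u in enumerate(ordered)}

    def rank_intervals(n_sim=2000, noise=0.2, seed=42):
        """Perturb the weight vector repeatedly; return each
        university's (best, worst) rank across the simulations."""
        rng = random.Random(seed)
        lo = {u: len(indicators) for u in indicators}
        hi = {u: 1 for u in indicators}
        for _ in range(n_sim):
            # Multiplicative uniform noise, clipped to stay positive,
            # then renormalised so the weights sum to one.
            perturbed = [max(w + rng.uniform(-noise, noise) * w, 1e-9)
                         for w in base_weights]
            total = sum(perturbed)
            for u, k in ranks([w / total for w in perturbed]).items():
                lo[u] = min(lo[u], k)
                hi[u] = max(hi[u], k)
        return {u: (lo[u], hi[u]) for u in indicators}

    intervals = rank_intervals()
    ```

    A university whose interval is tight (e.g. (1, 1)) ranks robustly regardless of the exact weights; a wide interval signals that its published rank is an artefact of the chosen weighting scheme.
    
    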

    Greater Philadelphia's Knowledge Industry: Leveraging the Region's Colleges and Universities in the New Economy

    This report documents Greater Philadelphia's current standing as a knowledge region, compares its performance on a series of key indicators with that of the largest American knowledge regions, identifies activities being undertaken around the country, and offers a set of strategic recommendations for better linking the region's knowledge assets to economic development.

    Is one business and management school better than another? A clustering perspective across UK national HE league tables

    Higher education (HE) league table rankings have been widely adopted and used by stakeholders such as students and HE institution managers. Nevertheless, researchers have raised criticisms. The present study proposes that, rather than using league-table indicators to produce a single, standardised ranking list, the indicators of the three existing UK national league tables could be used to form clusters based on the homogeneity of the characteristics of each business and management school. Six groups of business and management schools were extracted and characterised. This approach removes the notion that one business and management school is simply better than another (a single ranking). The findings offer stakeholders a clearer view of business and management schools by identifying the groups that best represent each UK school's focus.
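    The clustering approach the abstract describes (grouping schools by the homogeneity of their indicator profiles rather than collapsing the indicators into one rank) can be illustrated with a plain k-means pass over indicator vectors. The school names, indicator values, and choice of k-means with k=3 are illustrative assumptions; the study's own clustering method and data are not reproduced here.

    ```python
    import math
    import random

    # Hypothetical league-table indicators per school, scaled to [0, 1]
    # (e.g. entry standards, satisfaction, research); illustrative only.
    schools = {
        "School A": [0.90, 0.70, 0.80], "School B": [0.85, 0.75, 0.90],
        "School C": [0.40, 0.80, 0.30], "School D": [0.45, 0.85, 0.35],
        "School E": [0.20, 0.30, 0.20], "School F": [0.25, 0.35, 0.15],
    }

    def kmeans(points, k, n_iter=50, seed=0):
        """Minimal k-means: returns one cluster label per point."""
        rng = random.Random(seed)
        centroids = rng.sample(points, k)  # initialise from the data
        labels = [0] * len(points)
        for _ in range(n_iter):
            # Assign each point to its nearest centroid.
            labels = [min(range(k), key=lambda c: math.dist(p, centroids[c]))
                      for p in points]
            # Move each centroid to the mean of its members.
            for c in range(k):
                members = [p for p, lab in zip(points, labels) if lab == c]
                if members:
                    centroids[c] = [sum(xs) / len(members)
                                    for xs in zip(*members)]
        return labels

    names = list(schools)
    labels = kmeans([schools[n] for n in names], k=3)
    clusters = {}
    for name, lab in zip(names, labels):
        clusters.setdefault(lab, []).append(name)
    ```

    The output is a partition of schools into groups of similar profiles; unlike a league table, it makes no claim that one group outranks another.
    
    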

    The metric tide: report of the independent review of the role of metrics in research assessment and management

    This report presents the findings and recommendations of the Independent Review of the Role of Metrics in Research Assessment and Management. The review was chaired by Professor James Wilsdon, supported by an independent and multidisciplinary group of experts in scientometrics, research funding, research policy, publishing, university management and administration. This review has gone beyond earlier studies to take a deeper look at potential uses and limitations of research metrics and indicators. It has explored the use of metrics across different disciplines, and assessed their potential contribution to the development of research excellence and impact. It has analysed their role in processes of research assessment, including the next cycle of the Research Excellence Framework (REF). It has considered the changing ways in which universities are using quantitative indicators in their management systems, and the growing power of league tables and rankings. And it has considered the negative or unintended effects of metrics on various aspects of research culture. The report starts by tracing the history of metrics in research management and assessment, in the UK and internationally. It looks at the applicability of metrics within different research cultures, compares the peer review system with metric-based alternatives, and considers what balance might be struck between the two. It charts the development of research management systems within institutions, and examines the effects of the growing use of quantitative indicators on different aspects of research culture, including performance management, equality, diversity, interdisciplinarity, and the ‘gaming’ of assessment systems. The review looks at how different funders are using quantitative indicators, and considers their potential role in research and innovation policy. Finally, it examines the role that metrics played in REF2014, and outlines scenarios for their contribution to future exercises.

    Defining typologies of universities through a DEA-MDS analysis: An institutional characterization for formative evaluation purposes

    Universities are organizational structures with individual activity mixes, or strategies, that lead to different performance levels by mission. Evaluation techniques based on performance indicators or rankings risk rewarding just one specific type of university and undermining university diversification: they usually introduce homogenizing pressures and risk displacing university objectives, neglecting their socio-economic contribution in favour of succeeding in the evaluation system. In this study, we propose an alternative evaluation method that overcomes these limitations. We produce a multidimensional descriptive classification of universities into typologies, while analysing the relation between their institutional factors (characteristics) and their (technical) efficiency performance from a descriptive perspective. To do so we apply bootstrap data envelopment analysis (DEA) and multidimensional scaling (MDS), performing a so-called DEA-MDS analysis on data on the Spanish university system, and, unlike previous studies, we include an important dimension of the third mission of universities (specifically knowledge transfer, KT) in their characterization. We identify six types of (homogeneous) universities. The results indicate that, to be fairly efficient, universities may focus on teaching, KT, or overall efficiency, but always have to perform reasonably well in research. Additionally, the results confirm the relevance of the third mission as a source of institutional diversity in higher education. This approach could serve as an alternative evaluation methodology for higher education institutions for formative purposes, evaluating universities according to their unique characteristics to improve HE systems.
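    To give a feel for the efficiency scores DEA produces, here is the degenerate one-input, one-output case, where CCR efficiency reduces to each unit's output/input ratio divided by the best ratio in the sample. The study itself uses bootstrap DEA over multiple inputs and outputs (solved as linear programs), which is substantially more involved; the university names and figures below are illustrative assumptions.

    ```python
    # Hypothetical universities with one aggregate input (staff FTE) and
    # one aggregate output (publications); figures are illustrative only.
    units = {
        "Uni A": (100.0, 300.0),   # (input, output)
        "Uni B": (80.0, 200.0),
        "Uni C": (120.0, 480.0),
    }

    def ccr_efficiency(units):
        """Single-input, single-output CCR efficiency: each unit's
        output/input ratio divided by the best ratio observed."""
        ratios = {u: y / x for u, (x, y) in units.items()}
        best = max(ratios.values())
        return {u: r / best for u, r in ratios.items()}

    eff = ccr_efficiency(units)
    # Uni C has the best ratio (480/120 = 4.0), so its score is 1.0;
    # the others are scored relative to that frontier.
    ```

    A score of 1.0 marks a unit on the efficient frontier; lower scores measure how far a unit falls short of the best observed input-to-output conversion.
    
    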
