
    How to monitor sustainable mobility in cities? Literature review in the frame of creating a set of sustainable mobility indicators

    The role of sustainable mobility and its impact on society and the environment is evident and recognized worldwide. Nevertheless, although there is a growing number of measures and projects that deal with sustainable mobility issues, it is not easy to compare their results and, so far, there is no globally applicable set of tools and indicators that ensures holistic evaluation and facilitates replicability of best practices. In this paper, based on an extensive literature review, we give a systematic overview of relevant and scientifically sound indicators that cover different aspects of sustainable mobility and are applicable in different social and economic contexts around the world. Overall, 22 sustainable mobility indicators have been selected, and an overview of the applied measures described across the reviewed literature is presented.

    Measuring the Information Society in Europe: From Definitions to Description

    Information Society (IS) indicators describe the level of information society development achieved in a particular society in quantitative terms. They can serve a range of purposes related to providing a view of the society’s state: for example, following the evolution of IS or benchmarking IS with other territories. By considering changes over time, IS indicators also comprise a critical tool in the monitoring, evaluation and improvement of IS policy. Inevitably, the primary benefit of indicators lies in this capacity to guide policy-makers into proactive thinking, i.e. to focus their attention on future priorities. The aims of this article are to examine how the evolution of the information society has been measured, and to relate European territories with each other by these measures. Constructing a comprehensive set of IS indicators requires a sound definition of the Information Society to establish meaningful benchmarks and to measure change. The task becomes complicated as it seems that IS is more or less ‘undefined’ at the moment. This means that IS is what one wants it to be: countries held as “information societies” are those countries that people think of as such, not those defined by, for example, achieving a level measured by some quantitative IS-related indicators. Tentative results show that despite this lack of a clear and single definition of Information Society (IS), one can derive some conclusions about what IS consists of by looking at previous IS projects that have collected IS indicators. They indicate three different levels of IS. These levels range from the narrow technological and the intermediate techno-economic definitions to the broad, all-inclusive IS definition. The indicators used to measure IS can also be grouped by a lifecycle model. While there seems to be a lack of available consistent territorial data on IS, there is plenty of data available on the national level. Using this data and background variables, the European state of IS is analyzed from a territorial perspective. This article is an outgrowth of the ESPON project “Identification of Spatially Relevant aspects of the Information Society”.
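    To make the three definition levels concrete, the sketch below tags a handful of example indicators with the level they might fall under. The indicator names and their mapping are illustrative assumptions, not taken from the ESPON project or the article.

    ```python
    # Illustrative grouping of IS indicators into the three definition
    # levels described above (names and mapping are assumptions).
    IS_LEVELS = {
        "technological": ["broadband penetration", "mobile subscriptions"],
        "techno-economic": ["ICT sector share of GDP", "e-commerce turnover"],
        "all-inclusive": ["e-government use", "digital skills of citizens"],
    }

    def level_of(indicator: str) -> str:
        """Return the IS definition level an indicator is assigned to."""
        for level, indicators in IS_LEVELS.items():
            if indicator in indicators:
                return level
        return "unclassified"

    print(level_of("ICT sector share of GDP"))  # -> techno-economic
    ```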

    Applied Evaluative Informetrics: Part 1

    This manuscript is a preprint version of Part 1 (General Introduction and Synopsis) of the book Applied Evaluative Informetrics, to be published by Springer in the summer of 2017. This book presents an introduction to the field of applied evaluative informetrics, and is written for interested scholars and students from all domains of science and scholarship. It sketches the field's history, recent achievements, and its potential and limits. It explains the notion of multi-dimensional research performance, and discusses the pros and cons of 28 citation-, patent-, reputation- and altmetrics-based indicators. In addition, it presents quantitative research assessment as an evaluation science, and focuses on the role of extra-informetric factors in the development of indicators, and on the policy context of their application. It also discusses the way forward, both for users and for developers of informetric tools. Comment: The posted version is a preprint (author copy) of Part 1 (General Introduction and Synopsis) of a book entitled Applied Evaluative Bibliometrics, to be published by Springer in the summer of 2017.
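    As one concrete example of the citation-based indicators discussed in the book, the sketch below computes the widely used h-index from a list of per-publication citation counts. The function and the sample counts are illustrative, not taken from the book.

    ```python
    def h_index(citation_counts):
        """Return the largest h such that at least h publications
        have at least h citations each (the h-index)."""
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, citations in enumerate(counts, start=1):
            if citations >= rank:
                h = rank
            else:
                break
        return h

    # Hypothetical citation counts for one researcher's publications.
    print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3
    ```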

    The link between the quality of knowledge management and financial performance – The case of Croatia

    The paper investigates the link between the quality of knowledge management and the financial performance of an organization, using data from research conducted in Croatia. The theoretical part of the paper presents a literature review of research concerning the link between knowledge management and financial performance. The empirical part investigates the aforementioned link using the quality of knowledge management success factors as a measure of knowledge management, and ROS and ROA as measures of organizational performance. Based on the performed correlation tests, this research confirms that there is a link between knowledge management and financial performance. Keywords: knowledge management, knowledge management success factors, measuring knowledge management success factors, financial performance, Croatia.
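    As a rough sketch of the kind of correlation test the study relies on (with made-up numbers, not the authors' Croatian survey data), the example below computes Pearson correlations between a hypothetical knowledge-management quality score and ROA/ROS values.

    ```python
    from scipy.stats import pearsonr

    # Hypothetical survey data: knowledge-management quality scores and
    # financial performance measures (ROA, ROS) for a handful of firms.
    km_quality = [3.2, 4.1, 2.8, 4.5, 3.9, 3.0]
    roa = [0.04, 0.07, 0.02, 0.09, 0.06, 0.03]
    ros = [0.05, 0.08, 0.03, 0.10, 0.07, 0.04]

    for name, series in [("ROA", roa), ("ROS", ros)]:
        r, p = pearsonr(km_quality, series)
        print(f"KM quality vs {name}: r = {r:.2f}, p = {p:.3f}")
    ```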

    Inclusive Economy Indicators: Framework & Indicator Recommendations

    This report provides a summary of our research and recommendations for indicators to measure inclusive economies.

    Technical support for Life Sciences communities on a production grid infrastructure

    Production operation of large distributed computing infrastructures (DCI) still requires a lot of human intervention to reach an acceptable quality of service. This may be achievable for scientific communities with solid IT support, but it remains a show-stopper for others. Some application execution environments are used to hide runtime technical issues from end users, but they mostly aim at fault tolerance rather than incident resolution, and their operation still requires substantial manpower. A longer-term support activity is thus needed to ensure sustained quality of service for Virtual Organisations (VO). This paper describes how the biomed VO has addressed this challenge by setting up a technical support team. Its organisation, tooling, daily tasks, and procedures are described. Results are shown in terms of resource usage by end users, the number of reported incidents, and developed software tools. Based on our experience, we suggest ways to measure the impact of the technical support, as well as perspectives to decrease its human cost and make it more community-specific. Comment: HealthGrid'12, Amsterdam, Netherlands (2012).

    The metric tide: report of the independent review of the role of metrics in research assessment and management

    This report presents the findings and recommendations of the Independent Review of the Role of Metrics in Research Assessment and Management. The review was chaired by Professor James Wilsdon, supported by an independent and multidisciplinary group of experts in scientometrics, research funding, research policy, publishing, university management and administration. This review has gone beyond earlier studies to take a deeper look at potential uses and limitations of research metrics and indicators. It has explored the use of metrics across different disciplines, and assessed their potential contribution to the development of research excellence and impact. It has analysed their role in processes of research assessment, including the next cycle of the Research Excellence Framework (REF). It has considered the changing ways in which universities are using quantitative indicators in their management systems, and the growing power of league tables and rankings. And it has considered the negative or unintended effects of metrics on various aspects of research culture. The report starts by tracing the history of metrics in research management and assessment, in the UK and internationally. It looks at the applicability of metrics within different research cultures, compares the peer review system with metric-based alternatives, and considers what balance might be struck between the two. It charts the development of research management systems within institutions, and examines the effects of the growing use of quantitative indicators on different aspects of research culture, including performance management, equality, diversity, interdisciplinarity, and the ‘gaming’ of assessment systems. The review looks at how different funders are using quantitative indicators, and considers their potential role in research and innovation policy. Finally, it examines the role that metrics played in REF2014, and outlines scenarios for their contribution to future exercises

    Developing a goal-oriented SDI assessment approach using GIDEON - the Dutch SDI implementation strategy - as a case study

    In 2008, the Dutch government approved the GIDEON document as a policy aiming at the implementation of the National Spatial Data Infrastructure (NSDI) in the Netherlands. The execution of GIDEON should take place by pursuing seven implementation strategies which lead to the achievement of the GIDEON goals. GIDEON also expresses the need to monitor the progress of implementing its strategies and the realization of its goals. Work has started on monitoring the GIDEON implementation strategies, but there is still a lack of knowledge and methods for monitoring the realization of the GIDEON goals. The challenge is to come up with an approach to assess to what extent these goals are achieved. In response to this challenge, this paper explores the possibility of using the Multi-view SDI assessment framework (Grus et al., 2007). The paper presents and discusses a method that applies the Multi-view SDI assessment framework, its indicators and measurement methods to create a GIDEON assessment approach. The method consists of several procedural steps: formulating specific GIDEON objectives, organizing a one-day workshop with a focus group of stakeholders responsible for the creation and execution of the NSDI, and asking the workshop participants to select from a long list the indicators that best measure the achievement of each GIDEON goal. The key step of the approach is the one-day workshop. The participants represented all organizations that cooperated in and/or created GIDEON. The workshop consisted of two parts: the first explained the context of the challenge of assessing GIDEON; the second asked participants to select and reach consensus on the list of indicators that would best measure the realization of the GIDEON goals, as sketched in the tallying example below. Additionally, the participants were asked to evaluate and give feedback on the usefulness of the method for creating a GIDEON assessment approach. The results show that several indicators related to specific SDI goals were selected by a significant number of workshop participants. The selected indicators are not final, but they provide a guideline and form a basis for what has to be measured when assessing the GIDEON goals. Involving the representatives of all parties committed to GIDEON in the creation of the assessment approach will strengthen its robustness and acceptance. The feedback forms filled in by the participants show that the presented method is useful or very useful for creating a GIDEON assessment approach. Some participants also provided their own indicators, which are very specific to Dutch SDI monitoring. The method presented in this research, assuming that SDI goals are defined and the organizations participating in SDI creation are known, can be applied in any other country to develop a country-specific and practical SDI assessment approach.
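    A minimal sketch of the tallying step described above: counting how many workshop participants selected each candidate indicator and keeping those chosen by a significant share. The ballots and the 50% threshold are hypothetical, not the actual GIDEON workshop data.

    ```python
    from collections import Counter

    # Hypothetical ballots: each participant's selection of indicators
    # from the long list (names are illustrative, not the GIDEON list).
    ballots = [
        {"metadata availability", "data re-use by government", "user satisfaction"},
        {"metadata availability", "user satisfaction"},
        {"metadata availability", "data re-use by government"},
    ]

    counts = Counter(indicator for ballot in ballots for indicator in ballot)
    threshold = 0.5 * len(ballots)  # "significant number" assumed as > 50% of participants
    selected = [indicator for indicator, n in counts.items() if n > threshold]
    print(selected)
    ```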

    The quantification of ICT development

    In the information age, economy and society build on the results of the production chain (information, knowledge, acquaintance), and the driving force of these processes is the handling of structured knowledge and communication. Quantifying and measuring differences between the various parts of the information society raises problems similar to those of defining the concept itself: if we define the information society differently, we also have to measure it differently, perhaps with different variables and methods. The topic therefore covers a wide range of measurable variables: the explanatory variables range from infrastructural components, which are the easiest to measure, through knowledge components, which are harder to measure, to the barely tangible willingness to use information. This is why most studies work with groups of variables and composite indexes, as there is no single one-dimensional indicator that can be measured simply and accepted as representative by any information society. Measuring these factors generally raises distinct problems that can only be solved in different ways, so unified schemes or scenarios cannot be used for measuring a new factor. It is also important to note that it is not necessary to include all factors in everyday statistical surveys.
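    As a hedged illustration of how composite indexes of this kind are often built (not the specific method of this paper), the sketch below min-max normalizes a few hypothetical ICT variables and averages them into one score per country.

    ```python
    # Illustrative composite ICT index: min-max normalize each variable,
    # then take the unweighted mean. Names and values are hypothetical.
    data = {
        "Country A": {"broadband": 30.0, "pc_ownership": 55.0, "e_skills": 40.0},
        "Country B": {"broadband": 10.0, "pc_ownership": 35.0, "e_skills": 25.0},
        "Country C": {"broadband": 20.0, "pc_ownership": 60.0, "e_skills": 50.0},
    }
    variables = ["broadband", "pc_ownership", "e_skills"]

    def normalize(values):
        lo, hi = min(values), max(values)
        return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

    countries = list(data)
    normalized = {
        var: dict(zip(countries, normalize([data[c][var] for c in countries])))
        for var in variables
    }
    index = {
        c: sum(normalized[var][c] for var in variables) / len(variables)
        for c in countries
    }
    print(index)
    ```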
