139,993 research outputs found

    Risk-Based Performance Metrics for Critical Infrastructure Protection: A Framework for Research and Analysis

    Measuring things that do not occur, such as “deterred” or “prevented” terrorist attacks, can be difficult. Efforts to establish meaningful risk-based performance metrics and performance evaluation frameworks based on such metrics, for government agencies with counterterrorism missions, are arguably in a nascent state. However, by studying program theory, logic models, and performance evaluation theory, as well as studying how risk, deterrence, and resilience concepts may be leveraged to support antiterrorism efforts, one may propose a framework for a logic model or other performance evaluation approach. Such a framework may integrate these concepts to help proxy performance measurement for agencies with prevention and/or deterrence missions. This effort would not be without challenges

    VALUING COMMUNITY DEVELOPMENT THROUGH THE SOCIAL INCLUSION AND COMMUNITY ACTIVATION PROGRAMME (SICAP) 2015–2017: TOWARDS A FRAMEWORK FOR EVALUATION. ESRI RESEARCH SERIES NUMBER 77, FEBRUARY 2019

    The Social Inclusion and Community Activation Programme (SICAP) represents a major component of Ireland’s community development strategy, led by the Department of Rural and Community Development (DRCD). The vision of SICAP is to improve the opportunities and life chances of those who are marginalised in society, experiencing unemployment or living in poverty, through community development approaches, targeted supports and interagency collaboration, where the values of equality and inclusion are promoted and human rights are respected. In 2016, total expenditure on SICAP amounted to approximately €36 million (Pobal, 2016a). Using a mixed methodology, this report examines the extent to which community development programmes can or should be subject to evaluation, with a particular focus on SICAP. In doing so, the report draws on a rich body of information – including desk-based research; consultation workshops with members of local community groups (LCGs), local community workers (LCWs) and other key policy stakeholders; and an analysis of administrative data held by Pobal – on the characteristics of LCGs that received direct support under SICAP. The findings in this report relate to the delivery of the SICAP 2015–2017 programme, which ended in December 2017. The aim of the study is to inform policy by shedding light on a number of issues, including the following. Can community development be evaluated? What are the current metrics and methodologies suggested in the literature for evaluating community development interventions? What possible metrics can be used to evaluate community development interventions, and how do these relate to the SICAP programme? How can a framework be developed that could potentially be used by SICAP for monitoring and evaluation of its community development programme?

    Global Human Resource Metrics

    [Excerpt] What is the logic underlying global human resources (HR) measurement in your organization? In your organization, do you measure the contribution of global HR programs to organizational performance? Do you know the most competitive employee mix, e.g., the proportion of expatriates vs. local employees, for your business units? (How) do you measure the cost and value of the different types of international work performed by your employees? In the globalized economy, organizations increasingly derive value from human resources, or “talent” as we shall also use the term here (Boudreau, Ramstad & Dowling, in press). The strategic importance of the workforce makes decisions about talent critical to organizational success. Informed decisions about talent require a strategic approach to measurement. However, measures alone are not sufficient, for measures without logic can create information overload, and decision quality rests in substantial part on the quality of measurements. An important element of enhanced global competitiveness is a measurement model for talent that articulates the connections between people and success, as well as the context and boundary conditions that affect those connections. This chapter will propose a framework within which existing and potential global HR measures can be organized and understood. The framework reflects the premise that measures exist to support and enhance decisions, and that strategic decisions require a logical connection between decisions about resources, such as talent, and the key organizational outcomes affected by those decisions. Such a framework may provide a useful mental model for both designers and users of HR measures.

    A Tripartite Framework for Leadership Evaluation

    The Tripartite Framework for Leadership Evaluation provides a comprehensive examination of the leadership evaluation landscape and makes key recommendations about how the field of leadership evaluation should proceed. The chief concern addressed by this working paper is the use of student outcome data as a measurement of leadership effectiveness. A second concern in our work with urban leaders is the absence, or merely surface-level treatment, of race and equity in nearly all evaluation instruments or processes. Finally, we call for an overhaul of the conventional cycle of inquiry, which is based largely on needs analysis, leader deficits and incomplete use of evidence to support recurring short cycles within the larger yearly cycle of inquiry.

    Supporting a Thriving Bay Area Performing Arts Ecosystem: A Mid-Point Assessment of the Hewlett Foundation's Performing Arts Program

    As one of the largest institutional funders of performing arts in the San Francisco Bay Area, the Hewlett Foundation's Performing Arts Program (Program) plays an important role in the arts ecosystem across California. The Performing Arts Program works to "ensure continuity and innovation in the performing arts through the creation, performance, and appreciation of exceptional works that enrich the lives of individuals and benefit communities throughout the Bay Area." Monitoring and evaluation are integral to the Strategic Framework. It outlines metrics, short-term (2013) and long-term (2017) growth targets, and activities and strategies for each component of the Program, taking into consideration economic conditions, the arts landscape in California and current demographic trends in the Bay Area. Program staff built in evaluation activities that would enable the Program to determine whether its strategies are effective, to measure how much progress has been made toward its goals, and to identify opportunities for learning and improving outcomes. In 2015, the Foundation partnered with Informing Change and Olive Grove to conduct a mid-point assessment of the Program's six-year Strategic Framework. The evaluation centers on four core questions, each of which has additional sub-questions (see Appendix A for a full list of the questions and sub-questions). In partnership with Program staff, Informing Change and Olive Grove developed a plan to assess these questions using a mixed-methods approach. A primary data source for this assessment is interviews that solicit insight and feedback from six types of constituents: grantees from all three of the Program's component areas, peer arts funders, community-based arts leaders, and artists and cultural entrepreneurs (Appendix A includes a list of all interview informants and Appendix B provides interview protocols). The interview informant sample includes individuals and organizations connected to the Program as grantees or partners, as well as other key leaders in the arts ecosystem that do not receive funding. This assessment also draws heavily upon quantitative analysis of data about the portfolio funding (i.e., GIFTS, the Foundation's grant tracking software), grantees' work (i.e., Cultural Data Project (CDP), Audience Research Collaborative (ARC) and Grantee Perception Report (GPR)), and arts education (i.e., California Department of Education (CDE)). A review of existing literature and research studies provided data on changes in different fields and contextual information (Appendix C provides references for all works cited).

    Of models and metrics: the UK debate on assessing humanities research

    Professor Michael Worton (UCL) is the Chair of an expert group set up in July 2006 by the Arts & Humanities Research Council to examine alternative ways to assess research. The group has developed a complex metrics-based system that uses quantitative information about a university department’s research activity and research outcomes to determine how billions of pounds in research funding will be distributed to UK universities in the future. This paper, on the background to the group's work, was presented to a conference on Peer Review hosted by the European Science Foundation (ESF), the European Heads of Research Councils (EuroHORCs) and the Czech Science Foundation (Grantová agentura České republiky, GA ČR), held in Prague on 12–13 October 2006. See also UCL News, 9 September 2006, "Arts metrics plan revealed": http://www.ucl.ac.uk/news/news-articles/inthenews/itn06091
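    A generic, hypothetical sketch of what such a metrics-based allocation could look like is given below; the departments, indicators and weights are invented for illustration and do not reflect the expert group's actual model. The idea is simply that each department's share of a funding pot is driven by a weighted combination of quantitative indicators.

```python
# Illustrative sketch of a metrics-based funding allocation (invented data and
# weights; not the AHRC expert group's actual model).

TOTAL_POT = 1_000_000_000  # hypothetical funding pot, in pounds

# Hypothetical quantitative indicators per department.
departments = {
    "History":    {"research_income": 2.0, "phd_completions": 15, "outputs": 120},
    "Philosophy": {"research_income": 1.2, "phd_completions": 10, "outputs": 80},
    "Music":      {"research_income": 0.8, "phd_completions": 6,  "outputs": 60},
}

# Relative importance of each indicator (weights sum to 1).
weights = {"research_income": 0.4, "phd_completions": 0.3, "outputs": 0.3}

# Sector totals, used to express each indicator as a share so units are comparable.
totals = {k: sum(d[k] for d in departments.values()) for k in weights}

def weighted_score(indicators):
    """Combine a department's indicator shares into a single score."""
    return sum(weights[k] * indicators[k] / totals[k] for k in weights)

scores = {name: weighted_score(ind) for name, ind in departments.items()}
grand_total = sum(scores.values())

for name, score in scores.items():
    allocation = TOTAL_POT * score / grand_total
    print(f"{name}: £{allocation:,.0f}")
```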

    Descriptive analysis of trends using metrics


    Report on the Third Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE3)

    This report records and discusses the Third Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE3). The report includes a description of the keynote presentation of the workshop, which served as an overview of sustainable scientific software. It also summarizes a set of lightning talks in which speakers highlighted to-the-point lessons and challenges pertaining to sustaining scientific software. The final and main contribution of the report is a summary of the discussions, future steps, and future organization for a set of self-organized working groups on topics including developing pathways to funding scientific software; constructing useful common metrics for crediting software stakeholders; identifying principles for sustainable software engineering design; reaching out to research software organizations around the world; and building communities for software sustainability. For each group, we include a point of contact and a landing page that can be used by those who want to join that group's future activities. The main challenge left by the workshop is whether the groups will carry out the activities they have scheduled, and how the WSSSPE community can encourage this to happen.

    The metric tide: report of the independent review of the role of metrics in research assessment and management

    This report presents the findings and recommendations of the Independent Review of the Role of Metrics in Research Assessment and Management. The review was chaired by Professor James Wilsdon, supported by an independent and multidisciplinary group of experts in scientometrics, research funding, research policy, publishing, university management and administration. This review has gone beyond earlier studies to take a deeper look at potential uses and limitations of research metrics and indicators. It has explored the use of metrics across different disciplines, and assessed their potential contribution to the development of research excellence and impact. It has analysed their role in processes of research assessment, including the next cycle of the Research Excellence Framework (REF). It has considered the changing ways in which universities are using quantitative indicators in their management systems, and the growing power of league tables and rankings. And it has considered the negative or unintended effects of metrics on various aspects of research culture. The report starts by tracing the history of metrics in research management and assessment, in the UK and internationally. It looks at the applicability of metrics within different research cultures, compares the peer review system with metric-based alternatives, and considers what balance might be struck between the two. It charts the development of research management systems within institutions, and examines the effects of the growing use of quantitative indicators on different aspects of research culture, including performance management, equality, diversity, interdisciplinarity, and the ‘gaming’ of assessment systems. The review looks at how different funders are using quantitative indicators, and considers their potential role in research and innovation policy. Finally, it examines the role that metrics played in REF2014, and outlines scenarios for their contribution to future exercises
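    As a concrete illustration of the kind of quantitative indicator the review examines (the report itself does not prescribe any particular calculation), the sketch below computes the h-index, a widely used citation-based metric; the citation counts are invented for illustration.

```python
# Hypothetical illustration of one common quantitative research indicator: the h-index.
# A publication record has h-index h if h of its papers have at least h citations each.

def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Invented citation counts for a small publication record.
print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3 (three papers with at least 3 citations)
```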

    Decision support system for the long-term city metabolism planning problem

    A Decision Support System (DSS) tool for the assessment of intervention strategies (Alternatives) in an Urban Water System (UWS) with an integral simulation model called “WaterMet²” is presented. The DSS permits the user to identify one or more optimal Alternatives over a fixed long-term planning horizon using performance metrics mapped to the TRUST sustainability criteria (Alegre et al., 2012). The DSS exposes lists of in-built intervention options and system performance metrics for the user to compose new Alternatives. The quantitative metrics are calculated by the WaterMet² model and further qualitative or user-defined metrics may be specified by the user or by external tools feeding into the DSS. A Multi-Criteria Decision Analysis (MCDA) approach is employed within the DSS to compare the defined Alternatives and to rank them with respect to a pre-specified weighting scheme for different Scenarios. Two rich, interactive Graphical User Interfaces, one desktop and one web-based, are employed to assist with guiding the end user through the stages of defining the problem, evaluating and ranking Alternatives. This mechanism provides a useful tool for decision makers to compare different strategies for the planning of UWS with respect to multiple Scenarios. The efficacy of the DSS is demonstrated on a northern European case study inspired by a real-life urban water system for a mixture of quantitative and qualitative criteria. The results demonstrate how the DSS, integrated with an UWS modelling approach, can be used to assist planners in meeting their long-term, strategic level sustainability objectives
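    One common way to implement the MCDA ranking step described above is a weighted-sum score; the sketch below is a minimal, hypothetical illustration of that approach. The Alternatives, metric values and weights are invented for illustration and are not taken from the WaterMet² case study.

```python
# Minimal weighted-sum MCDA sketch (hypothetical Alternatives, metrics and weights).
# Each Alternative is scored on several performance metrics; values are normalised
# to [0, 1], combined using a Scenario-specific weighting scheme, and then ranked.

alternatives = {
    "A1_do_nothing":      {"reliability": 0.60, "cost_efficiency": 0.90, "resilience": 0.40},
    "A2_rainwater_reuse": {"reliability": 0.75, "cost_efficiency": 0.70, "resilience": 0.65},
    "A3_new_reservoir":   {"reliability": 0.95, "cost_efficiency": 0.40, "resilience": 0.80},
}

# Pre-specified weighting scheme for one Scenario (weights sum to 1).
weights = {"reliability": 0.5, "cost_efficiency": 0.2, "resilience": 0.3}

def normalise(values):
    """Min-max normalisation of a list of metric values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 1.0 for v in values]

metrics = list(weights)
names = list(alternatives)
norm = {m: normalise([alternatives[n][m] for n in names]) for m in metrics}

scores = {
    name: sum(weights[m] * norm[m][i] for m in metrics)
    for i, name in enumerate(names)
}

# Rank Alternatives from best to worst under this weighting scheme.
for rank, (name, score) in enumerate(sorted(scores.items(), key=lambda kv: -kv[1]), start=1):
    print(f"{rank}. {name}: {score:.3f}")
```

    Ranking against a different Scenario would simply swap in a different weight vector, which is how a DSS of this kind can compare the same Alternatives under multiple Scenarios.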