
    Benchmarks for Parity Games (extended version)

    We propose a benchmark suite for parity games that includes all benchmarks that have been used in the literature, and make it available online. We give an overview of the parity games, including a description of how they have been generated. We also describe structural properties of parity games, and using these properties we show that our benchmarks are representative. With this work we provide a starting point for further experimentation with parity games.
    Comment: The corresponding tool and benchmarks are available from https://github.com/jkeiren/paritygame-generator. This is an extended version of the paper that has been accepted for FSEN 201

    Easy Innovation and the Iron Cage: Best Practice, Benchmarking, Ranking, and the Management of Organizational Creativity

    The use of what came to be known as best practices, benchmarking, and ranking, which took corporate America by storm in the 1980s as a method for managing innovation, has seeped into government and nonprofit organizations in the intervening years. In fact, as H. George Frederickson demonstrates in this Kettering Foundation occasional paper, these practices have proven to be counterproductive in both the business and public sectors. Frederickson suggests, instead, a more flexible, less directive model he calls "sustained innovation." He offers abundant evidence that this model is more effective in producing organizational effectiveness.

    Principles in Patterns (PiP) : Evaluation of Impact on Business Processes

    The innovation and development work conducted under the auspices of the Principles in Patterns (PiP) project is intended to explore and develop new technology-supported approaches to curriculum design, approval and review. An integral component of this innovation is the use of business process analysis and process change techniques - and their instantiation within the C-CAP system (Class and Course Approval Pilot) - in order to improve the efficacy of curriculum approval processes. Improvements to approval process responsiveness and overall process efficacy can assist institutions in better reviewing or updating curriculum designs to enhance pedagogy. Such improvements also assume a greater significance in a globalised HE environment, in which institutions must adapt or create curricula quickly in order to reflect rapidly changing academic contexts and to respond better to the demands of employment marketplaces and the expectations of professional bodies. This is increasingly an issue for disciplines within the sciences and engineering, where new skills or knowledge need to be rapidly embedded in curricula in response to emerging technological or environmental developments. All of the aforementioned must also be achieved while simultaneously maintaining high standards of academic quality, thus adding a further layer of complexity to the way in which HE institutions engage in "responsive curriculum design" and approval. This strand of the PiP evaluation therefore entails an analysis of the business process techniques used by PiP, their efficacy, and the impact of process changes on the curriculum approval process, as instantiated by C-CAP. More generally, the evaluation is a contribution towards a wider understanding of technology-supported process improvement initiatives within curriculum approval and their potential to render such processes more transparent, efficient and effective.
    Partly owing to limitations in the data required to facilitate comparative analyses, this evaluation adopts a mixed approach, making use of qualitative and quantitative methods as well as theoretical techniques. Combined, these approaches enable a comparative evaluation of the curriculum approval process under the "new state" (i.e. using C-CAP) and under the "previous state". This report summarises the methodology used to enable comparative evaluation and presents an analysis and discussion of the results. As the report explains, the impact of C-CAP and its ability to support improvements in process and document management has resulted in the resolution of numerous process failings. C-CAP has also demonstrated potential for improvements in approval process cycle time, process reliability, process visibility, process automation, process parallelism and a reduction in transition delays within the approval process, thus contributing to considerable process efficiencies; although it is acknowledged that enhancements and redesign may be required to take full advantage of C-CAP's potential. Other aspects pertaining to C-CAP's impact on process change, improvements to document management and the curation of curriculum designs are also discussed.

    Adaptation of WASH Services Delivery to Climate Change and Other Sources of Risk and Uncertainty

    This report urges WASH sector practitioners to take more seriously the threat of climate change and the consequences it could have on their work. By considering climate change within a risk and uncertainty framework, the field can use the multitude of approaches laid out here to adequately protect itself against a range of direct and indirect impacts. Eleven methods and tools for this specific type of risk management are described, including practical advice on how to implement them successfully.

    A systems approach to evaluate One Health initiatives

    Challenges calling for integrated approaches to health, such as the One Health (OH) approach, typically arise from the intertwined spheres of humans, animals, and the ecosystems constituting their environment. Initiatives addressing such wicked problems commonly consist of complex structures and dynamics. As a result of the EU COST Action (TD 1404) “Network for Evaluation of One Health” (NEOH), we propose an evaluation framework anchored in systems theory to address the intrinsic complexity of OH initiatives and regard them as subsystems of the context within which they operate. Typically, they intend to influence a system with a view to improving human, animal, and environmental health. The NEOH evaluation framework consists of four overarching elements, namely: (1) the definition of the initiative and its context, (2) the description of the theory of change with an assessment of expected and unexpected outcomes, (3) the process evaluation of operational and supporting infrastructures (the “OH-ness”), and (4) an assessment of the association(s) between the process evaluation and the outcomes produced. It relies on a mixed methods approach, combining a descriptive and qualitative assessment with a semi-quantitative scoring for the evaluation of the degree and structural balance of “OH-ness” (summarised in an OH-index and OH-ratio, respectively) and conventional metrics for different outcomes in a multi-criteria decision analysis. Here, we focus on the methodology for Elements (1) and (3), including ready-to-use Microsoft Excel spreadsheets for the assessment of the “OH-ness”. We also provide an overview of Element (2), and refer to the NEOH handbook for further details, also regarding Element (4) (http://neoh.onehealthglobal.net).
    The presented approach helps researchers, practitioners, and evaluators to conceptualise and conduct evaluations of integrated approaches to health, and facilitates comparison and learning across different OH activities, thereby supporting decisions on resource allocation. The application of the framework is described in eight case studies in the same Frontiers research topic and provides the first data on the OH-index and OH-ratio, which is an important step towards their validation, towards the creation of a dataset for future benchmarking, and towards demonstrating under which circumstances OH initiatives provide added value compared to disciplinary or conventional health initiatives.

    Principles in Patterns (PiP) : Piloting of C-CAP - Evaluation of Impact and Implications for System and Process Development

    The Principles in Patterns (PiP) project is leading a programme of innovation and development work intended to explore and develop new technology-supported approaches to curriculum design, approval and review. It is anticipated that such technology-supported approaches can improve the efficacy of curriculum approval processes at higher education (HE) institutions, thereby improving curriculum responsiveness and enabling improved and rapid review mechanisms which may produce enhancements to pedagogy. Curriculum design in HE is a key "teachable moment" and often remains one of the few occasions when academics will plan and structure their intended teaching. Technology-supported curriculum design therefore presents an opportunity for improving academic quality, pedagogy and learning impact. Approaches that are innovative in their use of technology offer the promise of an interactive curriculum design process within which the designer is offered system assistance to better adhere to pedagogical best practice, is exposed to novel and high impact learning designs from which to draw inspiration, and benefits from system support to detect common design issues, many of which can delay curriculum approval and distract academic quality teams from monitoring substantive academic issues. This strand of the PiP evaluation (WP7:38) attempts to understand the impact of the PiP Class and Course Approval Pilot (C-CAP) system within specific stakeholder groups and seeks to understand the extent to which C-CAP is considered to support process improvements. As process improvements and changes were studied in a largely quantitative capacity during a previous but related evaluative strand, this strand includes the gathering of additional qualitative data to better understand and verify the business process improvements and change effected by C-CAP. 
    This report therefore summarises the outcome of C-CAP piloting within a University faculty, and presents the methodology used for evaluation together with the associated analysis and discussion. More generally, this report constitutes an additional evaluative contribution towards a wider understanding of technology-supported approaches to curriculum design and approval in HE institutions and their potential for improving process transparency, efficiency and effectiveness.

    Development of Evaluation Systems – Evaluation Capacity Building in the Framework of the New Challenges of EU Structural Policy

    The paper presents the changing role of evaluation for public policies and the increasing importance of comprehensive evaluation systems for more effective and learning public institutions. In this context, evaluation systems are becoming an essential element of governance and democratic processes. Within the framework of the current EU Structural Policy and its new challenges (enlargement, new priorities, a new regional policy paradigm), the creation and development of evaluation capacities and comprehensive evaluation systems is increasingly becoming an instrument for organizational learning and policy improvement. The development of evaluation systems is therefore fundamental not only for the new member states, but also for countries with a poor evaluation culture or a fragmented evaluation system. However, evaluation still lacks the resources and infrastructure necessary to become an accepted and integrated governance tool, despite the growth of the European Evaluation Community and the take-off of several National Evaluation Societies in recent years. Unexploited potential exists on the supply side (evaluators, skills, training, dialogue) as well as on the demand side (commissioning, data monitoring, use of evaluations) of the evaluation system. Evaluation Capacity Building (ECB), as an approach for the development of evaluation systems, is the integrated and planned development of skills, resources and infrastructures, and the intentional shift towards an evaluation culture in an organization, department or government. Nonetheless, getting to grips with the institutionalisation of the discipline of evaluation and the building of an ongoing evaluation capacity turns out to be extremely difficult. What are concrete measures of ECB? Who is responsible for these measures and their implementation? What are the target groups? When has ECB been successful? These are only some of the questions that arise.

    Experimental Aspects of Synthesis

    We discuss the problem of experimentally evaluating linear-time temporal logic (LTL) synthesis tools for reactive systems. We first survey previous such work for the currently publicly available synthesis tools, and then draw conclusions by deriving useful schemes for future evaluations. In particular, we explain why previous tools have incompatible scopes and semantics, and provide a framework that reduces the impact of this problem for future experimental comparisons of such tools. Furthermore, we discuss the difficulties that the complex workflows beginning to appear in modern synthesis tools pose for experimental evaluations, and address the question of how convincing evaluations can still be performed in such a setting.
    Comment: In Proceedings iWIGP 2011, arXiv:1102.374