
    Evaluating Communication Campaigns

    Summarizes presentations from a September 2007 conference on evaluating communication campaigns. Discusses the mechanism of effecting change through communication; the principles of advocacy evaluation; the design, methods, and tools; and lessons learned.

    Towards Systemic Evaluation

    Problems of conventional evaluation models can be understood as an impoverished ‘conversation’ between realities (of non-linearity, indeterminate attributes, and ever-changing context) and models of evaluating such realities. Meanwhile, ideas of systems thinking and complexity science (grouped here under the acronym STCS) struggle to gain currency in the big ‘E’ world of institutionalized evaluation. Four evaluation practitioners familiar with evaluation tools associated with STCS offer perspectives on issues regarding mainstream uptake of STCS in the big ‘E’ world. The perspectives collectively suggest three features of practicing systemic evaluation: (i) developing value in conversing between bounded values (evaluations) and unbounded reality (evaluand), with humility; (ii) developing response-ability with evaluand stakeholders based on reflexivity, with empathy; and (iii) developing adaptive rather than mere contingent use(fulness) of STCS ‘tools’ as part of evaluation praxis, with inevitable fallibility and an orientation towards bricolage (adaptive use). The features hint towards systemic evaluation as core to a reconfigured notion of developmental evaluation.

    Do Android Taint Analysis Tools Keep Their Promises?

    In recent years, researchers have developed a number of tools to conduct taint analysis of Android applications. While all the respective papers aim at providing a thorough empirical evaluation, comparability is hindered by varying or unclear evaluation targets. Sometimes, the apps used for evaluation are not precisely described. In other cases, authors use an established benchmark but cover it only partially. In yet other cases, the evaluations differ in terms of the data leaks searched for, or lack a ground truth to compare against. All those limitations make it impossible to truly compare the tools based on those published evaluations. We thus present ReproDroid, a framework allowing the accurate comparison of Android taint analysis tools. ReproDroid supports researchers in inferring the ground truth for data leaks in apps, in automatically applying tools to benchmarks, and in evaluating the obtained results. We use ReproDroid to comparatively evaluate on equal grounds the six prominent taint analysis tools Amandroid, DIALDroid, DidFail, DroidSafe, FlowDroid and IccTA. The results are largely positive although four tools violate some promises concerning features and accuracy. Finally, we contribute to the area of unbiased benchmarking with a new and improved version of the open test suite DroidBench.
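
    As a rough illustration of the kind of comparison such a benchmark enables (a minimal Python sketch with hypothetical names; the abstract does not describe ReproDroid's actual API), a tool's reported data leaks can be scored against a benchmark's ground-truth leaks by simple set comparison:

    # Hypothetical sketch, not ReproDroid's real code: score one Android taint
    # analysis tool's reported leaks against a benchmark's ground-truth leaks.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Leak:
        """A data flow from a source API call to a sink API call within one app."""
        app: str
        source: str
        sink: str

    def score_tool(reported: set, ground_truth: set) -> tuple:
        """Return (precision, recall) of a tool's reported leaks."""
        tp = len(reported & ground_truth)   # real leaks the tool found
        fp = len(reported - ground_truth)   # spurious reports
        fn = len(ground_truth - reported)   # real leaks the tool missed
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        return precision, recall

    truth = {Leak("App1.apk", "getDeviceId()", "sendTextMessage()")}
    found = {Leak("App1.apk", "getDeviceId()", "sendTextMessage()"),
             Leak("App1.apk", "getSimSerialNumber()", "Log.d()")}
    print(score_tool(found, truth))  # (0.5, 1.0)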

    The Robust Reading Competition Annotation and Evaluation Platform

    The ICDAR Robust Reading Competition (RRC), initiated in 2003 and re-established in 2011, has become a de facto evaluation standard for robust reading systems and algorithms. Concurrent with its second incarnation in 2011, a continuous effort started to develop an on-line framework to facilitate the hosting and management of competitions. This paper outlines the Robust Reading Competition Annotation and Evaluation Platform, the backbone of the competitions. The RRC Annotation and Evaluation Platform is a modular framework, fully accessible through on-line interfaces. It comprises a collection of tools and services for managing all processes involved with defining and evaluating a research task, from dataset definition to annotation management, evaluation specification and results analysis. Although the framework has been designed with robust reading research in mind, many of the provided tools are generic by design. All aspects of the RRC Annotation and Evaluation Framework are available for research use. Comment: 6 pages, accepted to DAS 201
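
    As a loose illustration of the sort of generic evaluation routine such a platform can host (a minimal Python sketch under assumed conventions; the actual per-task protocols are not given in the abstract), text-localization results are commonly matched against ground truth by intersection over union:

    # Illustrative sketch only: IoU-based matching of detected text boxes against
    # ground-truth boxes, a common core of text-localization evaluation.
    from typing import List, NamedTuple

    class Box(NamedTuple):
        x1: float
        y1: float
        x2: float
        y2: float

    def iou(a: Box, b: Box) -> float:
        """Intersection over union of two axis-aligned boxes."""
        iw = max(0.0, min(a.x2, b.x2) - max(a.x1, b.x1))
        ih = max(0.0, min(a.y2, b.y2) - max(a.y1, b.y1))
        inter = iw * ih
        union = (a.x2 - a.x1) * (a.y2 - a.y1) + (b.x2 - b.x1) * (b.y2 - b.y1) - inter
        return inter / union if union > 0 else 0.0

    def matched_count(detections: List[Box], truth: List[Box], thr: float = 0.5) -> int:
        """Greedily count ground-truth boxes hit by some detection with IoU >= thr."""
        remaining = list(truth)
        hits = 0
        for det in detections:
            for gt in remaining:
                if iou(det, gt) >= thr:
                    remaining.remove(gt)
                    hits += 1
                    break
        return hits

    Recall would then be the matched count divided by the number of ground-truth boxes, and precision the matched count divided by the number of detections.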

    Tools for Evaluating and Strengthening Collaborative Partnerships

    Topics for Today's Workshop:
    • Building capacity in community collaborations through evaluation: discussion
    • Tools for Evaluating and Strengthening Collaborative Partnerships: how the CDC uses evaluation to build capacity
      – Background
      – CDC Framework for Program Evaluation
      – Hands-on exercise
    • Review of the evaluation tools handout

    Developing and using a rubric for evaluating evidence-based medicine point-of-care tools

    Objective: The research sought to establish a rubric for evaluating evidence-based medicine (EBM) point-of-care tools in a health sciences library. Methods: The authors searched the literature for EBM tool evaluations and found that most previous reviews were designed to evaluate the ability of an EBM tool to answer a clinical question. The researchers' goal was to develop and complete rubrics for assessing these tools based on criteria for a general evaluation of tools (reviewing content, search options, quality control, and grading) and criteria for an evaluation of clinical summaries (searching tools for treatments of common diagnoses and evaluating summaries for quality control). Results: Differences between EBM tools' options, content coverage, and usability were minimal. However, the products' methods for locating and grading evidence varied widely in transparency and process. Conclusions: As EBM tools are constantly updating and evolving, evaluation of these tools needs to be conducted frequently. Standards for evaluating EBM tools need to be established, with one method being the use of objective rubrics. In addition, EBM tools need to provide more information about authorship, reviewers, methods for evidence collection, and the grading system employed.

    Monitoring and evaluation of education in Nigeria: challenges and ways forwards

    The article discusses the challenges preventing effective monitoring and evaluation of education in Nigeria. Secondary data were used to support the points raised in the article; they were sourced from print materials and online publications by recognized institutions and individual authors. Many challenges militate against effective monitoring and evaluation of educational programmes in Nigeria, including inadequate funding of monitoring and evaluation programmes, a shortage of professional monitoring and evaluation officers, poor capacity development of those officers, corruption, insecurity, inadequate monitoring and evaluation tools, political instability, and a lack of political support. To address these challenges, the article recommends that the government provide adequate funding for monitoring and evaluation programmes, employ more professional evaluators and monitors, run constant capacity development programmes for monitoring and evaluation officers, fight institutional corruption, provide security for monitoring and evaluation officers, supply adequate monitoring and evaluation tools, and ensure political stability, and that political officeholders support monitoring and evaluation activities in the country.

    Critical considerations on defining and measuring performance in public organizations

    Performance evaluation plays a central role in improving public service quality and increasing efficiency and accountability in the public sector. New Public Management recommends performance evaluation as a tool for rationalizing public budgeting, promoting better reporting systems, and developing internal diagnosis systems. This paper aims to analyze the characteristics of performance evaluation and to highlight its influence on public organizations. The study is based on a review and analysis of academic research, government documents, and personal perspectives. The paper argues that managerial practices and tools for defining and evaluating performance can be used to cultivate an “achievement culture” in public sector organizations. Keywords: public management, performance, evaluation, indicators.