
    Manage Your 'Blind Flight' - The Optimal Timing for IT Project Re-Evaluation

    As the value of an IT project can change over time, management is in blind flight regarding the state of the project until it has been re-evaluated. Since each evaluation incurs costs, continuous re-evaluation is economically unreasonable. Nevertheless, the blind flight should not last too long, because the project value can deviate considerably from its initial estimate and high losses can occur. To trade off the costs of re-evaluation against the potential loss of project value, this paper elaborates an economic model that determines the optimal time until re-evaluation, taking the risky cash flows of a project into account. Based on a simulation, we find that it makes good economic sense to optimize the re-evaluation interval: companies can thereby avoid the financial loss caused by evaluating too early as well as the hazard to project value caused by evaluating too late.
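
    As a rough illustration of the trade-off described above (a minimal Monte-Carlo sketch, not the paper's model): simulate a project whose value drifts randomly between evaluations, charge a fixed fee per re-evaluation, and let the "blind-flight" loss grow with the undetected deviation from the last estimate. Sweeping the interval length then exposes the U-shaped total cost that makes an optimal interval exist. All parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters -- purely for illustration, not taken from the paper.
HORIZON = 36          # project length in months
MU, SIGMA = 0.0, 2.0  # monthly drift and volatility of the project value
EVAL_COST = 5.0       # fixed cost of one re-evaluation
LOSS_WEIGHT = 0.8     # loss per month per unit of undetected value deviation
N_PATHS = 20_000      # Monte-Carlo sample size


def expected_total_cost(interval: int) -> float:
    """Expected cost of re-evaluating every `interval` months over the horizon."""
    n_blocks = HORIZON // interval
    # Within one blind-flight block, the value drifts away from the last estimate.
    shocks = rng.normal(MU, SIGMA, size=(N_PATHS, interval))
    deviation = np.cumsum(shocks, axis=1)
    # Loss accrues each month in proportion to the undetected absolute deviation.
    block_loss = LOSS_WEIGHT * np.abs(deviation).sum(axis=1).mean()
    return n_blocks * (EVAL_COST + block_loss)


costs = {k: expected_total_cost(k) for k in range(1, 13)}
best = min(costs, key=costs.get)
print(f"optimal re-evaluation interval: {best} months "
      f"(expected total cost {costs[best]:.1f})")
```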

    Are We Happy Yet?: Re-evaluating the Evaluation of Indigenous Community Development

    As I was working on research into Indigenous community development, I wanted to get an overview of how things are going - are projects improving well-being? What is working and what isn't? I found I couldn't get a clear multi-dimensional picture, so I had to wonder about the evaluation criteria and what the alternatives were. How can we, as academics, researchers and allies, make sense of the available information in such a way that our work is meaningful to the Indigenous communities we work with?

    Locking Nut with Stress-Distributing Insert

    Reusable holders have been devised for evaluating high-temperature, plasma-resistant re-entry materials, especially fabrics. Typical material samples tested support thermal-protection-system damage repair that requires evaluation prior to re-entry into the terrestrial atmosphere. These tests evaluate the ability of each material to withstand the most severe predicted re-entry conditions.

    Evaluation - the educational context

    Evaluation comes in many shapes and sizes. It can be as simple and as grounded in day-to-day work as a clinical teacher reflecting on a lost teaching opportunity and wondering how to do it better next time, or as complex, top-down and politically charged as a major government-led evaluation of the use of teaching funds with the subtext of re-allocating them. Despite these multiple spectra of scale, perceived ownership, and financial and political implications, the underlying principles of evaluation are remarkably consistent. To evaluate well, it needs to be clear who is evaluating what and why. From this will come notions of how it needs to be done to ensure the evaluation is meaningful and useful. This paper seeks to illustrate what evaluation is, why it matters, where to start if you want to do it, and how to deal with evaluation that is external and imposed.

    Re-evaluating Evaluation

    Progress in machine learning is measured by careful evaluation on problems of outstanding common interest. However, the proliferation of benchmark suites and environments, adversarial attacks, and other complications has diluted the basic evaluation model by overwhelming researchers with choices. Deliberate or accidental cherry-picking is increasingly likely, and designing well-balanced evaluation suites requires increasing effort. In this paper we take a step back and propose Nash averaging. The approach builds on a detailed analysis of the algebraic structure of evaluation in two basic scenarios: agent-vs-agent and agent-vs-task. The key strength of Nash averaging is that it automatically adapts to redundancies in evaluation data, so that results are not biased by the incorporation of easy tasks or weak agents. Nash averaging thus encourages maximally inclusive evaluation -- since there is no harm (computational cost aside) from including all available tasks and agents. Comment: NIPS 2018, final version.
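
    A minimal sketch of the agent-vs-agent case (an assumed reconstruction, not the authors' code): given an antisymmetric payoff matrix A, a Nash equilibrium of the induced symmetric zero-sum meta-game can be found by linear programming, and each agent's Nash-averaged skill is its expected payoff against that equilibrium mixture. The paper specifically uses the maximum-entropy Nash equilibrium, which a plain LP does not guarantee; the matrix below is made up.

```python
import numpy as np
from scipy.optimize import linprog

# Made-up antisymmetric evaluation matrix: A[i, j] > 0 means agent i beats agent j.
A = np.array([
    [ 0.0,  1.0,  2.0],
    [-1.0,  0.0,  1.5],
    [-2.0, -1.5,  0.0],
])
n = A.shape[0]

# Solve the zero-sum game  max_p min_j (p^T A)_j  as an LP over variables [p, v]:
# maximize v  subject to  (A^T p)_j >= v for all j,  sum(p) = 1,  p >= 0.
c = np.concatenate([np.zeros(n), [-1.0]])                  # linprog minimizes, so minimize -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])                  # v - (A^T p)_j <= 0 for every column j
b_ub = np.zeros(n)
A_eq = np.concatenate([np.ones(n), [0.0]]).reshape(1, -1)  # probabilities sum to one
b_eq = np.array([1.0])
bounds = [(0, None)] * n + [(None, None)]                  # p >= 0, v free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p = res.x[:n]

# Nash-averaged skill: expected payoff of each agent against the equilibrium mixture.
# Agents in the equilibrium's support score 0; dominated agents score negatively.
skill = A @ p
print("Nash mixture:       ", np.round(p, 3))
print("Nash-averaged skill:", np.round(skill, 3))
```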

    The Robust Reading Competition Annotation and Evaluation Platform

    The ICDAR Robust Reading Competition (RRC), initiated in 2003 and re-established in 2011, has become a de facto evaluation standard for robust reading systems and algorithms. Concurrent with its second incarnation in 2011, a continuous effort was started to develop an on-line framework to facilitate the hosting and management of competitions. This paper outlines the Robust Reading Competition Annotation and Evaluation Platform, the backbone of the competitions. The RRC Annotation and Evaluation Platform is a modular framework, fully accessible through on-line interfaces. It comprises a collection of tools and services for managing all processes involved in defining and evaluating a research task, from dataset definition to annotation management, evaluation specification and results analysis. Although the framework has been designed with robust reading research in mind, many of the provided tools are generic by design. All aspects of the RRC Annotation and Evaluation Framework are available for research use. Comment: 6 pages, accepted to DAS 2018.
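
    The task lifecycle the abstract describes (dataset definition, annotation management, evaluation specification, results analysis) can be sketched with a small, purely hypothetical data model; none of the names below come from the actual RRC platform API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical illustration of the described workflow, not the RRC platform's code.

@dataclass
class Sample:
    image_id: str
    ground_truth: List[str] = field(default_factory=list)   # annotations attached later

@dataclass
class EvaluationSpec:
    name: str
    metric: Callable[[List[str], List[str]], float]         # (ground truth, prediction) -> score

@dataclass
class ResearchTask:
    name: str
    dataset: Dict[str, Sample] = field(default_factory=dict)   # dataset definition
    specs: List[EvaluationSpec] = field(default_factory=list)  # evaluation specification

    def annotate(self, image_id: str, labels: List[str]) -> None:
        """Annotation management: attach ground truth to a sample."""
        self.dataset[image_id].ground_truth = labels

    def evaluate(self, predictions: Dict[str, List[str]]) -> Dict[str, float]:
        """Results analysis: average each metric over the dataset."""
        results = {}
        for spec in self.specs:
            scores = [
                spec.metric(sample.ground_truth, predictions.get(sid, []))
                for sid, sample in self.dataset.items()
            ]
            results[spec.name] = sum(scores) / len(scores)
        return results


# Usage sketch: exact-match word recall on a single toy sample.
recall = lambda gt, pred: len(set(gt) & set(pred)) / max(len(gt), 1)
task = ResearchTask("toy-robust-reading",
                    {"img1": Sample("img1")},
                    [EvaluationSpec("word_recall", recall)])
task.annotate("img1", ["STOP", "EXIT"])
print(task.evaluate({"img1": ["STOP"]}))   # {'word_recall': 0.5}
```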

    A classification of RE papers: (A)re we researching or designing RE techniques?

    Get PDF
    Discussion of a paper in RE program committees is often complicated by a lack of agreement about the evaluation criteria to be applied to the paper. For some years now, successive program chairs have attempted to increase clarity by including a paper classification in their CFP and making the evaluation criteria per paper class explicit. This short note presents a paper classification based on this experience. It can be used as a guide by program chairs. It can also be used by authors as well as reviewers to understand what kind of paper they are writing or reviewing, and what criteria should be applied in evaluating the paper.