
    Reviewing and extending the five-user assumption: A grounded procedure for interaction evaluation

    " © ACM, 2013. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Computer-Human Interaction (TOCHI), {VOL 20, ISS 5, (November 2013)} http://doi.acm.org/10.1145/2506210 "The debate concerning how many participants represents a sufficient number for interaction testing is well-established and long-running, with prominent contributions arguing that five users provide a good benchmark when seeking to discover interaction problems. We argue that adoption of five users in this context is often done with little understanding of the basis for, or implications of, the decision. We present an analysis of relevant research to clarify the meaning of the five-user assumption and to examine the way in which the original research that suggested it has been applied. This includes its blind adoption and application in some studies, and complaints about its inadequacies in others. We argue that the five-user assumption is often misunderstood, not only in the field of Human-Computer Interaction, but also in fields such as medical device design, or in business and information applications. The analysis that we present allows us to define a systematic approach for monitoring the sample discovery likelihood, in formative and summative evaluations, and for gathering information in order to make critical decisions during the interaction testing, while respecting the aim of the evaluation and allotted budget. This approach – which we call the ‘Grounded Procedure’ – is introduced and its value argued.The MATCH programme (EPSRC Grants: EP/F063822/1 EP/G012393/1

    Making evaluations matter: a practical guide for evaluators

    This guide is primarily for evaluators working in the international development sector. It is also useful for commissioners of evaluations, evaluation managers and M&E officers. The guide explains how to make evaluations more useful. It helps readers better understand conceptual issues and appreciate how evaluations can contribute to changing mindsets and empowering stakeholders. On a practical level, the guide presents core guiding principles and pointers for designing and facilitating evaluations that matter. Furthermore, it shows how to get primary intended users and other key stakeholders to contribute effectively to the evaluation process.

    Principles in Patterns (PiP) : Heuristic Evaluation of Course and Class Approval Online Pilot (C-CAP)

    The PiP Evaluation Plan documents four distinct evaluative strands, the first of which entails an evaluation of the PiP system pilot (WP7:37). Phase 1 of this evaluative strand focuses on the heuristic evaluation of the PiP Course and Class Approval Online Pilot system (C-CAP). Heuristic evaluation is an established usability inspection and testing technique, most commonly deployed in Human-Computer Interaction (HCI) research, e.g. to test user interface designs or technology systems. The success of heuristic evaluation in detecting 'major' and 'minor' usability problems is well documented, but its principal limitation is its inability to capture data on all possible usability problems. For this reason heuristic evaluation is often used as a precursor to user testing, e.g. so that user testing focuses on deeper system issues rather than on those that can easily be debugged. Heuristic evaluation nevertheless remains an important usability inspection technique, and research continues to demonstrate its success in detecting usability problems which would otherwise evade detection in user testing sessions. For this reason experts maintain that heuristic evaluation should be used to complement user testing. This is reflected in the PiP Evaluation Plan, which proposes protocol analysis, stimulated recall and pre- and post-test questionnaire instruments to comprise user testing (see WP7:37 phases 2, 3 and 4 of the PiP Evaluation Plan). This brief report summarises the methodology deployed, presents the results of the heuristic evaluation and proposes solutions or recommendations to address the heuristic violations found to exist in the C-CAP system. It is anticipated that some solutions will be implemented within the lifetime of the project, consistent with the incremental systems design methodology that PiP has adopted.
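    As a concrete illustration of how heuristic-evaluation findings are typically consolidated (not a reproduction of the C-CAP analysis), the hypothetical sketch below merges violations recorded independently by several evaluators and ranks them by mean severity, separating the 'major' from the 'minor' problems mentioned above. The evaluator IDs, findings and severity ratings are invented.

```python
# Hypothetical aggregation of heuristic-evaluation findings: several
# evaluators independently record violations (heuristic, description,
# severity 0-4); the analyst merges duplicates and ranks problems by
# mean severity so 'major' issues can be fixed before user testing.
from collections import defaultdict
from statistics import mean

# (evaluator, heuristic, problem description, severity 0-4)
findings = [
    ("E1", "Visibility of system status", "No progress indicator on save", 3),
    ("E2", "Visibility of system status", "No progress indicator on save", 4),
    ("E1", "Error prevention", "Approval form allows empty course code", 4),
    ("E3", "Consistency and standards", "Two different labels for 'submit'", 2),
]

merged = defaultdict(list)
for _, heuristic, problem, severity in findings:
    merged[(heuristic, problem)].append(severity)

# Rank unique problems by mean severity, highest (most 'major') first.
for (heuristic, problem), severities in sorted(
    merged.items(), key=lambda kv: mean(kv[1]), reverse=True
):
    print(f"{mean(severities):.1f}  [{heuristic}] {problem} "
          f"(reported by {len(severities)} evaluator(s))")
```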

    Disease Surveillance Networks Initiative Global: Final Evaluation

    In August 2009, the Rockefeller Foundation commissioned an independent external evaluation of the Disease Surveillance Networks (DSN) Initiative in Asia, Africa, and globally. This report covers the results of the global component of the summative and prospective evaluation, which had the following objectives:
    [1] Assessment of performance of the DSN Initiative, focused on its relevance, effectiveness/impact, and efficiency within the context of the Foundation's initiative support.
    [2] Assessment of the DSN Initiative's underlying hypothesis: robust trans-boundary, multi-sectoral/cross-disciplinary collaborative networks lead to improved disease surveillance and response.
    [3] Assessment of the quality of Foundation management (value for money) for the DSN Initiative.
    [4] Contribute to the field of philanthropy by:
        a. Demonstrating the use of evaluations in grantmaking, learning and knowledge management; and
        b. Informing the field of development evaluation about methods and models to measure complex networks.

    Evaluation Strategy for the Re-Development of the Displays and Visitor Facilities at the Museum and Art Gallery, Kelvingrove

    No abstract available

    Using Simulation to Aid Decision Making in Managing the Usability Evaluation Process

    Context: This paper is developed in the context of Usability Engineering. More specifically, it focuses on the use of modelling and simulation to support decision-making in usability evaluation. Objective: The main goal of this paper is to present UESim, a System Dynamics simulation model that supports decisions about the make-up of the evaluation team during the usability evaluation process. Method: This research followed four main phases: a) study identification, b) study development, c) running and observation, and d) reflection. In relation to these phases, the paper describes the literature review, the building and validation of the model, the simulation runs and their results, and finally a reflection on them. Results: We developed and validated a model to simulate the usability evaluation process. Through three different simulations we analysed the effects of different compositions of the evaluation team on the outcome of the evaluation. The simulation results show the model's utility for decision-making in the usability evaluation process, by varying the number and expertise of the evaluators employed. Conclusion: One of the main advantages of using such a simulation model is that it allows developers to observe the evolution of the key indicators of the evaluation process over time. UESim is a customisable tool that supports decision-making in the management of the usability evaluation process, since it makes it possible to analyse how the key process indicators are affected by the main management options of the usability evaluation process.
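    UESim itself is not reproduced here, but the following minimal sketch shows the kind of stock-and-flow System Dynamics model the paper describes: a stock of undiscovered usability problems is drained by a discovery flow whose rate depends on the number and expertise of the evaluators. All parameters, including the sub-linear team-size effect, are illustrative assumptions rather than values from UESim.

```python
# Minimal stock-and-flow sketch in the spirit of UESim: a stock of
# undiscovered problems is drained by a discovery flow that depends on
# team size and expertise. All parameter values are invented for
# illustration, not taken from the paper.

def simulate(total_problems: int, evaluators: int, expertise: float,
             days: int, dt: float = 1.0) -> list[float]:
    """Euler integration of d(undiscovered)/dt = -rate * undiscovered.

    expertise: per-evaluator daily discovery fraction (e.g. 0.05 for a
    novice, 0.15 for an expert). The exponent 0.7 models diminishing
    returns from overlapping findings as the team grows (assumed).
    """
    undiscovered = float(total_problems)
    history = [undiscovered]
    rate = expertise * evaluators ** 0.7  # sub-linear team effect
    for _ in range(int(days / dt)):
        undiscovered -= rate * undiscovered * dt
        history.append(undiscovered)
    return history

if __name__ == "__main__":
    # Compare three team compositions over a 10-day evaluation.
    for team, skill in [(3, 0.05), (3, 0.15), (6, 0.15)]:
        found = 50 - simulate(50, team, skill, days=10)[-1]
        print(f"{team} evaluators, expertise {skill}: "
              f"{found:.1f}/50 problems found in 10 days")
```

    Running the sketch shows the pattern the paper's simulations examine: adding evaluators or expertise speeds discovery, but with diminishing returns as the pool of undiscovered problems empties, which is exactly the kind of trade-off a manager would weigh against the evaluation budget.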

    Best Practices for Evaluating Flight Deck Interfaces for Transport Category Aircraft with Particular Relevance to Issues of Attention, Awareness, and Understanding CAST SE-210 Output 2 Report 6 of 6

    Attention, awareness, and understanding on the part of the flight crew are critical contributors to safety, and the flight deck plays a critical role in supporting these cognitive functions. Changes to the flight deck need to be evaluated to determine whether the changed device adequately supports these functions. This report describes a set of diverse evaluation methods. It recommends designing the interface evaluation to span the phases of device development, from early to late, and it provides methods appropriate to each phase. It describes the various ways in which an interface or interface component can fail to support awareness, as potential issues to be assessed in evaluation, and it summarizes appropriate methods for evaluating different issues concerning inadequate support for these functions throughout the phases of development.