
    Inspection based evaluations

    Usability inspection methods (UIMs) remain an important discount method for usability evaluation. They can be applied to any designed artefact during development: a paper prototype, a storyboard, a working prototype (e.g., in Macromedia Flash™ or in Microsoft PowerPoint™), tested production software, or an installed public release. They are analytical evaluation methods that, unlike empirical methods such as user testing, involve no typical end users. UIMs require only a designed artefact and trained analysts, so evaluation is possible with low resources (hence “discount” methods). Although low resources bring risks, well-informed practices disproportionately improve analyst performance, improving cost-benefit ratios. This chapter introduces UIMs, covering six methods and one further method, and provides approaches to assessing existing, emerging and future UIMs and their effective uses.

    Falsification testing for usability inspection method assessment

    We need more reliable usability inspection methods (UIMs), but assessment of UIMs has itself been unreliable [5]. We can only reliably improve UIMs if we have more reliable assessment. When assessing UIMs, we need to code analysts’ predictions as true or false positives or negatives, or as genuinely missed problems. Defenders of UIMs often claim that false positives cannot be accurately coded, i.e., that an apparently false prediction may in fact be true but simply has never shown up through user testing or other validation approaches. We show this and similar claims to be mistaken by briefly reviewing methods for reliable coding of each of five types of prediction outcome. We focus on falsification testing, which allows confident coding of false positives.
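    As a rough illustration of the coding task described above, the sketch below assigns one of the five outcome types to a single candidate problem. The categories follow the abstract; the data structures, function and decision rule are hypothetical illustrations, not taken from the paper.

```python
from enum import Enum

class Outcome(Enum):
    TRUE_POSITIVE = "predicted problem confirmed by user testing or other validation"
    FALSE_POSITIVE = "predicted problem shown not to be a real problem (falsification testing)"
    TRUE_NEGATIVE = "candidate considered and correctly eliminated by the analyst"
    FALSE_NEGATIVE = "candidate considered but wrongly eliminated, although it is a real problem"
    GENUINE_MISS = "real problem that never surfaced as a candidate during inspection"

def code_prediction(predicted: bool, considered: bool, is_real_problem: bool) -> Outcome:
    """Hypothetical coding rule for one candidate problem.

    predicted: the analyst reported it as a usability problem.
    considered: the analyst at least discovered it as a candidate.
    is_real_problem: what validation (user testing plus falsification
    testing) eventually established.
    """
    if predicted:
        return Outcome.TRUE_POSITIVE if is_real_problem else Outcome.FALSE_POSITIVE
    if is_real_problem:
        return Outcome.FALSE_NEGATIVE if considered else Outcome.GENUINE_MISS
    return Outcome.TRUE_NEGATIVE

# Example: an analyst reported a problem that falsification testing later ruled out.
print(code_prediction(predicted=True, considered=True, is_real_problem=False))  # Outcome.FALSE_POSITIVE
```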

    Creative Worthwhile Interaction Design

    Over the last two decades, creative, agile, lean and strategic design approaches have become increasingly prevalent in the development of interactive technologies, but tensions exist with longer-established approaches such as human factors engineering and user-centered design. These tensions can be harnessed productively by first giving equal status in principle to creative, business and agile engineering practices, and then supporting this with flexible critical approaches and resources that can balance and integrate a range of multidisciplinary design practices.

    Modelling usability inspection to understand evaluator judgement and performance

    This thesis presents a model of evaluator behaviour in usability evaluations and describes an evaluator-centred approach to analytical usability evaluation assessment. Usability evaluations involving multiple evaluators often produce inconsistent results, even when the same usability evaluation method is used. This suggests that individual evaluator judgement is influenced by resources other than those provided by the usability evaluation method. This research sought to discover factors external to usability evaluation methods that influence evaluators in their decision making during usability evaluations, and thus to explain such inconsistencies between evaluators. The research involved conducting two large analytical evaluations using multiple evaluators and subjecting the results to falsification testing to validate evaluation predictions. Extended Structured Problem Report Formats were specifically designed for usability problem reporting to enable the confident coding of all possible prediction types. The problem reports required explanations for decisions made in problem discovery and analysis, thus aiding the identification of factors influencing evaluator judgement. The results show that evaluation methods provide little support in usability problem discovery and analysis; hence evaluators rely on a variety of individual knowledge resources, such as technical, design, domain and user knowledge. Such knowledge resources were not common to all evaluators. The absence, or inappropriate perception, of different knowledge resources resulted in missed or inappropriately analysed candidate usability problems. The results provide a model of evaluator behaviour that explains why some evaluators discover usability problems that others fail to discover, and how evaluators make appropriate and inappropriate decisions about candidate usability problems that result in true and false positive, and true and false negative, predictions.

    Why and when five test users aren’t enough

    Nielsen’s claim that “Five Users are Enough” [5] is based on a statistical formula [2] that makes unwarranted assumptions about individual differences in problem discovery, combined with an optimistic setting of values for a key variable. We present the initial Landauer-Nielsen formula and recent evidence that it can fail spectacularly to calculate the required number of test users for a realistic web-based test. We explain these recent results by examining the assumptions behind the formula. We then re-examine some of our own data, and find that, while the Landauer-Nielsen formula does hold, this is only the case for simple problem counts. An analysis of problem frequency and severity indicates that relying on such counts could have been highly misleading: the number of required test users almost doubles. Lastly, we identify the structure and components of a more realistic approach to estimating test user requirements.
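    For context, the Landauer-Nielsen problem-discovery formula at issue is usually stated as Found(n) = N(1 − (1 − L)^n), where N is the total number of problems and L the mean probability that a single test user finds a given problem. The minimal sketch below shows how sensitive the required number of users is to L; 0.31 is the average rate Nielsen reported, while 0.16 is an illustrative lower rate chosen here for comparison, not a figure from the paper.

```python
import math

def proportion_found(n_users: int, discovery_rate: float) -> float:
    """Expected proportion of problems found by n users: 1 - (1 - L)^n."""
    return 1.0 - (1.0 - discovery_rate) ** n_users

def users_needed(target: float, discovery_rate: float) -> int:
    """Smallest n such that proportion_found(n, discovery_rate) >= target."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - discovery_rate))

print(round(proportion_found(5, 0.31), 2))  # 0.84 -- the basis of "five users are enough"
print(round(proportion_found(5, 0.16), 2))  # 0.58 -- a lower rate leaves many problems unfound
print(users_needed(0.84, 0.16))             # 11 -- roughly double the number of test users
```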

    Understanding Inspection Methods: Lessons from an Assessment of Heuristic Evaluation

    The Heuristic Evaluation method was applied by 99 analysts working in groups to an office application’s drawing editor. The use of structured problem report formats eased the merging of analysts’ predictions and their subsequent association with a set of actual problems extracted from user test data. The user tests were based on tasks designed to ensure that all predicted problems would be thoroughly addressed. Analysis of accurate and inaccurate predictions supported the derivation of the DR-AR model of usability inspection method effectiveness. The model distinguishes between the discovery of candidate (possible) problems and their subsequent confirmation (or elimination) as probable problems. We confirm previous findings that heuristics do not support the discovery of possible usability problems. Our results also show that, where heuristics were used appropriately, it was mostly to confirm possible problems that turned out to have low impact or frequency; otherwise, heuristics were used inappropriately in ways that could lead to poor design changes. Heuristics are also very poor at eliminating improbable problems (65% of all predictions were false), and thus mostly confirm false predictions incorrectly. Overall, heuristics are a poor analyst resource for confirming probable problems and eliminating improbable ones. Analysis of false predictions reveals that more effective analyst resources are knowledge of users, tasks, interaction, application domains, the application itself, and design knowledge from HCI. Using the DR-AR model, we derive a strategy for UIM improvement.
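    The two stages of the DR-AR model can be pictured as a simple pipeline: discovery resources generate candidate problems, and analysis resources then confirm or eliminate each candidate. The sketch below is a hypothetical illustration of that separation only; the function and parameter names are not from the paper.

```python
from typing import Callable, Iterable, List

Candidate = str  # a possible usability problem noticed during inspection

def inspect(artefact: object,
            discover: Callable[[object], Iterable[Candidate]],
            confirm: Callable[[Candidate], bool]) -> List[Candidate]:
    """Two-phase inspection in the spirit of the DR-AR model.

    discover: driven by discovery resources (knowledge of users, tasks,
              domains, the application, design knowledge from HCI).
    confirm:  driven by analysis resources (e.g., heuristics) that keep
              probable problems and eliminate improbable ones.
    """
    candidates = list(discover(artefact))         # discovery phase: generate possible problems
    return [c for c in candidates if confirm(c)]  # analysis phase: confirm or eliminate each
```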

    Testing a conjecture based on the DR-AR Model of UIM effectiveness


    Reconditioned Merchandise: Extended Structured Report Formats in Usability Inspection

    Structured Problem Report Formats have been key to improving the assessment of usability methods. Once extended to record analysts' rationales, they not only reveal analyst behaviour but also change it. We report on two versions of an Extended Structured Report Format for usability problems, briefly noting their impact on analyst behaviour, but more extensively presenting insights into decision making during usability inspection, thus validating and refining a model of evaluation performance.