    Falsification testing for usability inspection method assessment

    We need more reliable usability inspection methods (UIMs), but assessment of UIMs has been unreliable [5]. We can only reliably improve UIMs if we have more reliable assessment. When assessing UIMs, we need to code analysts’ predictions as true or false positives or negatives, or as genuinely missed problems. Defenders of UIMs often claim that false positives cannot be accurately coded, i.e., that a prediction coded as false may in fact be true but simply never have shown up in user testing or other validation approaches. We show this and similar claims to be mistaken by briefly reviewing methods for reliable coding of each of five types of prediction outcome. We focus on falsification testing, which allows confident coding of false positives.
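    The coding scheme the abstract describes can be sketched by comparing an analyst's predicted problems against the problems observed in validation. This is a minimal illustration, not the paper's procedure; the set names and the restriction to three of the five outcome types (true positives, false positives, and genuinely missed problems) are assumptions made for brevity.

    ```python
    # Hypothetical sketch of coding analyst predictions against observed
    # usability problems. True/false negatives would additionally require
    # a universe of candidate problems, omitted here for simplicity.

    def code_outcomes(predicted, observed):
        """Count prediction outcomes given two sets of problem identifiers.

        predicted: problems the analyst predicted would occur
        observed:  problems actually confirmed in user testing
        """
        return {
            "true_positive": len(predicted & observed),   # predicted and confirmed
            "false_positive": len(predicted - observed),  # predicted, never observed
            "missed": len(observed - predicted),          # genuinely missed problems
        }

    counts = code_outcomes({"p1", "p2", "p3"}, {"p2", "p3", "p4"})
    print(counts)  # {'true_positive': 2, 'false_positive': 1, 'missed': 1}
    ```

    The point of falsification testing, on this view, is to justify moving a prediction into the "false_positive" bin with confidence, rather than leaving it indefinitely unconfirmed.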

    Cooperation between expert knowledge and data mining discovered knowledge: Lessons learned

    Expert systems are built from knowledge traditionally elicited from the human expert. It is precisely knowledge elicitation from the expert that is the bottleneck in expert system construction. On the other hand, a data mining system, which automatically extracts knowledge, needs expert guidance on the successive decisions to be made in each of the system phases. In this context, expert knowledge and data mining discovered knowledge can cooperate, maximizing their individual capabilities: data mining discovered knowledge can be used as a complementary source of knowledge for the expert system, whereas expert knowledge can be used to guide the data mining process. This article summarizes different examples of systems where there is cooperation between expert knowledge and data mining discovered knowledge and reports our experience of such cooperation, gathered from a medical diagnosis project called Intelligent Interpretation of Isokinetics Data, which we developed. From that experience, a series of lessons were learned throughout project development. Some of these lessons are generally applicable and others pertain exclusively to certain project types.

    Error by design: Methods for predicting device usability

    This paper introduces the idea of predicting ‘designer error’ by evaluating devices using Human Error Identification (HEI) techniques. This is demonstrated using the Systematic Human Error Reduction and Prediction Approach (SHERPA) and Task Analysis For Error Identification (TAFEI) to evaluate a vending machine. Appraisal criteria which rely upon user opinion, face validity and utilisation are questioned. Instead a quantitative approach, based upon signal detection theory, is recommended. The performance of people using SHERPA and TAFEI is compared with heuristic judgement and with each other. The results of these studies show that both SHERPA and TAFEI are better at predicting errors than the heuristic technique. The performance of SHERPA and TAFEI is comparable, giving some confidence in the use of these approaches. It is suggested that using HEI techniques as part of the design and evaluation process could help to make devices easier to use.
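    The signal-detection appraisal the abstract recommends scores a method by how well it separates real errors (hits) from spurious predictions (false alarms), commonly summarised by the sensitivity index d′ = Z(hit rate) − Z(false-alarm rate). The function below is a hedged sketch of that computation; the counts passed in are invented for illustration and are not the paper's data.

    ```python
    # Sketch of a signal-detection sensitivity score (d') for an error
    # prediction method. Counts are illustrative, not from the paper.
    from statistics import NormalDist

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """d' = Z(hit rate) - Z(false-alarm rate); higher means better
        discrimination between real and spurious error predictions."""
        hit_rate = hits / (hits + misses)
        fa_rate = false_alarms / (false_alarms + correct_rejections)
        z = NormalDist().inv_cdf  # inverse of the standard normal CDF
        return z(hit_rate) - z(fa_rate)

    # A method that catches 18 of 20 real errors with 3 of 20 false alarms:
    print(round(d_prime(18, 2, 3, 17), 2))  # → 2.32
    ```

    A purely opinion-based appraisal has no analogue of this score, which is why the abstract argues for the quantitative approach.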

    Scoping analytical usability evaluation methods: A case study

    Analytical usability evaluation methods (UEMs) can complement empirical evaluation of systems: for example, they can often be used earlier in design and can provide accounts of why users might experience difficulties, as well as what those difficulties are. However, their properties and value are only partially understood. One way to improve our understanding is by detailed comparisons using a single interface or system as a target for evaluation, but we need to look deeper than simple problem counts: we need to consider what kinds of accounts each UEM offers, and why. Here, we report on a detailed comparison of eight analytical UEMs. These eight methods were applied to a robotic arm interface, and the findings were systematically compared against video data of the arm in use. The usability issues that were identified could be grouped into five categories: system design, user misconceptions, conceptual fit between user and system, physical issues, and contextual ones. Other possible categories, such as user experience, did not emerge in this particular study. With the exception of Heuristic Evaluation, which supported a range of insights, each analytical method was found to focus attention on just one or two categories of issues. Two of the three "home-grown" methods (Evaluating Multimodal Usability and Concept-based Analysis of Surface and Structural Misfits) were found to occupy particular niches in the space, whereas the third (Programmable User Modeling) did not. This approach has identified commonalities and contrasts between methods and provided accounts of why a particular method yielded the insights it did. Rather than considering measures such as problem count or thoroughness, this approach has yielded insights into the scope of each method.

    Does Empirical Embeddedness Matter? Methodological Issues on Agent-Based Models for Analytical Social Science

    The paper deals with the use of empirical data in social science agent-based models. Agent-based models are too often viewed just as highly abstract thought experiments conducted in artificial worlds, in which the purpose is to generate and not to test theoretical hypotheses in an empirical way. On the contrary, they should be viewed as models that need to be embedded into empirical data both to allow the calibration and the validation of their findings. As a consequence, the search for strategies to find and extract data from reality, and to integrate agent-based models with other traditional empirical social science methods, such as qualitative, quantitative, experimental and participatory methods, becomes a fundamental step of the modelling process. The paper argues that the characteristics of the empirical target matter. According to the characteristics of the target, ABMs can be differentiated into case-based models, typifications and theoretical abstractions. These differences pose different challenges for empirical data gathering, and imply the use of different validation strategies.
    Keywords: Agent-Based Models, Empirical Calibration and Validation, Taxonomy of Models