
    Capturing the distinction between task and device errors in a formal model of user behaviour

    In any complex interactive human-computer system, people are likely to make errors during its operation. In this paper, we describe a validation study of an existing generic model of user behaviour. The study is based on the data and conclusions from an independent prior experiment. We show that the current model successfully captures the key concepts investigated in the experiment, particularly the distinction between task and device-specific errors. However, we also highlight some apparent weaknesses in the current model with respect to initialisation errors, based on comparison with previously unpublished (and more detailed) data from the experiment. The differences between the data and the observed model behaviour suggest the need for new empirical research to determine what additional factors are at work. We also discuss the potential use of formal models of user behaviour both in informing, and in generating further hypotheses about, the causes of human error.

    On formalising interactive number entry on infusion pumps

    We define the predictability of a user interface as the property that an idealised user can predict with sufficient certainty the effect of any action in a given state of a system, where state information is inferred from the perceptible output of the system. In our definition, the user is not required to have full knowledge of the history of actions from an initial state to the current state. Typically such definitions rely on cognitive and knowledge assumptions; in this paper we explore the notion in the situation where the user is an idealised expert who understands perfectly how the device works. In this situation predictability concerns whether the user can tell what state the device is in and accurately predict the consequences of an action from that state simply by looking at the device; normal human users can certainly do no better. We give a formal definition of predictability in higher-order logic and explore how real systems can be verified against the property. We specify two real number entry interfaces in the healthcare domain (drug infusion pumps) as case studies of predictable and unpredictable user interfaces. We analyse the specifications with respect to our formal definition of predictability and thus show how to make unpredictable systems predictable.
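    As a rough, hedged sketch of the kind of property involved (the names obs, step, s and a are assumptions for illustration, not the paper's formalisation), predictability can be read as: whenever two device states are perceptibly indistinguishable, every action must have a perceptibly indistinguishable effect on them:

    \[
    \mathit{predictable}(\mathit{obs}, \mathit{step}) \;\triangleq\;
    \forall s_1\, s_2\, a.\;
      \mathit{obs}(s_1) = \mathit{obs}(s_2) \;\Rightarrow\;
      \mathit{obs}(\mathit{step}(s_1, a)) = \mathit{obs}(\mathit{step}(s_2, a))
    \]

    Under this reading, an interface is unpredictable when it keeps hidden state that changes what an action does without that state being visible in the output.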

    TkWinHOL: A Tool for Doing Window Inference in HOL

    Window inference is a method for contextual rewriting and refinement, supported by the HOL Window Inference Library. This paper describes a user-friendly interface for window inference. The interface permits the user to select subexpressions by pointing and clicking and to select transformations from menus. The correctness of each transformation step is proved automatically by the HOL system. The interface can be tailored to particular user-defined theories. One such extension, for program refinement, is described.
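    As a hedged illustration of the underlying idea (not TkWinHOL's actual interface or the library's exact rule format), a window inference step transforms a focused subterm under assumptions collected from its context, and closing the window lifts the result back to the whole expression:

    \[
    \frac{\Gamma \;\vdash\; e \;\sqsupseteq\; e'}{\vdash\; C[e] \;\sqsupseteq\; C[e']}
    \]

    Here \(C[\cdot]\) is the surrounding expression, \(\Gamma\) the contextual assumptions valid at the focus, and \(\sqsupseteq\) some preorder such as equality, implication, or program refinement; the side condition that \(C\) preserves \(\sqsupseteq\) is left implicit in this sketch.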

    Formal modelling of cognitive interpretation

    We formally specify the interpretation stage in a dual-state-space human-computer interaction cycle. This is done by extending and reorganising our previous cognitive architecture. In particular, we focus on shape-related aspects of the interpretation process associated with device input prompts. A cash-point example illustrates our approach. Using the SAL model checking environment, we show how the extended cognitive architecture facilitates detection of prompt-shape-induced human error.
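    As an illustrative sketch only (the state variables below are assumptions, not taken from the paper), the kind of check a model checker such as SAL can perform here is a safety property stating that the user's interpretation of an input prompt always matches what the device actually expects; a counterexample trace then exhibits a prompt-shape-induced error:

    \[
    \mathbf{G}\,\big(\mathit{prompt\_active} \;\Rightarrow\; \mathit{user\_interpretation} = \mathit{device\_expectation}\big)
    \]

    where \(\mathbf{G}\) is the "always" operator of linear temporal logic.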