34 research outputs found

    Scoping analytical usability evaluation methods: A case study

    Get PDF
    Analytical usability evaluation methods (UEMs) can complement empirical evaluation of systems: for example, they can often be used earlier in design and can provide accounts of why users might experience difficulties, as well as what those difficulties are. However, their properties and value are only partially understood. One way to improve our understanding is by detailed comparisons using a single interface or system as a target for evaluation, but we need to look deeper than simple problem counts: we need to consider what kinds of accounts each UEM offers, and why. Here, we report on a detailed comparison of eight analytical UEMs. These eight methods were applied to a robotic arm interface, and the findings were systematically compared against video data of the arm in use. The usability issues that were identified could be grouped into five categories: system design, user misconceptions, conceptual fit between user and system, physical issues, and contextual ones. Other possible categories, such as user experience, did not emerge in this particular study. With the exception of Heuristic Evaluation, which supported a range of insights, each analytical method was found to focus attention on just one or two categories of issues. Two of the three "home-grown" methods (Evaluating Multimodal Usability and Concept-based Analysis of Surface and Structural Misfits) were found to occupy particular niches in the space, whereas the third (Programmable User Modeling) did not. This approach has identified commonalities and contrasts between methods and provided accounts of why a particular method yielded the insights it did. Rather than considering measures such as problem count or thoroughness, this approach has yielded insights into the scope of each method.

    Neither Grasshopper nor Ant: learning from coding for fun and from gaming [WIP]

    Get PDF

    Visual scanning as a reference framework for interactive representation design

    Get PDF
    When designing a representation, the designer implicitly formulates a sequence of visual tasks required to understand and use the representation effectively. This paper aims at making this sequence of visual tasks explicit in order to help designers elicit their design choices. In particular, we present a set of concepts to systematically analyse what a user must theoretically do to decipher representations. The analysis consists of a decomposition of the activity of scanning into elementary visualization operations. We show how the analysis applies to various existing representations, and how expected benefits can be expressed in terms of elementary operations. The set of elementary operations forms the basis of a shared language for representation designers. The decomposition highlights the challenges encountered by a user when deciphering a representation and helps designers to exhibit possible flaws in their design, justify their choices, and compare designs. We also show that interaction with a representation can be considered as facilitating the performance of the elementary operations.
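
    To illustrate the idea of decomposing scanning into elementary visualization operations, here is a minimal Python sketch. The operation names and the unit-cost model are hypothetical placeholders chosen for illustration, not the paper's actual vocabulary or analysis:

    from dataclasses import dataclass
    from enum import Enum, auto

    class Op(Enum):
        # Hypothetical elementary visualization operations (placeholder
        # names, not the paper's terms): find a mark, tell it apart from
        # its neighbours, read its value, and move attention elsewhere.
        SEEK = auto()
        DISCRIMINATE = auto()
        READ = auto()
        SHIFT = auto()

    @dataclass
    class VisualTask:
        """A visual task expressed as a sequence of elementary operations."""
        name: str
        ops: list[Op]

        def cost(self) -> int:
            # Crude proxy for scanning effort: one unit per operation.
            return len(self.ops)

    # Comparing two designs by the operations a reader must perform to
    # answer the same question (hypothetical example): a colour legend
    # forces an extra round trip that a direct label avoids.
    legend_lookup = VisualTask("read value via legend",
                               [Op.SEEK, Op.DISCRIMINATE, Op.SHIFT, Op.SEEK, Op.READ])
    direct_label = VisualTask("read value from direct label",
                              [Op.SEEK, Op.READ])
    assert direct_label.cost() < legend_lookup.cost()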

    Designing Engaging Learning Experiences in Programming

    Get PDF
    In this paper we describe work to investigate the creation of engaging programming learning experiences. Background research informed the design of four fieldwork studies exploring how programming tasks could be framed to motivate learners. Our empirical findings from these four field studies are summarized here, with a particular focus upon one, Whack a Mole, which compared the use of a physical interface with a screen-based equivalent to obtain insights into what made for an engaging learning experience. Emotions reported by two sets of participating undergraduate students were analyzed, identifying the links between the emotions experienced during programming and their origins. Evidence was collected of the very positive emotions experienced by learners programming with a physical interface (Arduino) in comparison with a similar program developed using a screen-based equivalent interface. A follow-up study provided further evidence of the motivating effect of personalized design when programming tangible physical artefacts. Collating all the evidence led to the design of a set of ‘Learning Dimensions’ which may provide educators with insights to support key design decisions in the creation of engaging programming learning experiences.

    Classification of Polarimetric SAR Data Using Dictionary Learning

    Get PDF
    End-user development (EUD) research has yielded a variety of novel environments and techniques, often accompanied by lab-based usability studies that test their effectiveness in the completion of representative real-world tasks. While lab studies play an important role in resolving frustrations and demonstrating the potential of novel tools, they are insufficient to accurately determine the acceptance of a technology in its intended context of use, which is highly dependent on the diverse and dynamic requirements of its users, as we show here. As such, usability in the lab is unlikely to represent usability in the field. To demonstrate this, we first describe the results of a think-aloud usability study of our EUD tool "Jeeves", followed by two case studies where Jeeves was used by psychologists in their work practices. Common issues in the artificial setting were seldom encountered in the real context of use, which instead unearthed new usability issues through unanticipated user needs. We conclude with considerations for usability evaluation of EUD tools that enable development of software for other users, including planning for collaborative activities, supporting developers to evaluate their own tools, and incorporating longitudinal methods of evaluation.

    Syntactic Complexity Metrics and the Readability of Programs in a Functional Computer Language

    Get PDF
    This article reports on the definition and measurement of the Halstead and McCabe software complexity metrics for programs written in the functional programming language Miranda. An automated measurement of these metrics is described. In a case study, the correlation between the complexity metrics and expert assessments of the readability of Miranda programs is established and compared with that for programs in Pascal.
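
    For reference, both metric families reduce to simple formulas once a program's tokens and decisions have been counted. Below is a minimal Python sketch of the standard definitions; it is not the paper's Miranda tooling, and the example counts are invented:

    import math

    def halstead_volume(n1: int, n2: int, N1: int, N2: int) -> float:
        """Halstead volume V = N * log2(n), where N = N1 + N2 is the
        program length and n = n1 + n2 the vocabulary size.
        n1/n2: distinct operators/operands; N1/N2: total occurrences."""
        return (N1 + N2) * math.log2(n1 + n2)

    def halstead_difficulty(n1: int, n2: int, N2: int) -> float:
        """Halstead difficulty estimate D = (n1 / 2) * (N2 / n2)."""
        return (n1 / 2) * (N2 / n2)

    def mccabe_complexity(decision_points: int) -> int:
        """Cyclomatic complexity of a single-entry, single-exit program:
        the number of binary branching decisions plus one."""
        return decision_points + 1

    # Hypothetical counts for a small program: 8 distinct operators,
    # 10 distinct operands, 25 and 30 total occurrences, 4 decisions.
    print(halstead_volume(8, 10, 25, 30))    # ~229.3
    print(halstead_difficulty(8, 10, 30))    # 12.0
    print(mccabe_complexity(4))              # 5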

    Diagramming — an introduction

    No full text