
    Methodological development

    Book description: Human-Computer Interaction draws on the fields of computer science, psychology, cognitive science, and organisational and social sciences in order to understand how people use and experience interactive technology. Until now, researchers have been forced to return to the individual subjects to learn about research methods and how to adapt them to the particular challenges of HCI. This is the first book to provide a single resource through which a range of commonly used research methods in HCI are introduced. Chapters are authored by internationally leading HCI researchers who use examples from their own work to illustrate how the methods apply in an HCI context. Each chapter also contains key references to help researchers find out more about each method as it has been used in HCI. Topics covered include experimental design, the use of eye tracking, qualitative research methods, cognitive modelling, how to develop new methodologies, and writing up your research.

    Toward a document evaluation methodology: What does research tell us about the validity and reliability of evaluation methods?

    Although the usefulness of evaluating documents has become generally accepted among communication professionals, the supporting research that puts evaluation practices empirically to the test is only beginning to emerge. This article presents an overview of the available research on troubleshooting evaluation methods. Four lines of research are distinguished, concerning the validity of evaluation methods, sample composition, sample size, and the implementation of evaluation results during revision.

    Investigating heuristic evaluation as a methodology for evaluating pedagogical software: An analysis employing three case studies

    This paper looks specifically at how to develop lightweight methods of evaluating pedagogically motivated software. Whilst we value traditional usability testing methods, this paper looks at how Heuristic Evaluation can be used both as a driving force of iterative refinement during software engineering and as an end-of-project evaluation. We present three case studies in the area of pedagogical software and show how we have used this technique in a variety of ways. The paper presents results and reflections on what we have learned. We conclude with a discussion of how this technique might inform the latest developments in the delivery of distance learning. © 2014 Springer International Publishing

    Exploring the Usability of Municipal Web Sites: A Comparison Based on Expert Evaluation Results from Four Case Studies

    The usability of public administration web sites is a key quality attribute for the successful implementation of the Information Society. Formative usability evaluation aims at finding and reporting usability problems as early as possible in the development process. The objective of this paper is to present and comparatively analyze the results of an expert usability evaluation of four municipal web sites. In order to document usability problems, an extended set of heuristics was used that is based on two sources: usability heuristics and ergonomic criteria. The explanatory power of the heuristics was supplemented with a set of usability guidelines. The evaluation results revealed that a set of specific tasks with clearly defined goals helps to identify many severe usability problems that occur frequently in municipal web sites. A typical issue for this category of web sites is the lack of information support for the user.
    Keywords: Formative Usability Evaluation, User Testing, Expert Evaluation, Heuristic Evaluation, Ergonomic Criteria, Usability Problem, Municipal Web Sites

    A Load of Cobbler’s Children: Beyond the Model Designing Processor

    HCI has developed rich understandings of people at work and at play with technology: most people, that is, except designers, who remain locked in the information processing paradigm of first-wave HCI. Design methods are validated as if they were computer programs, expected to produce the same results on a range of architectures and hardware. Unfortunately, designers are people, and thus interfere substantially (generally to good effect) with the ‘code’ of design methods. We need to rethink the evaluation and design of design and evaluation methods in HCI. A logocentric proposal based on resource function vocabularies is presented.

    Scoping analytical usability evaluation methods: A case study

    Analytical usability evaluation methods (UEMs) can complement empirical evaluation of systems: for example, they can often be used earlier in design and can provide accounts of why users might experience difficulties, as well as what those difficulties are. However, their properties and value are only partially understood. One way to improve our understanding is by detailed comparisons using a single interface or system as a target for evaluation, but we need to look deeper than simple problem counts: we need to consider what kinds of accounts each UEM offers, and why. Here, we report on a detailed comparison of eight analytical UEMs. These eight methods were applied to a robotic arm interface, and the findings were systematically compared against video data of the arm in use. The usability issues that were identified could be grouped into five categories: system design, user misconceptions, conceptual fit between user and system, physical issues, and contextual ones. Other possible categories, such as user experience, did not emerge in this particular study. With the exception of Heuristic Evaluation, which supported a range of insights, each analytical method was found to focus attention on just one or two categories of issues. Two of the three "home-grown" methods (Evaluating Multimodal Usability and Concept-based Analysis of Surface and Structural Misfits) were found to occupy particular niches in the space, whereas the third (Programmable User Modeling) did not. This approach has identified commonalities and contrasts between methods and provided accounts of why a particular method yielded the insights it did. Rather than considering measures such as problem count or thoroughness, this approach has yielded insights into the scope of each method.