18,954 research outputs found

    Usability Evaluation in Virtual Environments: Classification and Comparison of Methods

    Virtual environments (VEs) are a relatively new type of human-computer interface in which users perceive and act in a three-dimensional world. The designers of such systems cannot rely solely on design guidelines for traditional two-dimensional interfaces, so usability evaluation is crucial for VEs. We present an overview of VE usability evaluation. First, we discuss some of the issues that differentiate VE usability evaluation from evaluation of traditional user interfaces such as GUIs. We also present a review of VE evaluation methods currently in use, and discuss a simple classification space for VE usability evaluation methods. This classification space provides a structured means for comparing evaluation methods according to three key characteristics: involvement of representative users, context of evaluation, and types of results produced. To illustrate these concepts, we compare two existing evaluation approaches: testbed evaluation [Bowman, Johnson, & Hodges, 1999] and sequential evaluation [Gabbard, Hix, & Swan, 1999]. We conclude by presenting novel ways to effectively link these two approaches to VE usability evaluation.
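    The three characteristics of the classification space lend themselves to a simple data model. The sketch below (TypeScript) shows one way to encode a method's position in the space and compare two methods axis by axis; all type and field names, and the example placements of the two approaches, are illustrative assumptions rather than the paper's own vocabulary.

```typescript
// Illustrative encoding of the three-axis classification space described above.
// Field names and example values are assumptions for this sketch, not the paper's terms.

type UserInvolvement = "representative users" | "usability experts" | "none";
type EvaluationContext = "generic" | "application-specific";
type ResultType = "quantitative" | "qualitative" | "both";

interface VEEvaluationMethod {
  name: string;
  involvement: UserInvolvement;
  context: EvaluationContext;
  results: ResultType;
}

// Hypothetical placements of the two approaches compared in the paper.
const testbed: VEEvaluationMethod = {
  name: "Testbed evaluation (Bowman, Johnson, & Hodges, 1999)",
  involvement: "representative users",
  context: "generic",
  results: "quantitative",
};

const sequential: VEEvaluationMethod = {
  name: "Sequential evaluation (Gabbard, Hix, & Swan, 1999)",
  involvement: "representative users",
  context: "application-specific",
  results: "both",
};

// Axes on which two methods agree suggest points where the approaches could be linked.
function sharedAxes(a: VEEvaluationMethod, b: VEEvaluationMethod): string[] {
  const shared: string[] = [];
  if (a.involvement === b.involvement) shared.push("involvement");
  if (a.context === b.context) shared.push("context");
  if (a.results === b.results) shared.push("results");
  return shared;
}

console.log(sharedAxes(testbed, sequential)); // e.g. ["involvement"]
```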

    A Comparison of Quantitative and Qualitative Data from a Formative Usability Evaluation of an Augmented Reality Learning Scenario

    The proliferation of augmented reality (AR) technologies creates opportunities for the development of new learning scenarios. More recently, advances in the design and implementation of desktop AR systems have made it possible to deploy such scenarios in primary and secondary schools. Usability evaluation is a precondition for the pedagogical effectiveness of these new technologies and requires a systematic approach to finding and fixing usability problems. In this paper we present an approach to formative usability evaluation based on heuristic evaluation and user testing. The basic idea is to compare and integrate quantitative and qualitative measures in order to increase confidence in the results and enhance the descriptive power of the usability evaluation report.
    Keywords: augmented reality, multimodal interaction, e-learning, formative usability evaluation, user testing, heuristic evaluation
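    The idea of cross-checking heuristic-evaluation findings against user-testing data can be pictured with a small aggregation sketch (TypeScript). The record structures, field names, and the shared problem identifier below are assumptions made for illustration, not the authors' actual coding scheme.

```typescript
// Minimal sketch: cross-tabulate problems reported by heuristic evaluation (qualitative)
// with problems observed in user testing (quantitative), matched on a shared problem id.
// All structures here are illustrative assumptions, not the study's instrument.

interface HeuristicFinding {
  problemId: string;
  heuristic: string;      // e.g. "visibility of system status"
  severity: number;       // expert rating, e.g. 0-4
}

interface UserTestObservation {
  problemId: string;
  participants: number;   // how many test users hit the problem
  meanTaskTimeSec: number;
}

function triangulate(
  heuristics: HeuristicFinding[],
  observations: UserTestObservation[],
): { confirmed: string[]; expertOnly: string[]; userOnly: string[] } {
  const expertIds = new Set(heuristics.map((h) => h.problemId));
  const userIds = new Set(observations.map((o) => o.problemId));
  return {
    // Problems found by both methods: highest confidence, report first.
    confirmed: [...expertIds].filter((id) => userIds.has(id)),
    // Found only by experts: candidate false positives, or issues users worked around.
    expertOnly: [...expertIds].filter((id) => !userIds.has(id)),
    // Found only in testing: gaps in the heuristic set.
    userOnly: [...userIds].filter((id) => !expertIds.has(id)),
  };
}
```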

    Public Web Mapping: preliminary usability evaluation

    April 5 - 7, 200

    Usability evaluation of digital libraries: a tutorial

    This one-day tutorial is an introduction to usability evaluation for digital libraries. In particular, we will introduce Claims Analysis. This approach focuses on the designers’ motivations and reasons for making particular design decisions and examines the effect on the user’s interaction with the system. The general approach, as presented by Carroll and Rosson (1992), has been tailored specifically to the design of digital libraries. Digital libraries are notoriously difficult to design well in terms of their eventual usability. In this tutorial, we will present an overview of usability issues and techniques for digital libraries, and a more detailed account of Claims Analysis, including two supporting techniques: simple cognitive analysis based on Norman’s ‘action cycle’, and scenarios and personas. Through a graduated series of worked examples, participants will get hands-on experience of applying this approach to developing more usable digital libraries. This tutorial assumes no prior knowledge of usability evaluation and is aimed at all those involved in the development and deployment of digital libraries.

    Usability evaluation of a virtual museum interface

    The Augmented Representation of Cultural Objects (ARCO) system provides software and interface tools that let museum curators develop virtual museum exhibitions, as well as a virtual environment for museum visitors over the World Wide Web or in information kiosks. The main purpose of the system is to offer an enhanced educational and entertaining experience to virtual museum visitors. To assess the usability of the system, two approaches were employed: a questionnaire-based survey and a Cognitive Walkthrough session. Both approaches relied on expert evaluators, such as domain experts and usability experts. The results of this study show that the approach performed reasonably well with respect to the time, financial, and other resources consumed, as a large number of usability problems were uncovered and many aspects of the system were investigated. The knowledge gathered is intended to support a conceptual framework for diagnosing usability problems in systems in the area of Virtual Cultural Heritage.

    Moving Usability Testing onto the Web

    In order to remotely obtain detailed usability data by tracking user behavior within a given web site, a server-based usability testing environment has been created. Web pages are annotated in such a way that arbitrary user actions (such as "mouse over link" or "click back button") can be selected for logging. In addition, the system allows the experiment designer to interleave interactive questions into the usability evaluation, which could, for instance, be triggered by a particular sequence of actions. The system works in conjunction with clustering and visualization algorithms that can be applied to the resulting log file data. A first version of the system has been used successfully to carry out a web usability evaluation.
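    A browser-side sketch of the kind of instrumentation this abstract describes is given below (TypeScript). The data attribute name, logging endpoint, trigger sequence, and question text are assumptions made for illustration, not the actual system's API.

```typescript
// Illustrative client-side logger: elements annotated with a data attribute are logged,
// and a follow-up question is triggered after a particular sequence of actions.
// The attribute name, endpoint, and trigger sequence are assumptions, not the paper's system.

type LoggedEvent = { action: string; target: string; timestamp: number };

const log: LoggedEvent[] = [];

function record(action: string, target: string): void {
  const event: LoggedEvent = { action, target, timestamp: Date.now() };
  log.push(event);
  // Ship each event to the experiment server (fire-and-forget).
  void fetch("/usability-log", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
  maybeAskQuestion();
}

// Only elements the experiment designer has annotated are logged.
document.querySelectorAll<HTMLElement>("[data-log-id]").forEach((el) => {
  el.addEventListener("mouseover", () => record("mouse over", el.dataset.logId!));
  el.addEventListener("click", () => record("click", el.dataset.logId!));
});

// Interleave an interactive question when a specific (assumed) action sequence is observed.
function maybeAskQuestion(): void {
  const lastTwo = log.slice(-2).map((e) => `${e.action}:${e.target}`);
  if (lastTwo.join(" -> ") === "click:search-link -> click:back-button") {
    const answer = window.prompt("Did the search results match what you expected?");
    if (answer !== null) {
      record("question-answer", answer);
    }
  }
}
```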

    Exploring the Usability of Municipal Web Sites: A Comparison Based on Expert Evaluation Results from Four Case Studies

    The usability of public administration web sites is a key quality attribute for the successful implementation of the Information Society. Formative usability evaluation aims at finding and reporting usability problems as early as possible in the development process. The objective of this paper is to present and comparatively analyze the results of an expert usability evaluation of four municipal web sites. In order to document usability problems, an extended set of heuristics was used, drawing on two sources: usability heuristics and ergonomic criteria. The explanatory power of the heuristics was supplemented with a set of usability guidelines. The evaluation results revealed that a set of specific tasks with clearly defined goals helps to identify many severe usability problems that occur frequently in municipal web sites. A typical issue for this category of web sites is the lack of information support for the user.
    Keywords: Formative Usability Evaluation, User Testing, Expert Evaluation, Heuristic Evaluation, Ergonomic Criteria, Usability Problem, Municipal Web Sites
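    One way to make the kind of cross-site comparison described above concrete is a small problem-record structure plus a frequency tally (TypeScript). The field names and severity scale below are illustrative assumptions, not the paper's actual reporting format.

```typescript
// Sketch of a usability problem record for an expert (heuristic) evaluation,
// and a tally of how often each heuristic is violated across the evaluated sites.
// Field names and the severity scale are assumptions for illustration.

interface UsabilityProblem {
  site: string;                // which municipal web site
  task: string;                // the task with a clearly defined goal being attempted
  heuristic: string;           // violated heuristic or ergonomic criterion
  guideline?: string;          // supporting usability guideline, if one applies
  severity: 1 | 2 | 3 | 4;     // e.g. 1 = cosmetic ... 4 = catastrophic
  description: string;
}

// Count how many documented problems each heuristic accounts for, across all sites,
// to see which issues recur in this category of web sites.
function problemsPerHeuristic(problems: UsabilityProblem[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const p of problems) {
    counts.set(p.heuristic, (counts.get(p.heuristic) ?? 0) + 1);
  }
  return counts;
}
```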

    Scoping analytical usability evaluation methods: A case study

    Analytical usability evaluation methods (UEMs) can complement empirical evaluation of systems: for example, they can often be used earlier in design and can provide accounts of why users might experience difficulties, as well as what those difficulties are. However, their properties and value are only partially understood. One way to improve our understanding is through detailed comparisons using a single interface or system as a target for evaluation, but we need to look deeper than simple problem counts: we need to consider what kinds of accounts each UEM offers, and why. Here, we report on a detailed comparison of eight analytical UEMs. These eight methods were applied to a robotic arm interface, and the findings were systematically compared against video data of the arm in use. The usability issues that were identified could be grouped into five categories: system design, user misconceptions, conceptual fit between user and system, physical issues, and contextual issues. Other possible categories, such as user experience, did not emerge in this particular study. With the exception of Heuristic Evaluation, which supported a range of insights, each analytical method was found to focus attention on just one or two categories of issues. Two of the three "home-grown" methods (Evaluating Multimodal Usability and Concept-based Analysis of Surface and Structural Misfits) were found to occupy particular niches in the space, whereas the third (Programmable User Modeling) did not. This approach has identified commonalities and contrasts between methods and provided accounts of why a particular method yielded the insights it did. Rather than considering measures such as problem count or thoroughness, this approach has yielded insights into the scope of each method.
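    The scope comparison in this study can be pictured as a method-by-category coverage matrix. The sketch below (TypeScript) only illustrates the shape of such a tally: the category labels follow the abstract, but the per-method entries are placeholders, not the study's actual findings.

```typescript
// Sketch of a coverage matrix: which categories of usability issue each analytical
// method surfaced. Category names come from the abstract; the entries below are
// placeholders, not the study's reported results.

type IssueCategory =
  | "system design"
  | "user misconceptions"
  | "conceptual fit"
  | "physical issues"
  | "contextual issues";

type CoverageMatrix = Record<string, Set<IssueCategory>>;

const coverage: CoverageMatrix = {
  // The abstract reports that Heuristic Evaluation supported a range of insights
  // (the specific categories listed here are placeholders) ...
  "Heuristic Evaluation": new Set<IssueCategory>(["system design", "user misconceptions", "physical issues"]),
  // ... while each other method focused on one or two categories (placeholder entry).
  "Method A (placeholder)": new Set<IssueCategory>(["conceptual fit"]),
};

// Breadth of each method = number of distinct issue categories it surfaced.
function breadth(matrix: CoverageMatrix): Record<string, number> {
  const result: Record<string, number> = {};
  for (const [method, categories] of Object.entries(matrix)) {
    result[method] = categories.size;
  }
  return result;
}
```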