
    Scoping analytical usability evaluation methods: A case study

    Analytical usability evaluation methods (UEMs) can complement empirical evaluation of systems: for example, they can often be used earlier in design and can provide accounts of why users might experience difficulties, as well as what those difficulties are. However, their properties and value are only partially understood. One way to improve our understanding is by detailed comparisons using a single interface or system as a target for evaluation, but we need to look deeper than simple problem counts: we need to consider what kinds of accounts each UEM offers, and why. Here, we report on a detailed comparison of eight analytical UEMs. These eight methods were applied to a robotic arm interface, and the findings were systematically compared against video data of the arm in use. The usability issues that were identified could be grouped into five categories: system design, user misconceptions, conceptual fit between user and system, physical issues, and contextual issues. Other possible categories, such as user experience, did not emerge in this particular study. With the exception of Heuristic Evaluation, which supported a range of insights, each analytical method was found to focus attention on just one or two categories of issues. Two of the three "home-grown" methods (Evaluating Multimodal Usability and Concept-based Analysis of Surface and Structural Misfits) were found to occupy particular niches in the space, whereas the third (Programmable User Modeling) did not. This approach has identified commonalities and contrasts between methods and provided accounts of why a particular method yielded the insights it did. Rather than considering measures such as problem count or thoroughness, this approach has yielded insights into the scope of each method

    Usability Evaluation in Virtual Environments: Classification and Comparison of Methods

    Virtual environments (VEs) are a relatively new type of human-computer interface in which users perceive and act in a three-dimensional world. The designers of such systems cannot rely solely on design guidelines for traditional two-dimensional interfaces, so usability evaluation is crucial for VEs. We present an overview of VE usability evaluation. First, we discuss some of the issues that differentiate VE usability evaluation from evaluation of traditional user interfaces such as GUIs. We also present a review of VE evaluation methods currently in use, and discuss a simple classification space for VE usability evaluation methods. This classification space provides a structured means for comparing evaluation methods according to three key characteristics: involvement of representative users, context of evaluation, and types of results produced. To illustrate these concepts, we compare two existing evaluation approaches: testbed evaluation [Bowman, Johnson, & Hodges, 1999] and sequential evaluation [Gabbard, Hix, & Swan, 1999]. We conclude by presenting novel ways to effectively link these two approaches to VE usability evaluation

    Methodological development

    Book description: Human-Computer Interaction draws on the fields of computer science, psychology, cognitive science, and organisational and social sciences in order to understand how people use and experience interactive technology. Until now, researchers have been forced to return to the literature of the individual subject areas to learn about research methods and how to adapt them to the particular challenges of HCI. This is the first book to provide a single resource through which a range of commonly used research methods in HCI are introduced. Chapters are authored by internationally leading HCI researchers who use examples from their own work to illustrate how the methods apply in an HCI context. Each chapter also contains key references to help researchers find out more about each method as it has been used in HCI. Topics covered include experimental design, the use of eyetracking, qualitative research methods, cognitive modelling, developing new methodologies, and writing up your research

    Analysis of research methodologies for neurorehabilitation


    Using multimedia interfaces for speech therapy


    Principles in Patterns (PiP): Project Evaluation Synthesis

    Evaluation activity found that the technology-supported approach to curriculum design and approval developed by PiP demonstrated high levels of user acceptance, promoted improvements to the quality of curriculum designs, rendered aspects of the curriculum approval and quality monitoring process more transparent and efficient, demonstrated process efficacy, and resolved a number of chronic information management difficulties that pervaded the previous state. The creation of a central repository of curriculum designs as the basis for their management as "knowledge assets", thus facilitating the re-use and sharing of designs and the exposure of tacit curriculum design practice, was also found to be highly advantageous. However, further process improvements remain possible, and evidence of system resistance was found in some stakeholder groups. Recommendations arising from the findings and conclusions include the need to improve data collection surrounding the curriculum approval process so that the process and human impact of C-CAP can be monitored and observed. Strategies for improving C-CAP acceptance among the "late majority", the need for C-CAP best practice guidance, and suggested protocols on the knowledge management of curriculum designs are proposed. Opportunities for further process improvements in institutional curriculum approval, including a re-engineering of post-faculty approval processes, are also recommended

    Workshop report: usability of digital libraries @ JCDL’02


    Assessing the effectiveness of multi-touch interfaces for DP operation

    Navigating a vessel using dynamic positioning (DP) systems close to offshore installations is a challenge. The operator's only means of manipulating the system is through its interface, which comprises the physical appearance of the equipment and the visualization of the system. Are there possibilities of interaction between the operator and the system that can reduce strain and cognitive load during DP operations? Can parts of the system (e.g. displays) be physically brought closer to the user to enhance the feeling of control when operating the system? Can these changes make DP operations more efficient and safe? These questions inspired this research project, which investigates the use of multi-touch and hand gestures known from consumer products to directly manipulate the visualization of a vessel in the 3D scene of a DP system. Usability methodologies and evaluation techniques widely used in consumer market research were applied to investigate how these interaction techniques, which are new to the maritime domain, could make interaction with the DP system more efficient and transparent during both standard and safety-critical operations. After investigating which gestures felt natural to use by running user tests with a paper prototype, the gestures were implemented into a Rolls-Royce DP system and tested in a static environment. The results showed that the test participants performed significantly faster using direct gesture manipulation than using traditional button/menu interaction. To support these results, further tests were carried out to investigate how gestures are performed in a moving environment, using a motion platform to simulate rough sea conditions. This paper presents the key results and lessons learned from a collection of four user experiments, together with a discussion of the choice of evaluation techniques

    Issues in Evaluating Health Department Web-Based Data Query Systems: Working Papers

    Compiles papers on conceptual and methodological topics (such as taxonomy, logic models, indicators, and design) to consider in evaluating state health department systems that provide aggregate data online. Includes surveys and examples of evaluations