
    Evaluating system utility and conceptual fit using CASSM

    There is a wealth of user-centred evaluation methods (UEMs) to support the analyst in assessing interactive systems. Many of these support detailed aspects of use – for example: Is the feedback helpful? Are labels appropriate? Is the task structure optimal? Few UEMs encourage the analyst to step back and consider how well a system supports users’ conceptual understandings and system utility. In this paper, we present CASSM, a method which focuses on the quality of ‘fit’ between users and an interactive system. We describe the methodology of conducting a CASSM analysis and illustrate the approach with three contrasting worked examples (a robotic arm, a digital library system and a drawing tool) that demonstrate different depths of analysis. We show how CASSM can help identify re-design possibilities to improve system utility. CASSM complements established evaluation methods by focusing on conceptual structures rather than procedures. Prototype tool support for completing a CASSM analysis is provided by Cassata, an open source development
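    As a concrete, purely illustrative sketch of the kind of analysis CASSM supports, the fragment below encodes user and system concepts for a hypothetical drawing tool and flags misfits where a concept is present in one layer but absent from another. The Concept structure and misfits function are our own illustrative names, not part of CASSM or the Cassata tool.

        # Illustrative sketch of a CASSM-style misfit analysis (hypothetical
        # encoding, not Cassata's API). Each concept is marked present or absent
        # in the user's model, at the interface surface, and in the system.
        from dataclasses import dataclass

        @dataclass
        class Concept:
            name: str
            in_user_model: bool    # does the user think in terms of this concept?
            at_interface: bool     # is it visible/manipulable at the surface?
            in_system_model: bool  # does the system represent it internally?

        def misfits(concepts):
            """Return concepts whose presence differs across the three layers."""
            return [c for c in concepts
                    if not (c.in_user_model == c.at_interface == c.in_system_model)]

        # Hypothetical fragment for a drawing tool:
        analysis = [
            Concept("shape", True, True, True),             # good fit
            Concept("group of shapes", True, False, True),  # user concept hidden at the surface
            Concept("object handle", False, True, True),    # device concept imposed on the user
        ]
        for c in misfits(analysis):
            print("misfit:", c.name)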

    Usability Evaluation in Virtual Environments: Classification and Comparison of Methods

    Virtual environments (VEs) are a relatively new type of human-computer interface in which users perceive and act in a three-dimensional world. The designers of such systems cannot rely solely on design guidelines for traditional two-dimensional interfaces, so usability evaluation is crucial for VEs. We present an overview of VE usability evaluation. First, we discuss some of the issues that differentiate VE usability evaluation from evaluation of traditional user interfaces such as GUIs. We also present a review of VE evaluation methods currently in use, and discuss a simple classification space for VE usability evaluation methods. This classification space provides a structured means for comparing evaluation methods according to three key characteristics: involvement of representative users, context of evaluation, and types of results produced. To illustrate these concepts, we compare two existing evaluation approaches: testbed evaluation [Bowman, Johnson, & Hodges, 1999], and sequential evaluation [Gabbard, Hix, & Swan, 1999]. We conclude by presenting novel ways to effectively link these two approaches to VE usability evaluation.
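    The classification space itself is easy to make concrete. The sketch below records the three characteristics named above for each method; the placements of the two compared approaches are our own rough reading, not values taken from the cited papers.

        # Sketch of the three-axis classification space for VE usability
        # evaluation methods. Axis values for the two methods are illustrative
        # assumptions, not authoritative characterisations.
        from dataclasses import dataclass

        @dataclass
        class UEMPlacement:
            method: str
            representative_users: bool  # are representative users involved?
            context: str                # context of evaluation
            results: str                # types of results produced

        testbed = UEMPlacement("testbed evaluation",
                               representative_users=True,
                               context="generic tasks and environments",
                               results="mainly quantitative")
        sequential = UEMPlacement("sequential evaluation",
                                  representative_users=True,
                                  context="specific target application",
                                  results="qualitative and quantitative")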

    The Effects of Anthropomorphism and Affective Design Principles on the Adoption of M-Health Applications

    Previous research has found that M-Health initiatives have in many cases not been adopted and used effectively, especially in rural communal locations. Based on this, the researcher surmised that factors contributing to the non-use of such initiatives could include a lack of knowledge about the use of technology, literacy challenges, possible fear of technology, and a lack of information regarding interventions that have the potential to improve quality of life. Consequently, an initiative that has usability as its core function may play a critical role in the use and adoption of such technologies. The researcher asked whether and how anthropomorphic and affective design principles, which aim to elicit an emotional or positively reinforced sub-conscious reaction from users, might influence the adoption and use of M-Health initiatives when applied to such interventions. This study therefore set out to investigate the effects of anthropomorphism and affective design principles on the adoption of M-Health applications, with the Sethakeng rural community in the Northern Cape province of South Africa as the research population, after consent was obtained from the relevant community leaders. The researcher aimed first to ascertain whether anthropomorphism and affective design could influence the adoption of M-Health applications, then to identify which was the more effective method for designing M-Health applications and, finally, to provide guidelines and recommendations about the most effective design theory, as identified in the study, for designing applications. The study predominantly employed a mixed-methods research methodology which included action research cycles and quantitative data, in the form of usage statistics obtained from CloudWare, in the final report. A case study was conducted in a rural South African setting to explore and eventually understand the relation between the case community and the intervention. A qualitative research design allowed the researcher to better understand the research problem and the obstacles facing the relevant rural community, while the quantitative data assisted with understanding usage trends for the M-Health intervention. The objectives of the case study were to observe the phenomenon and describe it with regard to the case community, to document the reactions of the case community to different instances and variations of the phenomenon and, lastly, to report on the design principle that yielded the most positive reaction from the community from a usage perspective, thereby indicating adoption of the design methodology employed. The research contributed towards the successful development, placement and scrutiny of two emotion-driven interfaces for the same M-Health intervention. A distinctive perspective was provided on affective and anthropomorphic design, identifying the better design model for improved application acceptance in a rural community context. At the conclusion of the study, evidence suggested that community members found the anthropomorphic interface design superior. The researcher was thus able to explore, identify, develop and list a set of guidelines for the area of emotional design, each based on what worked in practice during this study. The researcher recommends that these guidelines be implemented and utilised by other designers in the field of interaction design.

    User-driven design of decision support systems for polycentric environmental resources management

    Open and decentralized technologies such as the Internet provide increasing opportunities to create knowledge and deliver computer-based decision support for multiple types of users across scales. However, environmental decision support systems/tools (henceforth EDSS) are often strongly science-driven and assume a single type of decision maker, and are hence poorly suited to more decentralized and polycentric decision-making contexts. In such contexts, EDSS need to be tailored to meet diverse user requirements to ensure that they provide useful (relevant), usable (intuitive), and exchangeable (institutionally unobstructed) information for decision support for different types of actors. To address these issues, we present a participatory framework for designing EDSS that emphasizes a more complete understanding of decision-making structures and iterative design of the user interface. We illustrate the application of the framework through a case study of water-stressed upstream/downstream communities in Lima, Peru.
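    To make the tailoring requirement concrete, the sketch below checks each actor type against the three information criteria named above and reports where design effort is still needed. Actor names and scores are hypothetical, loosely modelled on the upstream/downstream case.

        # Illustrative sketch: per-actor check of the framework's three
        # information criteria. Actors and boolean scores are hypothetical.
        CRITERIA = ("useful", "usable", "exchangeable")

        assessments = {
            "upstream farming community": {"useful": True, "usable": False, "exchangeable": True},
            "downstream water utility":   {"useful": True, "usable": True,  "exchangeable": False},
        }

        for actor, scores in assessments.items():
            gaps = [c for c in CRITERIA if not scores[c]]
            print(f"{actor}: gaps in {gaps if gaps else 'none'}")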

    Scoping analytical usability evaluation methods: A case study

    Analytical usability evaluation methods (UEMs) can complement empirical evaluation of systems: for example, they can often be used earlier in design and can provide accounts of why users might experience difficulties, as well as what those difficulties are. However, their properties and value are only partially understood. One way to improve our understanding is through detailed comparisons using a single interface or system as a target for evaluation, but we need to look deeper than simple problem counts: we need to consider what kinds of accounts each UEM offers, and why. Here, we report on a detailed comparison of eight analytical UEMs. These eight methods were applied to a robotic arm interface, and the findings were systematically compared against video data of the arm in use. The usability issues that were identified could be grouped into five categories: system design, user misconceptions, conceptual fit between user and system, physical issues, and contextual issues. Other possible categories, such as user experience, did not emerge in this particular study. With the exception of Heuristic Evaluation, which supported a range of insights, each analytical method was found to focus attention on just one or two categories of issues. Two of the three "home-grown" methods (Evaluating Multimodal Usability and Concept-based Analysis of Surface and Structural Misfits) were found to occupy particular niches in the space, whereas the third (Programmable User Modeling) did not. This approach has identified commonalities and contrasts between methods and provided accounts of why a particular method yielded the insights it did. Rather than considering measures such as problem count or thoroughness, this approach has yielded insights into the scope of each method.
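    The comparison's bookkeeping can be summarised in a few lines: tally which of the five issue categories each method surfaced, yielding a scope profile per method. The assignments below are invented placeholders, not the study's data.

        # Sketch of a per-method "scope profile" over the five issue categories
        # identified in the study. Counts shown are hypothetical placeholders.
        from collections import Counter

        CATEGORIES = ("system design", "user misconceptions", "conceptual fit",
                      "physical issues", "contextual issues")

        # category of each issue a method reported (invented data):
        findings = {
            "Heuristic Evaluation": ["system design", "physical issues",
                                     "contextual issues", "system design"],
            "CASSM": ["conceptual fit", "user misconceptions"],
        }

        for method, issues in findings.items():
            profile = Counter(i for i in issues if i in CATEGORIES)
            print(method, "->", dict(profile))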