
    Usability Evaluation in Virtual Environments: Classification and Comparison of Methods

    Virtual environments (VEs) are a relatively new type of human-computer interface in which users perceive and act in a three-dimensional world. The designers of such systems cannot rely solely on design guidelines for traditional two-dimensional interfaces, so usability evaluation is crucial for VEs. We present an overview of VE usability evaluation. First, we discuss some of the issues that differentiate VE usability evaluation from evaluation of traditional user interfaces such as GUIs. We also review VE evaluation methods currently in use and discuss a simple classification space for VE usability evaluation methods. This classification space provides a structured means of comparing evaluation methods according to three key characteristics: involvement of representative users, context of evaluation, and types of results produced. To illustrate these concepts, we compare two existing evaluation approaches: testbed evaluation [Bowman, Johnson, & Hodges, 1999] and sequential evaluation [Gabbard, Hix, & Swan, 1999]. We conclude by presenting novel ways to effectively link these two approaches to VE usability evaluation.

    Mobile Application Usability: Heuristic Evaluation and Evaluation of Heuristics

    Ger Joyce, Mariana Lilley, Trevor Barker, and Amanda Jefferies, 'Mobile Application Usability: Heuristic Evaluation and Evaluation of Heuristics', paper presented at AHFE 2016 International Conference on Human Factors, Software, and Systems Engineering, Walt Disney World, Florida, USA, 27-31 July 2016.
    Many traditional usability evaluation methods do not consider mobile-specific issues, which can result in mobile applications that abound in usability problems. We empirically evaluate three sets of usability heuristics for use with mobile applications, including a set defined by the authors. While the set of heuristics defined by the authors surfaces more usability issues in a mobile application than the other sets, improvements to the set can still be made.

    A Comparison of Quantitative and Qualitative Data from a Formative Usability Evaluation of an Augmented Reality Learning Scenario

    The proliferation of augmented reality (AR) technologies creates opportunities for the development of new learning scenarios. More recently, advances in the design and implementation of desktop AR systems have made it possible to deploy such scenarios in primary and secondary schools. Usability evaluation is a precondition for the pedagogical effectiveness of these new technologies and requires a systematic approach to finding and fixing usability problems. In this paper we present an approach to formative usability evaluation based on heuristic evaluation and user testing. The basic idea is to compare and integrate quantitative and qualitative measures in order to increase confidence in results and enhance the descriptive power of the usability evaluation report. Keywords: augmented reality, multimodal interaction, e-learning, formative usability evaluation, user testing, heuristic evaluation.
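The idea of integrating quantitative severity ratings from a heuristic evaluation with qualitative observations from user testing can be sketched as follows. This is an illustrative example only: the heuristic names, ratings, threshold, and observations are invented, not taken from the paper.

```python
# Hypothetical sketch: triaging usability problems by combining quantitative
# severity ratings (heuristic evaluation) with qualitative evidence (user
# testing). All data below is made up for illustration.

from statistics import mean

# Severity ratings (0 = not a problem .. 4 = catastrophe), one per evaluator.
severity = {
    "visibility_of_status": [2, 3, 2],
    "match_real_world":     [1, 1, 0],
    "error_prevention":     [4, 3, 4],
}

# Qualitative observations from user-testing sessions, keyed by heuristic.
observations = {
    "error_prevention": ["User deleted a marker with no undo available"],
}

def triage(severity, observations, threshold=2.5):
    """Flag heuristics whose mean severity crosses the threshold, attaching
    any qualitative evidence to strengthen confidence in the finding."""
    report = []
    for heuristic, ratings in severity.items():
        avg = mean(ratings)
        if avg >= threshold:
            report.append((heuristic, round(avg, 2),
                           observations.get(heuristic, [])))
    return sorted(report, key=lambda r: -r[1])

for name, avg, notes in triage(severity, observations):
    print(name, avg, notes)
```

Only problems confirmed by both kinds of data would then be prioritised in the evaluation report, which is one way to "increase confidence in results" as the abstract puts it.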

    Principles in Patterns (PiP) : Heuristic Evaluation of Course and Class Approval Online Pilot (C-CAP)

    The PiP Evaluation Plan documents four distinct evaluative strands, the first of which entails an evaluation of the PiP system pilot (WP7:37). Phase 1 of this evaluative strand focuses on the heuristic evaluation of the PiP Course and Class Approval Online Pilot system (C-CAP). Heuristic evaluation is an established usability inspection and testing technique and is most commonly deployed in Human-Computer Interaction (HCI) research, e.g. to test user interface designs or whole technology systems. The success of heuristic evaluation in detecting 'major' and 'minor' usability problems is well documented, but its principal limitation is its inability to capture data on all possible usability problems. For this reason heuristic evaluation is often used as a precursor to user testing, e.g. so that user testing focuses on deeper system issues rather than on those that can easily be debugged. Heuristic evaluation nevertheless remains an important usability inspection technique, and research continues to demonstrate its success in detecting usability problems which would otherwise evade detection in user testing sessions. For this reason experts maintain that heuristic evaluation should be used to complement user testing. This is reflected in the PiP Evaluation Plan, which proposes protocol analysis, stimulated recall, and pre- and post-test questionnaire instruments to comprise user testing (see WP7:37 phases 2, 3 and 4 of the PiP Evaluation Plan). This brief report summarises the methodology deployed, presents the results of the heuristic evaluation, and proposes solutions or recommendations to address the heuristic violations that were found to exist in the C-CAP system. It is anticipated that some solutions will be implemented within the lifetime of the project. This is consistent with the incremental systems design methodology that PiP has adopted.

    Introduction to the new usability

    This paper introduces the motivation for and concept of the "new usability" and positions it against existing approaches to usability. It is argued that the contexts of emerging products and systems mean that traditional approaches to usability engineering and evaluation are likely to prove inappropriate to the needs of "digital consumers." The paper briefly reviews the contributions to this special issue in terms of their relation to the idea of the "new usability" and their individual approaches to dealing with contemporary usability issues. This helps provide a background to the "new usability" research agenda, and the paper ends by posing what are argued to be the central challenges facing the area and those which lie at the heart of the proposed research agenda.

    FEeSU - A Framework for Evaluating eHealth Systems Usability: A Case of Tanzania Health Facilities

    Adopting eHealth systems in the health sector has changed the means of providing health services and increased the quality of service in many countries. The usability of these systems needs to be evaluated from time to time to reduce or avoid risks such as compromised patient data and medication errors. However, existing frameworks are not sensitive to country context, since they are designed with the practices of developed countries in mind. Developed countries have different cultures, resource settings, and levels of computer literacy compared to developing countries such as Tanzania. This paper presents the framework for evaluating eHealth system usability (FEeSU), which is designed with a focus on developing-country contexts and tested in Tanzania. Healthcare professionals, including doctors, nurses, laboratory technologists, and pharmacists, were the main participants in this research, providing practice-oriented requirements based on their experience, best practices, and healthcare norms. The framework comprises six steps to be followed in the evaluation process. These steps are associated with important components, including usability metrics, stakeholders, usability evaluation methods, and contextual issues necessary for usability evaluation. The proposed framework could be used as guidelines by different eHealth system stakeholders when preparing, designing, and performing a usability evaluation of a system. Keywords: usability metrics, usability evaluation, contextual issues, eHealth systems, framework for usability evaluation, FEeSU. DOI: 10.7176/CEIS/10-1-01. Publication date: September 30th 202

    Usability perception of the health information systems in Brazil: the view of hospital health professionals on the electronic health record

    Purpose – The purpose of this paper is to validate and measure the overall evaluation of the electronic health record (EHR) and identify the factors that influence health information systems (HIS) assessment in Brazil.
    Design/methodology/approach – From February to May 2020, this study surveyed 262 doctors and nurses who work in hospitals and use the EHR in their workplace. This study validated the National Usability-focused HIS Scale (NuHISS) to measure usability in the Brazilian context.
    Findings – The results showed adequate validity and reliability, validating the NuHISS in the Brazilian context. The survey showed that 38.9% of users rated the system as high quality. Technical quality, ease of use and benefits explained 43.5% of the user's overall system evaluation.
    Research limitations/implications – This study validated the items that measure usability of health-care systems and identified that not all usability items impact the overall evaluation of the EHR.
    Practical implications – NuHISS can be a valuable tool to measure HIS usability for doctors and nurses and to monitor health systems' long-term usability among health professionals. The results suggest dissatisfaction with the usability of HIS, specifically the EHR in hospital units; for this reason, those responsible for health systems must pay attention to usability. The tool enables usability monitoring that highlights information-system deficiencies for public managers, and the government can create and develop actions to improve existing tools that support health professionals.
    Social implications – From the scale validation, public managers could monitor and develop actions to foster the system's usability, especially the system's technical qualities, the factor that most impacted the overall system evaluation.
    Originality/value – To the best of the authors' knowledge, this study is the first to validate a usability scale for EHR systems in Brazil. The results showed dissatisfaction with HIS and identified the factors that most influence the system evaluation.
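The "explained 43.5%" figure in this abstract is a variance-explained (R²) statistic from a regression of the overall evaluation on the predictor factors. A minimal sketch of how that statistic is computed, using invented scores rather than NuHISS data:

```python
# Illustrative computation of R^2, the proportion of variance in an outcome
# explained by a model's fitted values. The scores below are made up and do
# not come from the NuHISS study.

def r_squared(y, y_hat):
    """R^2 = 1 - SS_residual / SS_total."""
    y_mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - y_mean) ** 2 for a in y)
    return 1 - ss_res / ss_tot

# Observed overall-evaluation scores and a hypothetical model's fitted values.
y     = [3.0, 4.0, 2.0, 5.0, 3.5]
y_hat = [3.2, 3.8, 2.5, 4.6, 3.4]

print(round(r_squared(y, y_hat), 3))
```

In the study's terms, an R² of 0.435 would mean the three factors (technical quality, ease of use, benefits) jointly account for 43.5% of the variation in users' overall ratings, with the remainder unexplained.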

    A user-centred personalised e-learning system

    The paper proposes a framework for understanding the factors that affect the usability of e-learning. The framework can be applied to the development of (1) a formative usability evaluation method for e-learning systems and (2) personalisation rules for e-learning system interfaces. The formative usability evaluation method is intended for evaluating e-learning systems during their development stages, from screen-based prototypes to near completion. The evaluation criteria will be customisable depending on contingent criteria such as user characteristics and e-learning system characteristics. A web-based prototype will be developed to allow convenient implementation of the methodology. The personalisation rules are intended for the automatic adaptation of an e-learning system's interface to different users' preferences, in order to maximise its usability and learnability for individual users.
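Interface personalisation rules of the kind this abstract describes can be sketched as predicates over user characteristics that override interface defaults. The rules, attribute names, and settings below are invented for illustration; the paper does not specify any of them.

```python
# Hypothetical sketch of rule-based interface personalisation: each rule is
# (predicate over user characteristics, setting name, value). Rules fire in
# order, so later rules can override earlier ones. All names are invented.

RULES = [
    (lambda u: u["experience"] == "novice", "show_tooltips", True),
    (lambda u: u["experience"] == "expert", "show_tooltips", False),
    (lambda u: u["font_pref"] == "large",   "font_size",     16),
]

def personalise(user, defaults):
    """Return interface settings adapted to one user's characteristics."""
    settings = dict(defaults)  # never mutate the shared defaults
    for predicate, key, value in RULES:
        if predicate(user):
            settings[key] = value
    return settings

novice = {"experience": "novice", "font_pref": "large"}
print(personalise(novice, {"show_tooltips": False, "font_size": 12}))
```

Keeping the rules as data rather than hard-coded branches makes them easy to extend as new user characteristics or interface settings are added.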

    Comparing the usability of doodle and Mikon images to be used as authenticators in graphical authentication systems

    Recognition-based graphical authentication systems rely on legitimate users recognizing their authenticator images in order to authenticate. This paper presents the results of a study that compared doodle images and Mikon images as authenticators in recognition-based graphical authentication systems, taking various usability dimensions into account. The results of the usability evaluation, with 20 participants, demonstrated that users preferred Mikon images to doodle images as authenticators in recognition-based graphical authentication mechanisms. Furthermore, participants found it difficult to recognize doodle images during authentication, as well as to associate them with something meaningful. Our findings also show the need to consider the security offered by the images, especially their predictability.
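The recognition-based login flow such systems use can be sketched as follows: the user's registered images are mixed with decoys into a challenge grid, and authentication succeeds only if the user picks out exactly their own images. This is a generic sketch, not the paper's implementation; the image IDs and grid size are placeholders.

```python
# Minimal sketch of a recognition-based graphical login. The user's image
# portfolio is shuffled into a grid of decoys; the user must select exactly
# their own images. IDs are placeholders, not real doodle/Mikon assets.

import secrets

def build_challenge(portfolio, decoys, grid_size=9):
    """Mix the user's authenticator images with decoys into one grid."""
    pool = list(portfolio) + list(decoys)[: grid_size - len(portfolio)]
    secrets.SystemRandom().shuffle(pool)  # unpredictable placement each login
    return pool

def authenticate(selection, portfolio):
    """Succeed only if the selection matches the portfolio exactly."""
    return set(selection) == set(portfolio)

portfolio = {"mikon_07", "mikon_21", "mikon_33"}
decoys = [f"decoy_{i}" for i in range(20)]

grid = build_challenge(portfolio, decoys)
print(authenticate(portfolio, portfolio))    # correct selection
print(authenticate({"decoy_0"}, portfolio))  # wrong selection
```

The predictability concern the abstract raises maps onto the decoy set here: if an attacker can guess which images in the grid are likely authenticators, shuffling alone provides little protection.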