
    Principles in Patterns (PiP): Heuristic Evaluation of Course and Class Approval Online Pilot (C-CAP)

    The PiP Evaluation Plan documents four distinct evaluative strands, the first of which entails an evaluation of the PiP system pilot (WP7:37). Phase 1 of this evaluative strand focuses on the heuristic evaluation of the PiP Course and Class Approval Online Pilot system (C-CAP). Heuristic evaluation is an established usability inspection and testing technique, most commonly deployed in Human-Computer Interaction (HCI) research, e.g. to test user interface designs and technology systems. The success of heuristic evaluation in detecting 'major' and 'minor' usability problems is well documented, but its principal limitation is its inability to capture data on all possible usability problems. For this reason heuristic evaluation is often used as a precursor to user testing, e.g. so that user testing focuses on deeper system issues rather than on those that can easily be debugged. Heuristic evaluation nevertheless remains an important usability inspection technique, and research continues to demonstrate its success in detecting usability problems that would otherwise evade detection in user testing sessions. For this reason experts maintain that heuristic evaluation should be used to complement user testing. This is reflected in the PiP Evaluation Plan, which proposes protocol analysis, stimulated recall, and pre- and post-test questionnaires as the user testing instruments (see WP7:37, phases 2, 3 and 4 of the PiP Evaluation Plan). This brief report summarises the methodology deployed, presents the results of the heuristic evaluation, and proposes solutions or recommendations to address the heuristic violations found in the C-CAP system. It is anticipated that some solutions will be implemented within the lifetime of the project, consistent with the incremental systems design methodology that PiP has adopted.
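
    As an illustration of how such an evaluation is typically recorded, the sketch below tallies heuristic violations by heuristic and severity. It assumes Nielsen-style heuristic names and the common 0-4 severity scale; the report's actual coding scheme is not reproduced here, and all findings shown are invented for demonstration.

    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class Violation:
        heuristic: str    # e.g. "Visibility of system status" (Nielsen-style name)
        severity: int     # 0 = not a problem ... 4 = usability catastrophe
        description: str

    def summarise(violations: list[Violation]) -> None:
        """Tally violations per heuristic and split them into major/minor problems."""
        per_heuristic = Counter(v.heuristic for v in violations)
        major = [v for v in violations if v.severity >= 3]
        minor = [v for v in violations if 0 < v.severity < 3]
        print(f"{len(major)} major, {len(minor)} minor problems")
        for heuristic, count in per_heuristic.most_common():
            print(f"  {heuristic}: {count}")

    # Hypothetical findings, for demonstration only.
    summarise([
        Violation("Visibility of system status", 3, "No progress indicator on save"),
        Violation("Help and documentation", 2, "No contextual help on approval form"),
        Violation("Visibility of system status", 1, "Active step not highlighted"),
    ])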

    Qualitative analysis of academic group and discussion forum on Facebook

    In the present study, data was triangulated and two methods of data analysis were used. Qualitative analysis was undertaken of free-text data from students’ reflective essays to extract socially-related themes. Heuristic evaluation was conducted by expert evaluators, who investigated forum contributions and discourse in line with contemporary learning theory and considered the social culture of participation. Findings of the qualitative analysis of students’ perceptions and results of the heuristic evaluation of forum participation confirmed each other, indicating a warm social climate and a conducive, well-facilitated environment that supported individual styles of participation. It fostered interpersonal relationships between distance learners, as well as study-related benefits enhanced by peer teaching and insights acquired in a culture of social negotiation. The environment was effectively moderated, while supporting student initiative.

    Mobile Application Usability: Heuristic Evaluation and Evaluation of Heuristics

    Ger Joyce, Mariana Lilley, Trevor Barker, and Amanda Jefferies, 'Mobile Application Usability: Heuristic Evaluation and Evaluation of Heuristics', paper presented at AHFE 2016 International Conference on Human Factors, Software, and Systems Engineering, Walt Disney World, Florida, USA, 27-31 July 2016.

    Many traditional usability evaluation methods do not consider mobile-specific issues. This can result in mobile applications that are riddled with usability issues. We empirically evaluate three sets of usability heuristics for use with mobile applications, including a set defined by the authors. While the authors' set surfaced more usability issues in a mobile application than the other sets of heuristics, improvements to the set can still be made.
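
    A minimal sketch of the kind of comparison described: each heuristic set is credited with the distinct usability issues its evaluators surfaced, and the sets are ranked by coverage. The set names and issue identifiers below are invented; the paper's actual data is not reproduced.

    # Distinct usability issues surfaced per heuristic set (invented data).
    issues_by_set = {
        "Set A (general-purpose)": {"I01", "I03", "I05"},
        "Set B (mobile-adapted)":  {"I01", "I02", "I05", "I07"},
        "Authors' mobile set":     {"I01", "I02", "I04", "I05", "I06", "I07"},
    }

    # Rank sets by coverage of the union of all issues found by any set.
    all_issues = set().union(*issues_by_set.values())
    for name, found in sorted(issues_by_set.items(), key=lambda kv: -len(kv[1])):
        missed = sorted(all_issues - found)
        print(f"{name}: {len(found)}/{len(all_issues)} issues; missed: {missed}")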

    Heuristic Evaluation for Serious Immersive Games and M-instruction

    © Springer International Publishing Switzerland 2016. Two fast-growing areas for technology-enhanced learning are serious games and mobile instruction (M-instruction or M-Learning). Serious games are games meant to be more than just entertainment: they have a serious use, to educate or to promote other types of activity. Immersive games frequently involve many players interacting in a shared, rich and complex (perhaps web-based) mixed-reality world, where their circumstances will be many and varied. Their reality may be augmented and often self-composed, as in a user-defined avatar in a virtual world. M-instruction and M-Learning is learning on the move; much of modern computer use is via smart devices, tablets, and laptops. People use these devices everywhere, so it is a natural extension to want to use them to learn wherever one happens to be. This presents a problem if we wish to evaluate the effectiveness of the pedagogic media learners are using: we have no way of knowing their situation, circumstances, educational background and motivation, or, potentially, how the final software they use has been customised. Reaching the end user may itself be problematic; these are learning environments that people dip into at opportune moments. If access to the end user is hard because of location and user self-personalisation, then one solution is to examine the software before it goes out. Heuristic evaluation allows us to get User Interface (UI) and User Experience (UX) experts to reflect on the software before it is deployed. The effective use of heuristic evaluation with pedagogical software [1] is extended here with existing heuristic evaluation methods that make the technique applicable to serious immersive games and M-instruction. We also consider how existing heuristic methods may be adapted. The result represents a new way of making this methodology applicable to this new and developing area of learning technology.

    A Comparison of Quantitative and Qualitative Data from a Formative Usability Evaluation of an Augmented Reality Learning Scenario

    The proliferation of augmented reality (AR) technologies creates opportunities for the development of new learning scenarios. More recently, advances in the design and implementation of desktop AR systems have made it possible to deploy such scenarios in primary and secondary schools. Usability evaluation is a precondition for the pedagogical effectiveness of these new technologies and requires a systematic approach to finding and fixing usability problems. In this paper we present an approach to formative usability evaluation based on heuristic evaluation and user testing. The basic idea is to compare and integrate quantitative and qualitative measures in order to increase confidence in the results and enhance the descriptive power of the usability evaluation report.

    Keywords: augmented reality, multimodal interaction, e-learning, formative usability evaluation, user testing, heuristic evaluation
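
    The sketch below illustrates one simple form such triangulation can take: per-task completion times from user testing are set alongside the number of heuristic evaluation problem reports touching the same task, so that converging evidence stands out. The tasks and all figures are invented and do not reproduce the paper's measures.

    from statistics import mean

    # Invented user-testing data: completion time in seconds per participant.
    times = {
        "select marker": [12, 15, 11, 14],
        "rotate model":  [48, 55, 61, 50],
        "open help":     [20, 18, 25, 22],
    }
    # Invented heuristic-evaluation data: problem reports touching each task.
    problem_reports = {"select marker": 1, "rotate model": 6, "open help": 2}

    # A task that is both slow and heavily reported is corroborated by two
    # independent sources of evidence, which is the point of triangulating.
    for task, samples in times.items():
        print(f"{task}: mean time {mean(samples):.1f}s, "
              f"{problem_reports[task]} problem report(s)")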

    The assessment of usability of electronic shopping: A heuristic evaluation

    Today there are thousands of electronic shops accessible via the Web. Some provide user-friendly features, whilst others seem not to consider usability factors at all. Yet it is critical that an electronic shopping interface is user-friendly so as to help users obtain the results they want. This study applied heuristic evaluation to examine the usability of current electronic shopping. In particular, it focused on four UK-based supermarkets offering electronic services: ASDA, Iceland, Sainsbury, and Tesco. The evaluation consisted of two stages: a free-flow inspection and a task-based inspection. The results indicate that the most significant and common usability problems lie within the areas of 'User Control and Freedom' and 'Help and Documentation'. The findings of this study are applied to develop a set of usability guidelines to support the future design of effective interfaces for electronic shopping.
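
    A minimal sketch of how findings from such a two-stage inspection might be pooled and ranked by problem area. The category labels are Nielsen-style heuristic names and the counts are invented; they are not the study's actual results.

    from collections import Counter

    # Invented problem areas recorded in each inspection stage.
    free_flow  = ["User Control and Freedom", "Help and Documentation",
                  "Consistency and Standards"]
    task_based = ["User Control and Freedom", "Help and Documentation",
                  "User Control and Freedom", "Error Prevention"]

    # Pool both stages and rank areas by how often problems were logged.
    for area, n in Counter(free_flow + task_based).most_common():
        print(f"{area}: {n} problem(s) across both stages")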
