1,460 research outputs found

    Heuristic Evaluation of Play4Fit Health and Fitness App: A Comparison Between Experts and Novices Evaluators

    Heuristic evaluation (HE) can be used to effectively identify usability issues in a wide range of interfaces. However, it has not been widely used to evaluate smartphone apps, especially in the health and fitness domain. One reason is the scarcity of HCI experts, which makes incorporating HE into the design process difficult. This paper presents the results of a study that compared HE performed by three HCI experts and three novices evaluating a gamified health and fitness smartphone app. The study used the Smartphone Mobile Application heuRisTics (SMART), which target smartphone apps, together with a severity rating scale, and each usability issue found was mapped to a SMART heuristic. The findings indicate that novices may identify usability issues that experts overlook, so novices' findings may serve as a substitute when experts are unavailable, although the experts identified eighteen usability issues while the novices found only four. Both groups identified two of the same usability issues but rated their severity differently. One possible way to compensate for the smaller number of issues that novices identify is to involve more novices in place of experts in the evaluation process.
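
    As a rough illustration of the comparison described above, the Python sketch below records each reported issue against a SMART heuristic and a severity score, then intersects the expert and novice sets. The Issue class, the issue descriptions, the heuristic labels, and the severity values are all invented for illustration; they are not the study's data.

```python
# Hypothetical sketch: recording usability issues from expert and novice
# evaluators, mapping each to a SMART heuristic with a severity rating, and
# comparing the two groups. All issue data below are invented placeholders.
from dataclasses import dataclass


@dataclass(frozen=True)
class Issue:
    description: str
    heuristic: str  # SMART heuristic the issue violates
    severity: int   # e.g. 0 (not a problem) .. 4 (usability catastrophe)


experts = {
    Issue("Progress badge meaning is unclear", "Visibility of system status", 3),
    Issue("Workout timer cannot be paused", "User control and freedom", 2),
}
novices = {
    Issue("Workout timer cannot be paused", "User control and freedom", 4),
}

# Issues reported by both groups, matched on description and heuristic only,
# since severity ratings may differ between groups (as they did in the study).
overlap = ({(i.description, i.heuristic) for i in experts}
           & {(i.description, i.heuristic) for i in novices})

print(f"experts found {len(experts)} issues, novices found {len(novices)}")
print("reported by both groups:", sorted(overlap))
```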

    Heuristic evaluation: Comparing ways of finding and reporting usability problems

    Research on heuristic evaluation in recent years has focused on improving its effectiveness and efficiency with respect to user testing. The aim of this paper is to refine a research agenda for comparing and contrasting evaluation methods. To reach this goal, a framework is presented for evaluating the effectiveness of different types of support for structured usability problem reporting. The paper reports on an empirical study of this framework that compares two sets of heuristics, Nielsen's heuristics and the cognitive principles of Gerhardt-Powals, and two media for reporting usability problems, i.e., using either a web tool or paper. The study found no significant differences between any of the four groups in effectiveness, efficiency, or inter-evaluator reliability. A more significant contribution of this research is that the framework used for the experiments proved successful and, because of its thorough structure, should be reusable by other researchers.
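
    The abstract does not spell out how effectiveness, efficiency, and inter-evaluator reliability were operationalised. The sketch below assumes definitions that are common in the UEM literature (share of known problems found, problems per evaluator-hour, and mean pairwise agreement between evaluators) and uses invented problem sets; it illustrates the shape of the comparison only, not this study's measures or data.

```python
# Hypothetical sketch of three outcome measures often used when comparing
# usability evaluation method (UEM) groups. The definitions and all numbers
# below are assumptions for illustration, not the study's own measures.
from itertools import combinations


def effectiveness(found, known):
    """Share of the known problem set that the group reported."""
    return len(found & known) / len(known)


def efficiency(found, hours):
    """Problems reported per evaluator-hour spent."""
    return len(found) / hours


def any_two_agreement(per_evaluator):
    """Mean Jaccard overlap of problem sets over all evaluator pairs."""
    pairs = list(combinations(per_evaluator, 2))
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)


known = {f"P{i}" for i in range(1, 21)}                          # 20 known problems
group = [{"P1", "P2", "P5"}, {"P1", "P3"}, {"P2", "P5", "P9"}]   # 3 evaluators
found = set().union(*group)

print(f"effectiveness = {effectiveness(found, known):.2f}, "
      f"efficiency = {efficiency(found, hours=4.5):.2f}/h, "
      f"agreement = {any_two_agreement(group):.2f}")
```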

    Scoping analytical usability evaluation methods: A case study

    Analytical usability evaluation methods (UEMs) can complement empirical evaluation of systems: for example, they can often be used earlier in design and can provide accounts of why users might experience difficulties, as well as what those difficulties are. However, their properties and value are only partially understood. One way to improve our understanding is through detailed comparisons using a single interface or system as a target for evaluation, but we need to look deeper than simple problem counts: we need to consider what kinds of accounts each UEM offers, and why. Here, we report on a detailed comparison of eight analytical UEMs. These eight methods were applied to a robotic arm interface, and the findings were systematically compared against video data of the arm in use. The usability issues that were identified could be grouped into five categories: system design, user misconceptions, conceptual fit between user and system, physical issues, and contextual issues. Other possible categories, such as user experience, did not emerge in this particular study. With the exception of Heuristic Evaluation, which supported a range of insights, each analytical method was found to focus attention on just one or two categories of issues. Two of the three "home-grown" methods (Evaluating Multimodal Usability and Concept-based Analysis of Surface and Structural Misfits) were found to occupy particular niches in the space, whereas the third (Programmable User Modeling) did not. This approach has identified commonalities and contrasts between methods and provided accounts of why a particular method yielded the insights it did. Rather than considering measures such as problem count or thoroughness, this approach has yielded insights into the scope of each method.
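
    One way to picture the "scope" finding is as a method-by-category coverage matrix. The sketch below uses the five issue categories named in the abstract, but the per-method assignments (and the EMU and CASSM abbreviations for the two home-grown methods) are illustrative placeholders rather than the paper's actual results.

```python
# Hypothetical sketch of a method-by-category coverage matrix for analytical
# UEMs. The category labels come from the abstract; the per-method coverage
# shown here is an invented placeholder, not the paper's reported result.
CATEGORIES = ["system design", "user misconceptions", "conceptual fit",
              "physical issues", "contextual issues"]

coverage = {
    # Heuristic Evaluation supported a broad range of insights in the study;
    # the niches assigned to EMU and CASSM below are purely illustrative.
    "Heuristic Evaluation": set(CATEGORIES),
    "EMU (Evaluating Multimodal Usability)": {"system design", "physical issues"},
    "CASSM (surface/structural misfits)": {"conceptual fit", "user misconceptions"},
}

for method, cats in coverage.items():
    row = " ".join("x" if c in cats else "." for c in CATEGORIES)
    print(f"{method:40s} {row}")
```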

    Creativity of images: using digital consensual assessment to evaluate mood boards

    Mood boards are used frequently in design and product development as well as in academic courses related to fashion design. However, objectively evaluating the creativity of fashion design mood boards is often difficult. Therefore, the purpose of this investigation is to examine the reliability of a digital consensual assessment instrument measuring creativity, using expert raters (from related domains) and non-expert raters (students). Creativity measures were compared with the mood board themes to further investigate any relationships between mood board types and the consensual assessment. An independent samples t-test comparing group means indicated that expert raters evaluated the mood boards significantly higher in creativity than the non-experts, t(99) = −6.71, p < .001 (95% CI −.57, −.29), while Pearson correlation results indicate a significant relationship between the two groups of raters, r(50) = .33, p < .01. ANOVA results for all raters indicated a significant difference between the five subject matter categories, F(4, 95) = 4.64, p < .005. Overall, expert and non-expert raters showed significant reliability, which further supports prior research using consensual assessment as a creativity measure across domains.
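
    For readers less familiar with the statistics reported above, the sketch below runs the same three kinds of tests (independent-samples t-test, Pearson correlation, one-way ANOVA) with SciPy on synthetic placeholder ratings; it reproduces the analysis pattern only, not the study's data or results.

```python
# Minimal sketch of the kinds of tests reported above, run with SciPy on
# synthetic placeholder ratings; the group sizes and values are invented and
# do not reproduce the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
expert = rng.normal(4.0, 0.8, size=50)   # expert creativity ratings (fake)
novice = rng.normal(3.5, 0.8, size=50)   # non-expert ratings of the same boards (fake)

# Independent-samples t-test comparing the two groups' mean ratings
t, p_t = stats.ttest_ind(expert, novice)

# Pearson correlation between the two groups' ratings of the same boards
r, p_r = stats.pearsonr(expert, novice)

# One-way ANOVA across five (fake) subject-matter categories
categories = [rng.normal(3.5 + 0.2 * k, 0.8, size=20) for k in range(5)]
f, p_f = stats.f_oneway(*categories)

print(f"t = {t:.2f} (p = {p_t:.3f}), r = {r:.2f} (p = {p_r:.3f}), "
      f"F = {f:.2f} (p = {p_f:.3f})")
```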

    Does Time Heal? A Longitudinal Study of Usability

    Human Computer Interaction and Web-Based Learning Platforms: e-Learning Website Features vis-à-vis Student Perception

    The utilisation of web-based e-learning platforms is increasing throughout the Kingdom of Saudi Arabia. The majority of these platforms were initially developed by institutions in the West; only later were menus and icons translated into Arabic to assist Arabic-speaking students. Users have observed that during the development, adaptation, and implementation (adoption) stages, insufficient attention was directed toward usability. Within the industry it is common practice to apply Nielsen's heuristics, as a measure of usability, to designs intended for business or commercial uses; these heuristics are considered a standard measure. This study focuses on the application of Nielsen's heuristics to web-based learning platforms to evaluate usability. The aim is to understand and evaluate the usability of these applications from the perspective of students, and to compare and contrast these findings with those of a heuristic evaluation of the same platforms by groups of professionals. The study includes the development of a usability guideline framework and an extensive set of criteria for evaluating web-based learning platforms (WBLP). Analysis of the collected data and of the experts' heuristic evaluation demonstrates a high correspondence with previous sources. The research concludes that a heuristic evaluation based on Nielsen's model is an effective, appropriate, and sufficient usability evaluation method, as well as a relatively easy tool to apply. It also identified a high percentage of usability problems in the target WBLP, the Arabic version of Blackboard, which contributes to the research conclusions.
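
    A heuristic evaluation of this kind typically ends in a tally of problems per heuristic. The sketch below shows such a tally, with invented problems mapped to a few of Nielsen's heuristic names; the counts are placeholders, not those reported for the Arabic version of Blackboard.

```python
# Hypothetical sketch: tallying reported usability problems per Nielsen
# heuristic after a heuristic evaluation. The heuristic names are Nielsen's;
# the problem list and counts are invented placeholders.
from collections import Counter

problems = [  # one entry per reported problem: the heuristic it violates
    "Visibility of system status",
    "Match between system and the real world",
    "Consistency and standards",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
]

counts = Counter(problems)
total = sum(counts.values())
for heuristic, n in counts.most_common():
    print(f"{heuristic:42s} {n:2d}  ({100 * n / total:.0f}% of problems)")
```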