
    Understanding the fidelity effect when evaluating games with children

    There have been a number of studies comparing evaluation results from prototypes of different fidelities, but very few of them involve children. This paper reports a comparative study of three prototypes, ranging from low fidelity to high fidelity, within the context of mobile games, using a between-subjects design with 37 participants aged 7 to 9. The children played a matching game on an iPad, a paper prototype using screenshots of the actual game, or a sketched version. Observational data was captured to establish the usability problems, and two tools from the Fun Toolkit were used to measure user experience. The results showed little difference in user experience between the three prototypes, and very few usability problems were unique to a specific prototype. The contribution of this paper is to show that children can effectively evaluate games of this genre and style using low-fidelity prototypes.

    Heuristic usability evaluation on games: a modular approach

    Heuristic evaluation is the preferred method for assessing usability in games when experts conduct the evaluation. Many heuristic guidelines have been proposed to address the specificities of games, but each focuses only on a particular subset of games or platforms. In fact, the most widely used guideline for evaluating game usability to date is still Nielsen's proposal, which targets generic software. As a result, most evaluations do not cover important aspects of games such as mobility, multiplayer interaction, enjoyability, and playability. To promote the use of new heuristics adapted to different game and platform aspects, we propose a modular approach based on classifying existing game heuristics with metadata, along with a tool, MUSE (Meta-heUristics uSability Evaluation tool) for games, which rebuilds heuristic guidelines from a metadata selection to produce a customized list for each real evaluation case. These rebuilt guidelines explicitly cover a wide range of usability aspects in games and improve the detection of usability issues. We preliminarily evaluate MUSE with an analysis of two different games, using both Nielsen's heuristics and the customized heuristic lists generated by our tool.
    Funding: Unión Europea PI055-15/E0
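
    To make the metadata-selection idea concrete, here is a minimal sketch of how rebuilding a guideline from tagged heuristics could work. It is only an illustration under assumptions: the Heuristic record, the tag vocabulary, and the catalogue entries are invented for the example and are not MUSE's actual data model.

    from dataclasses import dataclass, field

    @dataclass
    class Heuristic:
        """A usability heuristic annotated with metadata tags."""
        text: str
        tags: set[str] = field(default_factory=set)

    # Illustrative catalogue entries; the real MUSE metadata scheme is
    # not specified in the abstract, so these tags are assumptions.
    CATALOGUE = [
        Heuristic("Provide consistent feedback for player actions", {"generic"}),
        Heuristic("Support interrupting and resuming a play session", {"mobile"}),
        Heuristic("Make the status of other players visible", {"multiplayer"}),
        Heuristic("Balance challenge against the player's skill", {"playability"}),
    ]

    def build_guideline(selected_tags: set[str]) -> list[Heuristic]:
        """Rebuild a customized heuristic list from a metadata selection."""
        return [h for h in CATALOGUE if h.tags & selected_tags]

    # e.g. a customized list for evaluating a mobile multiplayer game:
    for h in build_guideline({"mobile", "multiplayer"}):
        print(h.text)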

    Effective Affective User Interface Design in Games

    It is proposed that games, which are designed to generate positive affect, are most successful when they facilitate flow (Csikszentmihalyi 1992). Flow is a state of concentration, deep enjoyment, and total absorption in an activity. The study of games, and the resulting understanding of flow in games, can inform the design of non-leisure software for positive affect. The paper considers the ways in which computer games contravene Nielsen's guidelines for heuristic evaluation (Nielsen and Molich 1990) and how these contraventions affect flow. The paper also explores the implications for research that stem from the differences between games played on a personal computer and games played on a dedicated console. This research takes important initial steps towards defining how flow in computer games can inform affective design.

    From Playability to a Hierarchical Game Usability Model

    This paper presents a brief review of current game usability models. This leads to the conception of a high-level, game-development-centered usability model that integrates current usability approaches in the game industry and game research.
    Comment: 2 pages, 1 figure

    Heuristic Evaluation for Serious Immersive Games and M-instruction

    Two fast-growing areas for technology-enhanced learning are serious games and mobile instruction (M-instruction or M-learning). Serious games are meant to be more than just entertainment: they have a serious use, such as educating or promoting other types of activity. Immersive games frequently involve many players interacting in a shared, rich, and complex (perhaps web-based) mixed-reality world, where their circumstances will be many and varied. Their reality may be augmented and often self-composed, as in a user-defined avatar in a virtual world. M-instruction and M-learning are learning on the move; much of modern computer use happens on smart devices, tablets, and laptops. People use these devices everywhere, so it is a natural extension to want to use them to learn wherever they are. This presents a problem if we wish to evaluate the effectiveness of the pedagogic media they are using: we have no way of knowing their situation, circumstances, educational background, and motivation, or potentially the customisation of the final software they are using. Reaching the end users themselves may also be problematic; these are learning environments that people will dip into at opportune moments. If access to the end user is hard because of location and user self-personalisation, then one solution is to examine the software before it goes out. Heuristic evaluation allows us to get User Interface (UI) and User Experience (UX) experts to reflect on the software before it is deployed. The effective use of heuristic evaluation with pedagogical software [1] is extended here with existing heuristic evaluation methods that make the technique applicable to serious immersive games and M-instruction. We also consider how existing heuristic methods may be adapted. The result represents a new way of making this methodology applicable to this new and developing area of learning technology.

    PLU-E: a proposed framework for planning and conducting evaluation studies with children.

    While many models exist to support the design process of a software development project, the evaluation process is far less well defined, and this lack of definition often leads to poorly designed evaluations or the use of the wrong evaluation method. Evaluations of products for children can be especially complex, as they need to consider the different requirements and aims that such a product may have, and they often use new or developing evaluation methods. This paper takes the view that evaluations should be planned from the start of a project in order to yield the best results, and it proposes a framework to facilitate this. The framework is particularly intended to support the varied and often conflicting requirements of a product designed for children, as defined by the PLU model, but it could be adapted for other user groups.

    Assessing fun: young children as evaluators of interactive systems.

    In this paper, we describe an exploratory study of the challenges of conducting usability tests with very young children aged 3 to 4 years old (nursery age) and the differences when working with older children aged 5 to 6 years old (primary school). A pilot study was conducted at local nursery and primary schools to understand and experience the challenges of working with young children interacting with computer products. We report on the studies and compare the experiences of working with children of different age groups in evaluation studies of interactive systems.