
    Heuristic Evaluation for Serious Immersive Games and M-instruction

    © Springer International Publishing Switzerland 2016. Two fast-growing areas for technology-enhanced learning are serious games and mobile instruction (M-instruction or M-Learning). Serious games are games intended to be more than entertainment: they have a serious purpose, such as education or the promotion of other kinds of activity. Immersive games frequently involve many players interacting in a shared, rich and complex (often web-based) mixed-reality world, where their circumstances are many and varied. Their reality may be augmented and is often self-composed, as with a user-defined avatar in a virtual world. M-instruction and M-Learning denote learning on the move; much modern computer use happens on smart devices, tablets, and laptops. People use these devices everywhere, so it is a natural extension to want to learn wherever they happen to be. This presents a problem if we wish to evaluate the effectiveness of the pedagogic media they are using: we have no way of knowing their situation, circumstances, educational background and motivation, or how the final software they use has been customised. Reaching the end users themselves may also be problematic, since these are learning environments that people dip into at opportune moments. If access to the end user is hard because of location and user self-personalisation, then one solution is to examine the software before it goes out. Heuristic Evaluation allows User Interface (UI) and User Experience (UX) experts to reflect on the software before it is deployed. The effective use of heuristic evaluation with pedagogical software [1] is extended here with existing Heuristic Evaluation Methods, making the technique applicable to Serious Immersive Games and mobile instruction (M-instruction). We also consider how existing heuristic methods may be adapted. The result is a new way of applying this methodology to this developing area of learning technology.

    Heuristic usability evaluation on games: a modular approach

    Heuristic evaluation is the preferred method for assessing usability in games when experts conduct the evaluation. Many heuristic guidelines have been proposed to address the specificities of games, but each focuses only on a specific subset of games or platforms. In fact, the guideline most used to evaluate game usability to date is still Nielsen's proposal, which targets generic software. As a result, most evaluations do not cover aspects that matter in games, such as mobility, multiplayer interaction, enjoyability and playability. To promote the use of new heuristics adapted to different game and platform characteristics, we propose a modular approach based on classifying existing game heuristics with metadata, together with a tool, MUSE (Meta-heUristics uSability Evaluation tool) for games, which rebuilds heuristic guidelines from a metadata selection to produce a customised list for each evaluation case. Using these rebuilt heuristic guidelines makes a wide range of game usability aspects explicit during evaluation and improves the detection of usability issues. We preliminarily evaluate MUSE by analysing two different games with both Nielsen's heuristics and the customised heuristic lists generated by our tool.
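
    To make the metadata-selection idea concrete, the following short Python sketch shows how a MUSE-style tool might rebuild a customised heuristic list from tagged heuristics. The heuristic texts, tag names, and the selection rule are illustrative assumptions, not MUSE's actual catalogue or implementation.

        # Sketch of rebuilding a heuristic guideline from a metadata selection.
        # Heuristics, tags, and the selection rule are hypothetical examples.
        from dataclasses import dataclass, field

        @dataclass
        class Heuristic:
            text: str
            tags: set = field(default_factory=set)  # e.g. platform, genre, interaction style

        CATALOGUE = [
            Heuristic("Provide clear feedback for every player action", {"generic"}),
            Heuristic("Controls remain usable on small touch screens", {"mobile"}),
            Heuristic("Other players' actions are visible and attributable", {"multiplayer"}),
            Heuristic("Difficulty ramps up gradually to sustain playability", {"playability"}),
        ]

        def rebuild_guideline(selected_tags):
            # Keep generic heuristics plus any whose tags overlap the evaluator's selection.
            return [h.text for h in CATALOGUE if "generic" in h.tags or h.tags & selected_tags]

        # An evaluator assessing a mobile multiplayer game might select:
        print(rebuild_guideline({"mobile", "multiplayer"}))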

    A Mixed Method Approach for Evaluating and Improving the Design of Learning in Puzzle Games

    Despite the acknowledgment that learning is a necessary part of all gameplay, the area of Games User Research lacks an established, evidence-based method through which designers and researchers can understand, assess, and improve how commercial games teach players game-specific skills and information. In this paper, we propose a mixed-method procedure that draws together quantitative and experiential approaches to examine the extent to which players are supported in learning about the game world and mechanics. We demonstrate the method through a case study of the game Portal involving 14 participants who differed in gaming expertise. By comparing optimum solutions to puzzles against observed player performance, we illustrate how the method can indicate particular problems with how learning is structured within a game. We argue that the method can highlight where major breakdowns occur and yield design insights that improve the player experience with puzzle games.
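
    A hedged sketch of the quantitative side of such a comparison follows: it measures how far each observed puzzle solution deviates from the optimum and flags likely learning breakdowns. The puzzle names and step counts are invented for illustration, not data from the Portal case study.

        # Compare observed player solutions against optimal puzzle solutions.
        # All numbers below are invented for illustration.
        OPTIMAL_STEPS = {"chamber_04": 6, "chamber_05": 9}

        observed = {
            "p01": {"chamber_04": 6, "chamber_05": 15},
            "p02": {"chamber_04": 11, "chamber_05": 10},
        }

        def excess_ratio(player_steps, puzzle):
            # Ratio of observed to optimal actions; values well above 1.0
            # suggest a possible breakdown in how the puzzle teaches its mechanic.
            return player_steps[puzzle] / OPTIMAL_STEPS[puzzle]

        for player, steps in observed.items():
            for puzzle in OPTIMAL_STEPS:
                print(player, puzzle, round(excess_ratio(steps, puzzle), 2))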

    From Playability to a Hierarchical Game Usability Model

    This paper presents a brief review of current game usability models. This review leads to the conception of a high-level, game-development-centred usability model that integrates current usability approaches in the game industry and in game research. (Comment: 2 pages, 1 figure)

    PLU-E: a proposed framework for planning and conducting evaluation studies with children.

    While many models exist to support the design process of a software development project, the evaluation process is far less well defined, and this lack of definition often leads to poorly designed evaluations or to the use of the wrong evaluation method. Evaluations of products for children can be especially complex, as they need to consider the different requirements and aims such a product may have and often rely on new or developing evaluation methods. This paper takes the view that evaluations should be planned from the start of a project in order to yield the best results, and proposes a framework to facilitate this. The framework is particularly intended to support the varied and often conflicting requirements of a product designed for children, as defined by the PLU model, but could be adapted for other user groups.

    A guidance and evaluation approach for mHealth education applications

    © Springer International Publishing AG 2017. A growing number of mobile applications for health education are being used to support different stakeholders, from health professionals to software developers, patients and more general users. There is a lack of a critical evaluation framework to ensure the usability and reliability of these mobile health education applications (MHEAs); such a framework would save time and effort for the different user groups. This paper describes a framework for evaluating mobile applications for health education, including a guidance tool to help different stakeholders select the application most suitable for them. The framework is intended to meet the needs and requirements of the different user categories, as well as to improve the development of MHEAs through software engineering approaches. A description of the evaluation framework is provided, with its efficient hybrid of selected heuristic evaluation (HE) and usability evaluation (UE) factors. Lastly, an account is given of the quantitative and qualitative results of applying the framework to Medscape and other mobile apps. The proposed framework, an Evaluation Framework for Mobile Health Education Apps, consists of a hybrid of five metrics selected from a larger set during heuristic and usability evaluation, the choice being based on interviews with patients, software developers and health professionals.
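
    As an illustration only, the sketch below shows one way a hybrid HE/UE scorecard could combine five weighted metrics into a single score for an app; the metric names, weights, and ratings are assumptions, not the five metrics actually selected in the paper or data about Medscape.

        # Hypothetical hybrid scorecard mixing heuristic-evaluation (HE) and
        # usability-evaluation (UE) factors; names and weights are assumptions.
        HYBRID_METRICS = {          # metric: weight (weights sum to 1.0)
            "learnability (UE)": 0.25,
            "error prevention (HE)": 0.20,
            "content reliability (HE)": 0.25,
            "efficiency of use (UE)": 0.15,
            "user satisfaction (UE)": 0.15,
        }

        def score_app(ratings):
            # Weighted average of 0-5 ratings for one mobile health education app.
            return sum(weight * ratings[metric] for metric, weight in HYBRID_METRICS.items())

        example_ratings = dict(zip(HYBRID_METRICS, (4, 3, 5, 4, 4)))
        print(round(score_app(example_ratings), 2))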

    A hybrid evaluation approach and guidance for mHealth education applications

    © Springer International Publishing AG 2018. Mobile health education applications (MHEAs) are used to support different users. However, although these applications are increasing in number, there is no effective evaluation framework for measuring their usability and thus saving effort and time for their many user groups. This paper outlines a framework for evaluating MHEAs, together with particular evaluation metrics: an efficient hybrid of selected heuristic evaluation (HE) and usability evaluation (UE) factors that enables the usefulness and usability of MHEAs to be determined. We also propose a guidance tool to help stakeholders choose the most suitable MHEA. The framework is envisioned as meeting the requirements of different users and as enhancing the development of MHEAs through software engineering approaches, by creating new and more effective evaluation techniques. Finally, we present qualitative and quantitative results for the framework when used with MHEAs.

    Multi-agent quality of experience control

    In the framework of the Future Internet, the aim of the Quality of Experience (QoE) control functionalities is to track the personalised desired QoE level of the applications. The paper proposes to perform this task by dynamically selecting the most appropriate Class of Service (among those supported by the network), the selection being driven by a novel heuristic Multi-Agent Reinforcement Learning (MARL) algorithm. The paper shows that such an approach offers the opportunity to cope with some practical implementation problems: in particular, it allows the so-called "curse of dimensionality" of MARL algorithms to be addressed, achieving satisfactory performance even in the presence of several hundred agents.
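
    As a rough illustration of the control loop, the sketch below uses independent tabular Q-learning agents, each choosing a Class of Service from a small set of coarse QoE states; the paper's actual heuristic MARL algorithm, state representation, and reward design are not reproduced here.

        # Independent per-application Q-learning agents selecting a Class of Service.
        # States, parameters, and rewards are illustrative assumptions.
        import random

        CLASSES_OF_SERVICE = [0, 1, 2]          # indices of network-supported CoS
        ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration

        class QoEAgent:
            def __init__(self):
                # One row of Q-values per coarse QoE state ("below", "at", "above" target),
                # keeping each agent's table small regardless of how many agents coexist.
                self.q = {s: [0.0] * len(CLASSES_OF_SERVICE) for s in ("below", "at", "above")}

            def select_cos(self, state):
                # Epsilon-greedy choice of the Class of Service for the next interval.
                if random.random() < EPSILON:
                    return random.choice(CLASSES_OF_SERVICE)
                return max(CLASSES_OF_SERVICE, key=lambda a: self.q[state][a])

            def update(self, state, action, reward, next_state):
                # Standard one-step Q-learning update from the observed QoE reward.
                best_next = max(self.q[next_state])
                self.q[state][action] += ALPHA * (reward + GAMMA * best_next - self.q[state][action])

        # Hundreds of such agents can run in parallel, each tracking its own QoE target.
        agents = [QoEAgent() for _ in range(300)]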