
    Heuristic usability evaluation on games: a modular approach

    Heuristic evaluation is the preferred method for assessing the usability of games when experts conduct the evaluation. Many heuristic guidelines have been proposed to address the specificities of games, but each focuses only on a particular subset of games or platforms. In fact, the guideline most widely used to evaluate game usability is still Nielsen’s proposal, which targets generic software. As a result, most evaluations do not cover aspects that matter in games, such as mobility, multiplayer interaction, enjoyability and playability. To promote the use of new heuristics adapted to different game and platform characteristics, we propose a modular approach based on classifying existing game heuristics with metadata, together with a tool, MUSE (Meta-heUristics uSability Evaluation tool) for games, which rebuilds heuristic guidelines from the metadata selected for each real evaluation case, yielding a customized list. Using these rebuilt guidelines makes a wide range of game usability aspects explicit during evaluation and improves the detection of usability issues. We preliminarily evaluate MUSE by analysing two different games with both Nielsen’s heuristics and the customized heuristic lists generated by our tool. (Unión Europea PI055-15/E0)
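    The heart of the approach above is a catalogue of heuristics tagged with metadata, from which a customized checklist is rebuilt for each evaluation. The snippet below is a minimal sketch of that metadata-selection idea, assuming a simple tag-set representation; the heuristic texts, tag names and the build_checklist helper are illustrative and are not MUSE's actual schema or API.

        # Minimal sketch of metadata-driven heuristic selection (illustrative only;
        # the tags, heuristic texts and helper below are hypothetical, not MUSE's
        # actual schema or API).
        from dataclasses import dataclass, field

        @dataclass
        class Heuristic:
            text: str                                 # the usability guideline itself
            tags: set = field(default_factory=set)    # metadata: platform, genre, aspect, ...

        CATALOGUE = [
            Heuristic("Provide clear feedback on player actions", {"generic"}),
            Heuristic("Controls should be usable with one hand", {"mobile"}),
            Heuristic("Show the connection status of other players", {"multiplayer"}),
            Heuristic("Difficulty should ramp up gradually", {"playability"}),
        ]

        def build_checklist(catalogue, selected_tags):
            """Rebuild a customized heuristic list from the metadata the evaluator selects."""
            return [h.text for h in catalogue
                    if "generic" in h.tags or h.tags & selected_tags]

        # Example: evaluating a mobile multiplayer game.
        for item in build_checklist(CATALOGUE, {"mobile", "multiplayer"}):
            print("-", item)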

    MRPR: a MapReduce solution for prototype reduction in big data classification

    In the era of big data, analyzing and extracting knowledge from large-scale data sets is a very interesting and challenging task. The application of standard data mining tools to such data sets is not straightforward. Hence, a new class of scalable mining methods that embrace the huge storage and processing capacity of cloud platforms is required. In this work, we propose a novel distributed partitioning methodology for prototype reduction techniques in nearest neighbor classification. These methods aim to represent the original training data set with a reduced number of instances. Their main purposes are to speed up the classification process and to reduce both the storage requirements and the sensitivity to noise of the nearest neighbor rule. However, standard prototype reduction methods cannot cope with very large data sets. To overcome this limitation, we develop a MapReduce-based framework that distributes the functioning of these algorithms across a cluster of computing elements, proposing several algorithmic strategies to integrate multiple partial solutions (reduced sets of prototypes) into a single one. The proposed model enables prototype reduction algorithms to be applied to big data classification problems without significant accuracy loss. We test the speed-up capabilities of our model with data sets of up to 5.7 million instances. The results show that this model is a suitable tool for enhancing the performance of the nearest neighbor classifier with big data.
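    The framework described above splits the training set, reduces each split independently in the map phase, and fuses the partial reduced sets in the reduce phase before applying the nearest neighbor rule. The following is a rough sketch of that pipeline, assuming plain Python and NumPy in place of a MapReduce cluster; the toy condensation step and the simple concatenation ("join") fusion are stand-ins for the paper's prototype reduction techniques and integration strategies.

        # Rough sketch of distributed prototype reduction (illustrative only: plain
        # Python stands in for a MapReduce cluster, and a toy condensed nearest
        # neighbour pass stands in for the paper's reduction algorithms).
        import numpy as np

        def condense(X, y):
            """Toy prototype reduction: keep instances the current store misclassifies."""
            keep = [0]
            for i in range(1, len(X)):
                d = np.linalg.norm(X[keep] - X[i], axis=1)
                if y[keep[int(np.argmin(d))]] != y[i]:   # misclassified -> keep as prototype
                    keep.append(i)
            return X[keep], y[keep]

        def map_phase(X, y, n_splits):
            """Partition the training set and reduce each split independently (the 'map' step)."""
            parts = np.array_split(np.arange(len(X)), n_splits)
            return [condense(X[idx], y[idx]) for idx in parts]

        def reduce_phase(partials):
            """Fuse the partial reduced sets; here a simple join (concatenation) strategy."""
            Xs, ys = zip(*partials)
            return np.vstack(Xs), np.concatenate(ys)

        def predict_1nn(Xp, yp, queries):
            """Classify each query with the nearest prototype (1-NN rule)."""
            return np.array([yp[np.argmin(np.linalg.norm(Xp - q, axis=1))] for q in queries])

        # Example with synthetic data.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 2)); y = (X[:, 0] + X[:, 1] > 0).astype(int)
        Xp, yp = reduce_phase(map_phase(X, y, n_splits=4))
        print(len(Xp), "prototypes;", (predict_1nn(Xp, yp, X) == y).mean(), "training accuracy")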

    Investigation on prototype learning.

    Keung Chi-Kin. Thesis (M.Phil.)--Chinese University of Hong Kong, 2000. Includes bibliographical references (leaves 128-135). Abstracts in English and Chinese.
    Contents:
    Chapter 1 Introduction: 1.1 Classification; 1.2 Instance-Based Learning (1.2.1 Three Basic Components; 1.2.2 Advantages; 1.2.3 Disadvantages); 1.3 Thesis Contributions; 1.4 Thesis Organization
    Chapter 2 Background: 2.1 Improving Instance-Based Learning (2.1.1 Scaling-up Nearest Neighbor Searching; 2.1.2 Data Reduction); 2.2 Prototype Learning (2.2.1 Objectives; 2.2.2 Two Types of Prototype Learning); 2.3 Instance-Filtering Methods (2.3.1 Retaining Border Instances; 2.3.2 Removing Border Instances; 2.3.3 Retaining Center Instances; 2.3.4 Advantages; 2.3.5 Disadvantages); 2.4 Instance-Abstraction Methods (2.4.1 Advantages; 2.4.2 Disadvantages); 2.5 Other Methods; 2.6 Summary
    Chapter 3 Integration of Filtering and Abstraction: 3.1 Incremental Integration (3.1.1 Motivation; 3.1.2 The Integration Method; 3.1.3 Issues); 3.2 Concept Integration (3.2.1 Motivation; 3.2.2 The Integration Method; 3.2.3 Issues); 3.3 Difference between Integration Methods and Composite Classifiers
    Chapter 4 The PGF Framework: 4.1 The PGF1 Algorithm (4.1.1 Instance-Filtering Component; 4.1.2 Instance-Abstraction Component); 4.2 The PGF2 Algorithm; 4.3 Empirical Analysis (4.3.1 Experimental Setup; 4.3.2 Results of PGF Algorithms; 4.3.3 Analysis of PGF1; 4.3.4 Analysis of PGF2; 4.3.5 Overall Behavior of PGF; 4.3.6 Comparisons with Other Approaches); 4.4 Time Complexity (4.4.1 Filtering Components; 4.4.2 Abstraction Component; 4.4.3 PGF Algorithms); 4.5 Summary
    Chapter 5 Integrated Concept Prototype Learner: 5.1 Motivation; 5.2 Abstraction Component (5.2.1 Issues for Abstraction; 5.2.2 Investigation on Typicality; 5.2.3 Typicality in Abstraction; 5.2.4 The TPA Algorithm; 5.2.5 Analysis of TPA); 5.3 Filtering Component (5.3.1 Investigation on Associate; 5.3.2 The RT2 Algorithm; 5.3.3 Analysis of RT2); 5.4 Concept Integration (5.4.1 The ICPL Algorithm; 5.4.2 Analysis of ICPL); 5.5 Empirical Analysis (5.5.1 Experimental Setup; 5.5.2 Results of ICPL Algorithm; 5.5.3 Comparisons with Pure Abstraction and Pure Filtering; 5.5.4 Comparisons with Other Approaches); 5.6 Time Complexity; 5.7 Summary
    Chapter 6 Conclusions and Future Work: 6.1 Conclusions; 6.2 Future Work
    Bibliography
    Appendix A Detailed Information for Tested Data Sets
    Appendix B Detailed Experimental Results for PGF
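    The outline above revolves around combining instance filtering (keeping or discarding selected training instances) with instance abstraction (replacing groups of instances with artificial prototypes). As a rough, generic illustration of that combination only, and not the thesis's PGF, TPA, RT2 or ICPL algorithms, the sketch below runs an edited-nearest-neighbour style filter and then abstracts each class to a few centroids.

        # Generic sketch of filtering followed by abstraction (illustrative only;
        # this is NOT the thesis's PGF/TPA/RT2/ICPL algorithms).
        import numpy as np

        def edited_nn_filter(X, y, k=3):
            """Filtering: drop instances whose k nearest neighbours mostly disagree with them."""
            keep = []
            for i in range(len(X)):
                d = np.linalg.norm(X - X[i], axis=1)
                nn = np.argsort(d)[1:k + 1]              # skip the instance itself
                if np.bincount(y[nn]).argmax() == y[i]:  # integer class labels assumed
                    keep.append(i)
            return X[keep], y[keep]

        def abstract_to_centroids(X, y, per_class=5):
            """Abstraction: replace each class by a handful of centroids (naive k-means)."""
            protos, labels = [], []
            for c in np.unique(y):
                Xc = X[y == c]                           # assumes >= per_class instances per class
                centres = Xc[np.random.default_rng(0).choice(len(Xc), per_class, replace=False)]
                for _ in range(10):                      # a few Lloyd iterations
                    assign = np.argmin(((Xc[:, None] - centres) ** 2).sum(-1), axis=1)
                    centres = np.array([Xc[assign == j].mean(0) if (assign == j).any() else centres[j]
                                        for j in range(per_class)])
                protos.append(centres)
                labels.extend([c] * per_class)
            return np.vstack(protos), np.array(labels)

        # Filter first, then abstract what remains into a compact prototype set.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(300, 2)); y = (X[:, 0] > 0).astype(int)
        Xp, yp = abstract_to_centroids(*edited_nn_filter(X, y), per_class=5)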

    Personalised trails and learner profiling within e-learning environments

    This deliverable focuses on personalisation and personalised trails. We begin by introducing and defining the concepts of personalisation and personalised trails. Personalisation requires that a user profile be stored, so we assess currently available standard profile schemas and discuss the requirements a profile must meet to support personalised learning. We then review techniques for providing personalisation and some systems that implement them, and discuss some of the issues around evaluating personalisation systems. We look especially at the use of learning and cognitive styles to support personalised learning. We also consider personalisation in the field of mobile learning, which takes a slightly different view of the subject, and in commercially available systems, where support for personalisation is currently found only at quite a low level. We conclude with a summary of the lessons to be learned from our review of personalisation and personalised trails.
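    A personalised trail of this kind ultimately rests on a stored learner profile plus rules or models that match resources to it. The toy sketch below, assuming a made-up profile with a single learning-style field and a hand-written selection rule, only illustrates that profile-driven selection; it is not taken from any of the standard profile schemas or systems reviewed in the deliverable.

        # Toy sketch of profile-driven personalisation (illustrative only; the
        # profile fields and selection rule are hypothetical, not a standard schema).
        from dataclasses import dataclass

        @dataclass
        class LearnerProfile:
            learner_id: str
            learning_style: str          # e.g. "visual" or "verbal"
            completed: tuple = ()        # trail of resources already visited

        RESOURCES = {
            "intro-video": {"topic": "intro", "style": "visual"},
            "intro-text":  {"topic": "intro", "style": "verbal"},
            "quiz-1":      {"topic": "intro", "style": "any"},
        }

        def next_step(profile: LearnerProfile):
            """Pick the next resource: unseen and matching the learner's declared style."""
            for name, meta in RESOURCES.items():
                if name in profile.completed:
                    continue
                if meta["style"] in (profile.learning_style, "any"):
                    return name
            return None

        alice = LearnerProfile("alice", "visual", completed=("intro-video",))
        print(next_step(alice))          # -> "quiz-1" once the visual intro has been seen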