The experience-driven perspective
Ultimately, content is generated for the player. But so far, our algorithms
have not taken specific players into account. Creating computational models of a
player’s behaviour, preferences, or skills is called player modelling. With a model
of the player, we can create algorithms that create content specifically tailored to
that player. The experience-driven perspective on procedural content generation provides
a framework for content generation based on player modelling; one of the most
important ways of doing this is to use a player model in the evaluation function for
search-based PCG. This chapter discusses different ways of collecting and encoding
data about the player, primarily player experience, and ways of modelling this data.
It also gives examples of different ways in which such models can be used.
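The core idea described above, a player model serving as the evaluation function of search-based PCG, can be sketched as follows. This is a minimal toy example: the content representation, the `player_model` enjoyment predictor, and all parameters are illustrative assumptions, not taken from the chapter.

```python
import random

def player_model(content, skill):
    """Hypothetical player model: predicted enjoyment peaks when the
    content's average difficulty matches the player's skill level."""
    difficulty = sum(content) / len(content)
    return 1.0 - abs(difficulty - skill)

def evaluate(content, skill):
    # The player model serves directly as the search's fitness function.
    return player_model(content, skill)

def evolve_content(skill, length=8, pop_size=20, generations=50, seed=0):
    """Simple truncation-selection evolution over real-valued content vectors."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: evaluate(c, skill), reverse=True)
        survivors = pop[: pop_size // 2]
        # Each survivor produces one mutated child, clamped to [0, 1].
        children = [[min(1.0, max(0.0, g + rng.gauss(0, 0.05))) for g in p]
                    for p in survivors]
        pop = survivors + children
    return max(pop, key=lambda c: evaluate(c, skill))

best = evolve_content(skill=0.7)
```

Swapping in a different player model (or a different skill estimate for the same player) changes what the search converges to, which is exactly the personalisation mechanism the chapter describes.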
Extending neuro-evolutionary preference learning through player modelling
In this paper we propose a methodology for improving the accuracy of models that predict self-reported pairwise player preferences. Our approach extends neuro-evolutionary preference learning by embedding a player modeling module for the prediction of player preferences. Player types are identified using self-organization and feed the preference learner. Our experiments on a dataset derived from a game survey of subjects playing a 3D prey/predator game demonstrate that the proposed player-model-driven approach significantly improves preference learning performance and shows promise for the construction of more accurate cognitive and affective models.
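The pipeline the abstract describes, self-organised player types feeding a pairwise preference learner, might be sketched roughly as follows. Both components are heavily simplified stand-ins: a tiny 1-D SOM replaces the paper's self-organising map, and a linear perceptron-style ranker replaces the neuro-evolutionary learner.

```python
def sq_dist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def train_som(data, n_units=2, epochs=50, lr=0.5):
    """Tiny 1-D self-organising map: clusters player feature vectors
    into player types. Units are initialised from evenly spaced data
    points for determinism (assumes n_units >= 2)."""
    units = [list(data[i * (len(data) - 1) // (n_units - 1)])
             for i in range(n_units)]
    for _ in range(epochs):
        for x in data:
            bmu = min(range(n_units), key=lambda i: sq_dist(units[i], x))
            units[bmu] = [u + lr * (xi - u) for u, xi in zip(units[bmu], x)]
        lr *= 0.9  # cool the learning rate each epoch
    return units

def player_type(units, x):
    """Assign a player to the nearest SOM unit (their player type)."""
    return min(range(len(units)), key=lambda i: sq_dist(units[i], x))

def train_preference_model(pairs, n_features, epochs=100, lr=0.1):
    """Linear pairwise-preference learner: finds w such that
    w.f(preferred) > w.f(other) for each recorded pair. In the paper
    a separate (neuro-evolved) model would be trained per player type."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for fa, fb in pairs:  # fa is the session the player preferred
            diff = [a - b for a, b in zip(fa, fb)]
            if sum(wi * di for wi, di in zip(w, diff)) <= 0:
                w = [wi + lr * di for wi, di in zip(w, diff)]  # perceptron step
    return w
```

The point of the paper's architecture is that the preference learner sees only pairs from one player type at a time, so each model fits a more homogeneous population.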
Exploring Apprenticeship Learning for Player Modelling in Interactive Narratives
In this paper we present an early Apprenticeship Learning approach to mimic
the behaviour of different players in a short adaptation of the interactive
fiction Anchorhead. Our motivation is the need to understand and simulate
player behaviour to create systems to aid the design and personalisation of
Interactive Narratives (INs). INs are partially observable for the players and
their goals are dynamic as a result. We used Receding Horizon IRL (RHIRL) to
learn players' goals in the form of reward functions, and derive policies to
imitate their behaviour. Our preliminary results suggest that RHIRL is able to
learn action sequences to complete a game, and provides insights toward
generating behaviour more similar to that of specific players.

Comment: Extended Abstracts of the 2019 Annual Symposium on Computer-Human Interaction in Play (CHI Play).
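The receding-horizon planning step that such an approach uses to derive imitation policies can be sketched as below. This is an assumption-laden toy: the reward function is taken as given (in RHIRL it is what gets learned from player traces), and the enumeration-based planner only works for small discrete action sets.

```python
from itertools import product

def receding_horizon_action(state, actions, step, reward, horizon=3):
    """One planning step of a receding-horizon controller: enumerate
    every action sequence up to `horizon`, score it under the (learned)
    reward function, and return only the first action of the best
    sequence. The agent replans from the new state at every step."""
    best_seq, best_val = None, float("-inf")
    for seq in product(actions, repeat=horizon):
        s, total = state, 0.0
        for a in seq:
            s = step(s, a)       # simulate the action
            total += reward(s)   # accumulate the learned reward
        if total > best_val:
            best_val, best_seq = total, seq
    return best_seq[0]
```

In a toy corridor world where the learned reward pulls the agent toward a goal position, the planner picks whichever step moves it closer; a reward recovered from a particular player's traces would instead pull the policy toward that player's habits.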
Emulating Human Play in a Leading Mobile Card Game
Monte Carlo Tree Search (MCTS) has become a popular solution for game AI, capable of creating strong game playing opponents. However, the emergent playstyle of agents using MCTS is not necessarily human-like, believable or enjoyable. AI Factory Spades, currently the top rated Spades game in the Google Play store, uses a variant of MCTS to control AI allies and opponents. In collaboration with the developers, we showed in a previous study that the playstyle of human players significantly differed from that of the AI players [1]. This article presents a method for player modelling using gameplay data and neural networks that does not require domain knowledge, and a method of biasing MCTS with such a player model to create Spades playing agents that emulate human play whilst maintaining strong, competitive performance. The methods of player modelling and biasing MCTS presented in this study are applied to the commercial codebase of AI Factory Spades, and are transferable to MCTS implementations for discrete-action games where relevant gameplay data is available.
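One common way to bias MCTS with a player model, adding a decaying model prior to the UCB selection score (PUCT-style), can be sketched as follows. This is a generic illustration, not the formula used in AI Factory Spades; the constants and decay schedule are arbitrary assumptions.

```python
import math

def biased_ucb(parent_visits, child_wins, child_visits, prior, c=1.4, bias=2.0):
    """UCB1 selection score augmented with a player-model prior:
    moves the model predicts a human would make are explored
    preferentially. `prior` is the model's probability for this move."""
    if child_visits == 0:
        return float("inf")  # always expand unvisited children first
    exploit = child_wins / child_visits
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    # The model bias decays as the child accumulates visits, so the
    # search still converges on its own value estimates in the limit.
    model_bias = bias * prior / (1 + child_visits)
    return exploit + explore + model_bias
```

With two statistically identical children, the one the player model favours gets selected first, which is what nudges the emergent playstyle toward human-like move choices while the win-rate term preserves playing strength.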
How to advance general game playing artificial intelligence by player modelling
General game playing artificial intelligence has recently seen important advances due to the various techniques known as 'deep learning'. However, the advances conceal equally important limitations in their reliance on: massive data sets; fortuitously constructed problems; and the absence of any human-level complexity, including other human opponents. On the other hand, deep learning systems which do beat human champions, such as in Go, do not generalise well. The power of deep learning simultaneously exposes its weakness. Given that deep learning is mostly clever reconfiguration of well-established methods, moving beyond the state of the art calls for forward-thinking, visionary solutions, not just more of the same. I present the argument that general game playing artificial intelligence will require a generalised player model. This is because games are inherently human artefacts which therefore, as a class of problems, contain cases that require a human-style problem-solving approach. I relate this argument to the performance of state-of-the-art general game playing agents. I then describe a concept for a formal category-theoretic basis for a generalised player model. This formal approach integrates my existing 'Behavlets' method for psychologically derived player modelling: Cowley, B., Charles, D. (2016). Behavlets: a Method for Practical Player Modelling using Psychology-Based Player Traits and Domain Specific Features. User Modeling and User-Adapted Interaction, 26(2), 257-306.
Making Racing Fun Through Player Modeling and Track Evolution
This paper addresses the problem of automatically constructing tracks tailor-made to maximize the enjoyment of individual players in a simple car racing game. To this end, some approaches to player modeling are investigated, and a method of using evolutionary algorithms to construct racing tracks is presented. A simple player-dependent metric of entertainment is proposed and used as the fitness function when evolving tracks. We conclude that accurate player modeling poses some significant challenges, but track evolution works well given the right track representation.
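A player-dependent entertainment metric used as an evolutionary fitness function might look something like the following sketch. The metric here is hypothetical (fast average speed plus varied speed as a proxy for challenge), and the target values are purely illustrative, not the paper's.

```python
def entertainment_fitness(speeds, target_mean=0.8, target_var=0.04):
    """Hypothetical player-dependent entertainment metric: a track
    scores higher when the modelled player drives fast on average but
    with varied speed (suggesting alternating challenge). The targets
    would be derived from the player model; the numbers here are
    illustrative assumptions."""
    n = len(speeds)
    mean = sum(speeds) / n
    var = sum((s - mean) ** 2 for s in speeds) / n
    # The closer the lap profile is to the player's preferred
    # profile, the higher the fitness of the track that produced it.
    return 1.0 / (1.0 + abs(mean - target_mean) + abs(var - target_var))
```

During track evolution, each candidate track would be driven by a model of the player, the resulting speed profile scored by this function, and the score used as the track's fitness, so different players' models evolve different tracks.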