
    Exploring Apprenticeship Learning for Player Modelling in Interactive Narratives

    In this paper we present an early Apprenticeship Learning approach to mimicking the behaviour of different players in a short adaptation of the interactive fiction Anchorhead. Our motivation is the need to understand and simulate player behaviour in order to build systems that aid the design and personalisation of Interactive Narratives (INs). INs are partially observable for players, and their goals are therefore dynamic. We used Receding Horizon IRL (RHIRL) to learn players' goals in the form of reward functions and to derive policies that imitate their behaviour. Our preliminary results suggest that RHIRL is able to learn action sequences to complete a game, and it provides insights into generating behaviour more similar to that of specific players.
    Comment: Extended Abstracts of the 2019 Annual Symposium on Computer-Human Interaction in Play (CHI PLAY)
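    The following is a minimal sketch of the general idea only, not the authors' RHIRL implementation: a linear reward over state features is fitted so that the imitating policy's feature expectations move toward those observed from a player, and actions are then chosen by a short receding-horizon lookahead. The environment interface (reset, step, features, simulate, actions) and all hyperparameters are illustrative assumptions.

```python
# Sketch only: apprenticeship-learning-style reward fitting plus a
# receding-horizon policy, under an assumed environment interface.
import numpy as np

def feature_expectations(env, policy, episodes=20, gamma=0.95):
    """Discounted average of state features visited by a policy."""
    total = np.zeros(env.n_features)
    for _ in range(episodes):
        state, done, t = env.reset(), False, 0
        while not done:
            total += (gamma ** t) * env.features(state)
            state, done = env.step(policy(state))  # assumed (next_state, done)
            t += 1
    return total / episodes

def horizon_policy(env, w, horizon=5):
    """Greedy lookahead for the linear reward r(s) = w . features(s)."""
    def act(state):
        best_action, best_value = None, -np.inf
        for a in env.actions(state):
            s, value = env.simulate(state, a), 0.0   # assumed one-step model
            for _ in range(horizon):                 # greedy rollout to the horizon
                value += w @ env.features(s)
                s = env.simulate(
                    s, max(env.actions(s),
                           key=lambda b: w @ env.features(env.simulate(s, b))))
            if value > best_value:
                best_action, best_value = a, value
        return best_action
    return act

def learn_reward(env, mu_player, iters=30, lr=0.1):
    """Push the learner's feature expectations toward the player's."""
    w = np.zeros(env.n_features)
    for _ in range(iters):
        mu = feature_expectations(env, horizon_policy(env, w))
        w += lr * (mu_player - mu)  # projection-style update on the reward weights
    return w, horizon_policy(env, w)
```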

    Improving Goal Recognition in Interactive Narratives with Models of Narrative Discovery Events

    Computational models of goal recognition hold considerable promise for enhancing the capabilities of drama managers and director agents for interactive narratives. The problem of goal recognition, and its more general form, plan recognition, has been the subject of extensive investigation in the AI community. However, there have been relatively few empirical investigations of goal recognition models in the intelligent narrative technologies community to date, and little is known about how computational models of interactive narrative can inform goal recognition. In this paper, we investigate a novel goal recognition model based on Markov Logic Networks (MLNs) that leverages narrative discovery events to enrich its representation of narrative state. An empirical evaluation shows that the enriched model outperforms a prior state-of-the-art MLN model in terms of accuracy, convergence rate, and the point of convergence.
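    As a rough illustration of the modelling idea (not the paper's model), MLN-style goal recognition can be viewed as a log-linear model: weighted features count how well the observed actions and narrative discovery events support each candidate goal, and the goal posterior is a softmax over those weighted counts. The feature functions, weights, and goal descriptions below are assumptions made for this sketch.

```python
# Sketch only: goal recognition as a log-linear (MLN-style) model over
# weighted features of observed actions and narrative discovery events.
import math

def goal_posterior(goals, observed_actions, discovery_events, features, weights):
    """P(goal | evidence) proportional to exp(sum_i w_i * f_i(goal, evidence))."""
    scores = {
        g["name"]: sum(w * f(g, observed_actions, discovery_events)
                       for f, w in zip(features, weights))
        for g in goals
    }
    z = sum(math.exp(s) for s in scores.values())  # normalising constant
    return {name: math.exp(s) / z for name, s in scores.items()}

# Illustrative feature functions (assumed, not taken from the paper).
def action_support(goal, actions, events):
    return sum(1 for a in actions if a in goal["relevant_actions"])

def discovery_support(goal, actions, events):
    return sum(1 for e in events if e in goal["relevant_discoveries"])

goals = [
    {"name": "open_safe",
     "relevant_actions": {"examine_desk", "take_key"},
     "relevant_discoveries": {"found_combination"}},
    {"name": "leave_house",
     "relevant_actions": {"open_front_door", "take_key"},
     "relevant_discoveries": {"found_exit_map"}},
]

posterior = goal_posterior(
    goals,
    observed_actions=["take_key", "examine_desk"],
    discovery_events=["found_combination"],
    features=[action_support, discovery_support],
    weights=[1.0, 2.0],  # discovery events weighted more heavily
)
print(posterior)
```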