2 research outputs found

    Reusing skills for first-time solution of navigation tasks in platform videogames

    We consider the problem of performing real-time navigation in domains where a "god's eye view" is provided. One setting where this challenge arises is in platform videogames, occurring whenever the player wishes to reach an item or power-up on the current screen. Previous agents for these games rely on generating many low-level simulations or training runs for each fixed task. Human players, on the other hand, can solve navigation tasks at a high level by visualising sequences of abstract "skills". Based on this intuition, we introduce a novel planning approach and apply it to Infinite Mario. Despite facing randomly generated, maze-like tasks, our agent is capable of deriving complex plans in real time, without exploiting precise knowledge of the game's code.
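    The idea of visualising sequences of abstract "skills" amounts to searching over skill-level transitions instead of frame-level inputs. A minimal sketch, assuming an invented skill set and a toy grid of screen positions (none of which come from the paper itself):

    ```python
    from collections import deque

    # Hypothetical illustration: each "skill" maps a discrete screen position
    # to the position it reaches, abstracting away many frames of control.
    SKILLS = {
        "walk_right": lambda pos: (pos[0] + 1, pos[1]),
        "walk_left":  lambda pos: (pos[0] - 1, pos[1]),
        "jump_up":    lambda pos: (pos[0], pos[1] + 1),
        "drop_down":  lambda pos: (pos[0], pos[1] - 1),
    }

    def plan(start, goal, passable):
        """Breadth-first search for a sequence of skills from start to goal."""
        frontier = deque([(start, [])])
        visited = {start}
        while frontier:
            pos, skills = frontier.popleft()
            if pos == goal:
                return skills
            for name, effect in SKILLS.items():
                nxt = effect(pos)
                if nxt in passable and nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, skills + [name]))
        return None  # goal unreachable with the available skills

    # A tiny 3x3 "screen" where every cell is reachable.
    grid = {(x, y) for x in range(3) for y in range(3)}
    print(plan((0, 0), (2, 1), grid))
    # → ['walk_right', 'walk_right', 'jump_up']
    ```

    Because the search branches over a handful of skills rather than per-frame button presses, plans of this kind can be found quickly enough for real-time use, which is the intuition the abstract describes.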

    Learning and planning in videogames via task decomposition

    Artificial intelligence (AI) methods have come a long way in tabletop games, with computer programs having now surpassed human experts in the challenging games of chess, Go and heads-up no-limit Texas hold'em. However, a significant simplifying factor in these games is that individual decisions have a relatively large impact on the state of the game. The real world, by contrast, is granular. Human beings are continually presented with new information and are faced with making a multitude of tiny decisions every second. Viewed in these terms, feedback is often sparse, meaning that it only arrives after one has made a great number of decisions. Moreover, in many real-world problems there is a continuous range of actions to choose from, and attaining meaningful feedback from the environment often requires a strong degree of action coordination. Videogames, in which players must likewise contend with granular time scales and continuous action spaces, are in this sense a better proxy for real-world problems, and have thus come to be regarded by many as the new frontier in games AI. Seemingly, the way in which human players approach granular decision-making in videogames is by decomposing complex tasks into high-level subproblems, thereby allowing them to focus on the "big picture". For example, in Super Mario World, human players seem to look ahead in extended steps, such as climbing a vine or jumping over a pit, rather than planning one frame at a time. Currently though, this type of reasoning does not come easily to machines, leaving many open research problems related to task decomposition. This thesis focuses on three such problems in particular: (1) The challenge of learning subgoals autonomously, so as to lessen the issue of sparse feedback. (2) The challenge of combining discrete planning techniques with extended actions whose durations and effects on the environment are uncertain. (3) The questions of when and why it is beneficial to reason over high-level continuous control variables, such as the velocity of a player-controlled ship, rather than over the most low-level actions available. We address these problems via new algorithms and novel experimental design, demonstrating empirically that our algorithms are more efficient than strong baselines that do not leverage task decomposition, and yielding insight into the types of environment where task decomposition is likely to be beneficial.
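    Challenge (2) above, planning with extended actions whose durations and effects are uncertain, can be illustrated with a small Monte-Carlo sketch. The actions and their outcome distributions here are invented for illustration and are not taken from the thesis:

    ```python
    import random

    # Hypothetical extended actions: each returns (success, duration_in_frames)
    # sampled from an invented outcome distribution.
    def long_jump(rng):
        return rng.random() < 0.7, rng.randint(20, 40)

    def climb_vine(rng):
        return rng.random() < 0.95, rng.randint(50, 70)

    def estimate(action, rng, n=10_000):
        """Monte-Carlo estimate of an action's success rate and mean duration."""
        wins, frames = 0, 0
        for _ in range(n):
            ok, dur = action(rng)
            wins += ok
            frames += dur
        return wins / n, frames / n

    rng = random.Random(0)
    for name, action in [("long_jump", long_jump), ("climb_vine", climb_vine)]:
        p, d = estimate(action, rng)
        print(f"{name}: success rate ~{p:.2f}, mean duration ~{d:.0f} frames")
    ```

    A discrete planner built on top of such estimates would weigh an action's expected duration against its failure risk when choosing between extended steps, rather than reasoning one frame at a time.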