877 research outputs found

    Diversity-based Deep Reinforcement Learning Towards Multidimensional Difficulty for Fighting Game AI

    Full text link
    In fighting games, individual players of the same skill level often exhibit distinct strategies from one another through their gameplay. Despite this, the majority of AI agents for fighting games have only a single strategy for each "level" of difficulty. To make AI opponents more human-like, we would ideally like to see multiple distinct strategies at each level of difficulty, a concept we refer to as "multidimensional" difficulty. In this paper, we introduce a diversity-based deep reinforcement learning approach for generating a set of agents of similar difficulty that utilize diverse strategies. We find this approach outperforms a baseline trained with specialized, human-authored reward functions in both diversity and performance. Comment: 8 pages, 2 figures, Experimental AI in Games 202
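    The abstract does not specify the training objective, so as context only: one common way to realize "diversity-based" reinforcement learning is to shape each agent's reward with a divergence bonus against its peers' policies. A minimal sketch of that idea, assuming a Jensen-Shannon term and a fixed weight (these choices and all names are illustrative, not the paper's method):

```python
import numpy as np

def js_distance(p, q):
    """Jensen-Shannon divergence between two action distributions."""
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + 1e-12) / (b + 1e-12)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def diversity_bonus(policy_probs, peer_probs, weight=0.1):
    """Hypothetical bonus: reward an agent for acting unlike its peers."""
    return weight * np.mean([js_distance(policy_probs, q) for q in peer_probs])

# Shaped reward = environment reward + diversity bonus (toy numbers).
probs = np.array([0.7, 0.2, 0.1])   # this agent's action distribution
peers = [np.array([0.1, 0.8, 0.1]), np.array([0.3, 0.3, 0.4])]
print(1.0 + diversity_bonus(probs, peers))
```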

    Artificial intelligence in co-operative games with partial observability

    Get PDF
    This thesis investigates Artificial Intelligence in co-operative games that feature Partial Observability. Most video games feature a combination of co-operation and Partial Observability. Co-operative games are games in which a team of at least two agents must achieve a shared goal of some kind. Partial Observability is the restriction on how much of an environment an agent can observe. The research performed in this thesis examines the challenge of creating Artificial Intelligence for co-operative games that feature Partial Observability. The main contributions are: a demonstration that Monte-Carlo Tree Search outperforms Genetic Algorithm based agents in solving co-operative problems without communication; the creation of a co-operative Partial Observability competition promoting Artificial Intelligence research; an investigation of the effect of varying Partial Observability on Artificial Intelligence; and the creation of a high-performing Monte-Carlo Tree Search agent for the game Hanabi that uses agent modelling to reason about other players.
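    For context, Monte-Carlo Tree Search is often adapted to Partial Observability by determinization: sampling a fully observable world state consistent with the agent's observations, then searching as usual with UCB1 selection. A toy sketch of those two ingredients (the thesis's actual agents are not reproduced here; all names and the toy card domain are illustrative):

```python
import math
import random

def ucb1(parent_visits, visits, total_value, c=math.sqrt(2)):
    """UCT selection rule: exploit mean value, explore rarely-tried children."""
    if visits == 0:
        return float("inf")
    return total_value / visits + c * math.sqrt(math.log(parent_visits) / visits)

def determinize(observation, rng):
    """Sample one fully observable world consistent with what has been seen
    (toy rule: the hidden card is any card not already observed)."""
    return rng.choice([c for c in range(5) if c not in observation])

rng = random.Random(0)
# One determinized decision: sample hidden info, then score children with UCB1.
observation = {1, 3}
world = determinize(observation, rng)
stats = {"a": (10, 6.0), "b": (3, 2.5), "c": (0, 0.0)}  # action -> (visits, value)
parent = sum(v for v, _ in stats.values())
best = max(stats, key=lambda a: ucb1(parent, *stats[a]))
print(world, best)
```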

    Application of Retrograde Analysis to Fighting Games

    Get PDF
    With the advent of the fighting game AI competition, there has been recent interest in two-player fighting games. Monte-Carlo Tree Search approaches currently dominate the competition, but it is unclear whether this is the best approach for all fighting games. In this thesis we study the design of two-player fighting games and the consequences of that design for the types of AI that should be used to play the game, and we formally define the state space on which fighting games are based. Additionally, we characterize how AI can solve the game under a simultaneous-action game model, in order to understand the characteristics of the solved AI and its impact on game design.
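    Retrograde analysis, named in the title, solves a game by labelling terminal states and propagating values backwards through predecessor states until a fixed point. A toy single-mover sketch of that propagation (real fighting games are two-player with simultaneous actions, which this simplification deliberately ignores; the graph and labels are invented):

```python
from collections import defaultdict

# Hypothetical toy game graph: state -> successor states.
succ = {0: [1, 2], 1: [3], 2: [3, 4], 3: [], 4: []}
terminal_value = {3: "WIN", 4: "LOSS"}

value = dict(terminal_value)
pred = defaultdict(list)
for s, ns in succ.items():
    for n in ns:
        pred[n].append(s)

# Work backwards from terminal states until no new state can be labelled.
frontier = list(terminal_value)
while frontier:
    s = frontier.pop()
    for p in pred[s]:
        if p in value:
            continue
        kids = [value.get(n) for n in succ[p]]
        if "WIN" in kids:                        # the mover can reach a win
            value[p] = "WIN"
            frontier.append(p)
        elif all(k == "LOSS" for k in kids):     # every move loses
            value[p] = "LOSS"
            frontier.append(p)

print(value)
```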

    A GA-guided Trial-based Heuristic Tree Search Approach for Multi-Agent Package Delivery Planning

    Get PDF
    A multitude of planning and scheduling applications must meet constrained time deadlines while proposing appropriate policy solutions under uncertainty. An example is the last-mile delivery problem, in which a large fleet of drones must be managed across a broad urban area to efficiently deliver packages in response to immediate known requests and likely future requests. This application can be seen as a sequential decision-making problem under uncertainty that demands a good solution within a constrained time deadline. In this context, this work proposes to compute a delivery policy using a combination of Trial-based Heuristic Tree Search (THTS) and a Genetic Algorithm (GA). Specifically, during policy-search trials, the GA is used as a meta-heuristic inside the THTS paradigm to suggest the most cost-promising actions for immediate drone-request allocation given the current set of requests (see the sketch below). The THTS algorithm then exploits the GA-suggested actions and likely request arrivals to generate only relevant branches in the tree. This concentrates the search around those actions, taming the inherently combinatorial nature of this planning problem. To evaluate the proposed approach, a full-size implementation of the aforementioned structure was built for different problem sizes and compared against a non-GA-guided THTS algorithm in order to assess its execution time and expected-value performance. The results suggest this simple yet effective approach is a promising avenue for quickly reaching sub-optimal but reasonable-cost solutions.
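    As a rough illustration of the GA-as-meta-heuristic idea, the sketch below evolves drone-to-request assignments and returns the cheapest few as candidate actions, which is the role the abstract assigns to the GA inside THTS. The cost table, operators, and parameters are invented for illustration and are not the paper's implementation:

```python
import random

def assignment_cost(assignment, dist):
    """Total travel cost of assigning drone i to request assignment[i]."""
    return sum(dist[d][r] for d, r in enumerate(assignment))

def ga_suggest_actions(dist, n_drones, n_requests, pop=20, gens=30, top_k=3, seed=0):
    """Hypothetical GA step inside THTS: evolve drone->request assignments
    and return the top_k cheapest as candidate actions, so the tree only
    branches on promising allocations. (This toy allows several drones to
    serve the same request.)"""
    rng = random.Random(seed)
    popn = [[rng.randrange(n_requests) for _ in range(n_drones)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda ind: assignment_cost(ind, dist))
        parents = popn[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_drones)
            child = a[:cut] + b[cut:]            # one-point crossover
            if rng.random() < 0.2:               # mutation: reassign one drone
                child[rng.randrange(n_drones)] = rng.randrange(n_requests)
            children.append(child)
        popn = parents + children
    popn.sort(key=lambda ind: assignment_cost(ind, dist))
    return popn[:top_k]

dist = [[2, 9, 4], [7, 1, 8], [3, 6, 5]]         # drone x request travel cost
print(ga_suggest_actions(dist, n_drones=3, n_requests=3))
```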

    A Multi-Objective Approach to Tactical Maneuvering Within Real Time Strategy Games

    Get PDF
    The real-time strategy (RTS) environment is a strong platform for simulating complex tactical problems. The overall research goal is to develop artificial intelligence (AI) RTS planning agents for military critical-decision-making education. These agents should have the ability to perform at an expert level as well as to assess a player's critical decision-making ability or skill level. The time sensitivity of the RTS environment creates very complex situations: each situation must be analyzed and orders given to each tactical unit before the scenario on the battlefield changes and renders the decisions irrelevant. This particular research effort in RTS AI development focuses on constructing a unique approach for tactical unit positioning within an RTS environment. By utilizing multi-objective evolutionary algorithms (MOEAs) to find an "optimal" positioning solution, an AI agent can quickly determine an effective unit-positioning solution. The development of such an RTS AI agent goes through three distinct phases. The first is mathematically describing the problem space of tactical unit positioning within a combat scenario; such a definition allows for the development of a generic MOEA search algorithm applicable to nearly every scenario. The next major phase requires the development and integration of this algorithm into the Air Force Institute of Technology RTS AI agent. Finally, the last phase involves experimenting with the positioning agent in order to determine its effectiveness and efficiency when placed against various other tactical options. Experimental results validate that controlling the position of units within a tactical situation is an effective alternative for an RTS AI agent seeking to win a battle.
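    The core filter inside any MOEA is Pareto dominance: a candidate survives only if no other candidate is at least as good on every objective and strictly better on at least one. A minimal sketch with two invented objectives for unit positions (not the thesis's actual objective functions, which the abstract does not state):

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only non-dominated candidates: the selection core of an MOEA."""
    return [s for s in solutions
            if not any(dominates(t["obj"], s["obj"]) for t in solutions if t is not s)]

rng = random.Random(1)
# Hypothetical positioning candidates scored on two objectives to minimize:
# exposure to enemy fire and distance from the mission objective.
cands = [{"pos": (rng.randrange(10), rng.randrange(10)),
          "obj": (rng.random(), rng.random())} for _ in range(8)]
for s in pareto_front(cands):
    print(s["pos"], s["obj"])
```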

    Evaluating the Effects on Monte Carlo Tree Search of Predicting Co-operative Agent Behaviour

    Get PDF
    This thesis explores the effects of including an agent-modelling strategy in Monte-Carlo Tree Search (MCTS), and how such modelling might be used to increase the performance of agents in co-operative environments such as games. The research is conducted using two applications. The first is a co-operative two-player puzzle game, in which a perfect model outperforms an agent that assumes the other agent plays randomly. The second is the partially observable co-operative card game Hanabi, in which the predictor variant outperforms both a standard variant of MCTS and a version that assumes a fixed strategy for the paired agents. This thesis also investigates a technique for learning player strategies offline from saved game logs for use in modelling.
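    One plausible way to realize such a predictor is to replace the uniform-random assumption about the partner with a frequency model fitted to saved game logs, and sample partner moves from it during rollouts. A minimal sketch (the context encoding, action names, and class names are assumptions, not the thesis's implementation):

```python
import random
from collections import Counter, defaultdict

class PartnerModel:
    """Hypothetical partner model learned from game logs: for each observed
    context, remember how often the partner chose each action."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def fit(self, logs):
        for context, action in logs:
            self.counts[context][action] += 1

    def sample(self, context, actions, rng):
        c = self.counts.get(context)
        if not c:                                # unseen context: fall back to uniform
            return rng.choice(actions)
        acts, weights = zip(*c.items())
        return rng.choices(acts, weights=weights)[0]

# During an MCTS rollout, partner moves are drawn from the fitted model
# instead of being assumed uniformly random.
rng = random.Random(0)
model = PartnerModel()
model.fit([("low-hint", "discard"), ("low-hint", "discard"), ("low-hint", "hint")])
print(model.sample("low-hint", ["discard", "hint", "play"], rng))
```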