
    Algorithms for Adaptive Game-playing Agents


    A Multi-Objective Approach to Tactical Maneuvering Within Real Time Strategy Games

    The real-time strategy (RTS) environment is a strong platform for simulating complex tactical problems. The overall research goal is to develop artificial intelligence (AI) RTS planning agents for military critical-decision-making education. These agents should be able to perform at an expert level and to assess a player's critical-decision-making ability or skill level. The time sensitivity of the RTS environment creates very complex situations: each situation must be analyzed and orders must be given to each tactical unit before the scenario on the battlefield changes and renders the decisions irrelevant. This research effort focuses on constructing a unique approach for tactical unit positioning within an RTS environment. By utilizing multi-objective evolutionary algorithms (MOEAs) to search for an "optimal" positioning solution, an AI agent can rapidly determine an effective unit positioning solution. The development of such an RTS AI agent goes through three distinct phases. The first is mathematically describing the problem space of the tactical positioning of units within a combat scenario; such a definition allows for the development of a generic MOEA search algorithm that is applicable to nearly every scenario. The next major phase requires the development and integration of this algorithm into the Air Force Institute of Technology RTS AI agent. Finally, the last phase involves experimenting with the positioning agent in order to determine its effectiveness and efficiency when placed against various other tactical options. Experimental results validate that controlling the position of units within a tactical situation is an effective way for an RTS AI agent to win a battle.
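
    As a rough illustration of the approach, the sketch below runs a simple multi-objective evolutionary search over candidate unit placements. The two objectives (exposure to enemy fire and dispersion of friendly units) and all numeric values are illustrative assumptions, not the thesis's actual combat model.

```python
# Minimal sketch of a multi-objective evolutionary search for unit
# positions, in the spirit of the MOEA approach described above.
# Objectives and constants are illustrative assumptions.
import random
import math

ENEMY = [(8.0, 8.0), (9.0, 6.0)]   # assumed enemy positions
N_UNITS, POP, GENS = 4, 40, 50

def random_solution():
    return [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(N_UNITS)]

def objectives(sol):
    # f1: total proximity to enemies (lower = less exposed to fire)
    f1 = sum(1.0 / (1e-3 + math.dist(u, e)) for u in sol for e in ENEMY)
    # f2: dispersion of friendly units (lower = more mutual support)
    f2 = sum(math.dist(a, b) for i, a in enumerate(sol) for b in sol[i + 1:])
    return (f1, f2)

def dominates(a, b):
    # Pareto dominance for minimisation: no worse on all, better on one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def mutate(sol, sigma=0.5):
    return [(x + random.gauss(0, sigma), y + random.gauss(0, sigma)) for x, y in sol]

pop = [random_solution() for _ in range(POP)]
for _ in range(GENS):
    pop += [mutate(random.choice(pop)) for _ in range(POP)]
    scored = [(objectives(s), s) for s in pop]
    # Keep the non-dominated front, topped up with dominated candidates.
    front = [s for f, s in scored if not any(dominates(g, f) for g, _ in scored)]
    pop = (front + [s for _, s in scored if s not in front])[:POP]

print("Pareto-front positioning options:", len(front))
```

    Keeping the whole non-dominated front rather than a single best answer is what distinguishes the multi-objective formulation: the agent can choose among trade-offs (e.g., safety versus concentration of force) at decision time.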

    Automating Game-design and Game-agent Balancing through Computational Intelligence

    Game design has been a staple of human ingenuity and innovation for as long as games have been around. From sports, such as football, to game mechanics applied to the real world, such as reward schemes in shops, games have impacted the world in surprising ways. The process of developing games can, and should, be aided by automated systems, as machines have proven capable of finding innovative ways of complementing human intuition and inventiveness. When man and machine cooperate, better products are created and the world only stands to benefit. This research seeks to find, test, and assess methods of applying genetic algorithms (GAs) to human-led game-balancing tasks. From tweaking difficulty, to optimising pacing, to directing an intelligent agent's behaviour, all of these can benefit from an evolutionary approach and save a game designer many hours, if not days, of trial-and-error work. Furthermore, to improve the speed of the developed GAs, predictive models have been designed to help the evolutionary process find better solutions faster. While these techniques could be applied to a wider variety of tasks, they have been tested almost exclusively on game-balance problems. The major contributions are in defining the main challenges of game balance from an academic perspective, proposing solutions for better cooperation between the academic and industrial sides of games, and making technical improvements to genetic algorithms applied to these tasks. Results have been positive, with success found in both academic publications and industrial cooperation.
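
    As a minimal sketch of what a GA-driven balance pass might look like, the toy below tunes two hypothetical parameters (unit damage and cooldown) toward a 50% simulated win rate. The function simulate_win_rate is a placeholder standing in for a real playtest simulation or one of the predictive models mentioned above; all names and constants are assumptions for illustration.

```python
# Toy genetic algorithm for a balance task: evolve (damage, cooldown)
# so that a simulated win rate approaches 50%.
import random

def simulate_win_rate(damage, cooldown):
    # Placeholder: an assumed analytic stand-in for playtest results,
    # logistic in the unit's "power" (damage per unit of cooldown).
    power = damage / cooldown
    return 1.0 / (1.0 + 2.0 ** (2.0 - power))

def fitness(genome):
    damage, cooldown = genome
    return -abs(simulate_win_rate(damage, cooldown) - 0.5)  # closer to 50% is better

def mutate(genome, sigma=0.1):
    return tuple(max(0.1, g + random.gauss(0, sigma)) for g in genome)

pop = [(random.uniform(0.5, 4.0), random.uniform(0.5, 4.0)) for _ in range(30)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)   # best genomes first
    elite = pop[:10]
    pop = elite + [mutate(random.choice(elite)) for _ in range(20)]

best = max(pop, key=fitness)
print(f"balanced params: damage={best[0]:.2f}, cooldown={best[1]:.2f}, "
      f"win rate={simulate_win_rate(*best):.3f}")
```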

    Many-agent Reinforcement Learning

    Multi-agent reinforcement learning (RL) addresses the problem of how each agent should behave optimally in a stochastic environment in which multiple agents are learning simultaneously. It is an interdisciplinary domain with a long history that lies at the intersection of psychology, control theory, game theory, reinforcement learning, and deep learning. Following the remarkable success of the AlphaGo series in single-agent RL, 2019 was a booming year that witnessed significant advances in multi-agent RL techniques; impressive breakthroughs have been made in developing AIs that outperform humans on many challenging tasks, especially multi-player video games. Nonetheless, one of the key challenges for multi-agent RL techniques is scalability; it is still non-trivial to design efficient learning algorithms that can solve tasks involving far more than two agents (N ≫ 2), which I term many-agent reinforcement learning (MARL; I use "MARL" to denote multi-agent reinforcement learning with a particular focus on the many-agent case, and "multi-agent RL" otherwise). In this thesis, I contribute to tackling MARL problems from four aspects. Firstly, I offer a self-contained overview of multi-agent RL techniques from a game-theoretical perspective. This overview fills the research gap that most existing work either fails to cover the recent advances since 2010 or does not pay adequate attention to game theory, which I believe is the cornerstone of solving many-agent learning problems. Secondly, I develop a tractable policy evaluation algorithm, α^α-Rank, for many-agent systems. The critical advantage of α^α-Rank is that it can compute the solution concept of α-Rank tractably in multi-player general-sum games with no need to store the entire pay-off matrix. This is in contrast to classic solution concepts such as Nash equilibrium, which is known to be PPAD-hard to compute even in two-player cases. α^α-Rank allows us, for the first time, to practically conduct large-scale multi-agent evaluations. Thirdly, I introduce a scalable policy learning algorithm, mean-field MARL, for many-agent systems. The mean-field MARL method takes advantage of the mean-field approximation from physics, and it is the first provably convergent algorithm that tries to break the curse of dimensionality for MARL tasks. With the proposed algorithm, I report the first results of solving the Ising model and multi-agent battle games through a MARL approach. Fourthly, I investigate the many-agent learning problem in open-ended meta-games (i.e., the game of a game in the policy space). Specifically, I focus on modelling the behavioural diversity in meta-games and on developing algorithms that guarantee to enlarge diversity during training. The proposed metric, based on determinantal point processes, serves as the first mathematically rigorous definition of diversity. Importantly, the diversity-aware learning algorithms beat the existing state-of-the-art game solvers in terms of exploitability by a large margin. On top of the algorithmic developments, I also contribute two real-world applications of MARL techniques: I demonstrate the great potential of applying MARL to study emergent population dynamics in nature, and to model diverse and realistic interactions in autonomous driving. Both applications embody the prospect that MARL techniques could achieve huge impact in the real physical world, beyond video games.
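
    To make the mean-field idea concrete, here is a minimal tabular sketch in which each agent's Q-function conditions on its own action and a discretised mean of its neighbours' actions, collapsing the joint action space to a constant size regardless of N. The state/action sizes, the binning scheme, and the Boltzmann policy are assumptions for illustration, not the thesis's experimental setup (e.g., the Ising model).

```python
# Tabular mean-field Q-learning sketch: Q(s, own action, mean action).
import numpy as np

N_STATES, N_ACTIONS, N_MEAN_BINS = 5, 3, 4
ALPHA, GAMMA, TEMP = 0.1, 0.9, 1.0

# Q[s, own_action, discretised mean action of neighbours]
Q = np.zeros((N_STATES, N_ACTIONS, N_MEAN_BINS))

def mean_bin(neighbour_actions):
    # Discretise the neighbours' mean action into one of N_MEAN_BINS bins.
    m = np.mean(neighbour_actions) / (N_ACTIONS - 1)   # normalised to [0, 1]
    return min(int(m * N_MEAN_BINS), N_MEAN_BINS - 1)

def boltzmann_policy(s, mb):
    prefs = np.exp(Q[s, :, mb] / TEMP)
    return prefs / prefs.sum()

def mf_update(s, a, neighbour_actions, r, s_next, next_neighbour_actions):
    mb, mb_next = mean_bin(neighbour_actions), mean_bin(next_neighbour_actions)
    # Mean-field value of the next state under the Boltzmann policy.
    v_next = boltzmann_policy(s_next, mb_next) @ Q[s_next, :, mb_next]
    Q[s, a, mb] += ALPHA * (r + GAMMA * v_next - Q[s, a, mb])

# Example transition for one agent (all values made up):
mf_update(s=0, a=1, neighbour_actions=[0, 2, 1], r=1.0,
          s_next=2, next_neighbour_actions=[1, 1, 2])
```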

    Scare Tactics

    This document describes the design and development processes of Scare Tactics. The game is discussed in further detail as it relates to several areas, such as market analysis, development process, game design, technical design, and each team member's individual area of background research. The research areas include asymmetrical game design, level design, game engine architecture, real-time graphics, user interface design, networking, and artificial intelligence. As part of the team's market analysis, other games featuring asymmetric gameplay are discussed. The games described in this section serve as inspirations for asymmetric game design; some of them implement mechanics that the team seeks to emulate and expand upon in Scare Tactics. As part of the team's development process, several concepts were prototyped over the course of two months. During that process the team adopted an Agile methodology to assist with scheduling, communication, and resource management. Eventually, the team chose to expand upon the prototype that became the basis of Scare Tactics. Game design and technical design occur concurrently in the development of Scare Tactics. Designers conduct discussions in which themes, settings, and mechanics are conceived and documented. Mechanics are prototyped in Unity and eventually ported to a proprietary engine developed by the team. Throughout the course of development, each team member has had to own an area of design or development, which has led to individual research in several areas, discussed further in this document.

    Generation and Analysis of Content for Physics-Based Video Games

    The development of artificial intelligence (AI) techniques that can assist with the creation and analysis of digital content is a broad and challenging task for researchers. This topic has been most prevalent in the field of game AI research, where games are used as a testbed for solving more complex real-world problems. One of the major issues with prior AI-assisted content creation methods for games has been a lack of direct comparability to real-world environments, particularly those with realistic physical properties to consider. Creating content for such environments typically requires physics-based reasoning, which imposes many additional complications and restrictions that must be considered. Addressing and developing methods that can deal with these physical constraints, even if they are only within simulated game environments, is an important and challenging task for AI techniques that intend to be used in real-world situations. The research presented in this thesis describes several approaches to creating and analysing levels for the physics-based puzzle game Angry Birds, which features a realistic 2D environment. This research was multidisciplinary in nature and covers a wide variety of different AI fields, leading to this thesis being presented as a compilation of published work. The central part of this thesis consists of procedurally generating levels for physics-based games similar to Angry Birds. This predominantly involves creating and placing stable structures made up of many smaller blocks, as well as other level elements. Multiple approaches are presented, including both fully autonomous and human-AI collaborative methodologies. In addition, several analyses of Angry Birds levels were carried out using current state-of-the-art agents. A hyper-agent was developed that uses machine learning to estimate the performance of each agent in a portfolio for an unknown level, allowing it to select the one most likely to succeed. Agent performance on levels that contain deceptive or creative properties was also investigated, allowing determination of the current strengths and weaknesses of different AI techniques. The observed variability in performance across levels for different AI techniques led to the development of an adaptive level generation system, allowing for the dynamic creation of increasingly challenging levels over time based on agent performance analysis. An additional study also investigated the theoretical complexity of Angry Birds levels from a computational perspective. While this research is predominantly applied to video games with physics-based simulated environments, the challenges and problems solved by the proposed methods also have significant real-world potential and applications.
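
    The hyper-agent component can be sketched as a straightforward supervised selection problem: learn a mapping from level features to the portfolio agent most likely to solve the level. The agent names, feature vector, and classifier below are illustrative assumptions; the placeholder training data would in practice come from logged runs of each agent on known levels.

```python
# Hyper-agent sketch: predict the best portfolio agent for a new level.
from sklearn.ensemble import RandomForestClassifier
import numpy as np

AGENTS = ["datalab", "eagle_wings", "naive_agent"]   # hypothetical portfolio

# Training data: per-level features (e.g. block count, pig count,
# material ratios) paired with the index of the best-performing agent.
X_train = np.random.rand(200, 5)                     # placeholder features
y_train = np.random.randint(len(AGENTS), size=200)   # placeholder labels

selector = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

def choose_agent(level_features):
    """Return the portfolio agent predicted most likely to succeed."""
    idx = selector.predict(np.asarray(level_features).reshape(1, -1))[0]
    return AGENTS[idx]

print(choose_agent([0.3, 0.7, 0.1, 0.9, 0.5]))
```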

    Set-to-Sequence Methods in Machine Learning: A Review

    Machine learning on sets towards sequential output is an important and ubiquitous task, with applications ranging from language modelling and meta-learning to multi-agent strategy games and power grid optimization. Combining elements of representation learning and structured prediction, its two primary challenges include obtaining a meaningful, permutation invariant set representation and subsequently utilizing this representation to output a complex target permutation. This paper provides a comprehensive introduction to the field as well as an overview of important machine learning methods tackling both of these key challenges, with a detailed qualitative comparison of selected model architectures.
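
    The two challenges map onto a simple two-stage pattern, sketched below as an untrained forward pass: a Deep-Sets-style permutation-invariant encoder (sum-pooled element embeddings) feeding a pointer-style decoder that greedily emits a permutation of the input set. Dimensions and random weights are purely illustrative assumptions, not any specific model from the review.

```python
# Set-to-sequence forward-pass sketch: invariant encoder + pointer decoder.
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_HID = 4, 16
W_embed = rng.normal(size=(D_IN, D_HID))     # per-element embedding
W_query = rng.normal(size=(D_HID, D_HID))    # maps set context to a query

def encode_set(X):
    """Permutation-invariant encoding: sum-pool element embeddings."""
    H = np.tanh(X @ W_embed)                 # (n, D_HID) element embeddings
    return H, H.sum(axis=0)                  # summing makes order irrelevant

def pointer_decode(X):
    """Greedily emit a permutation by repeatedly 'pointing' at elements."""
    H, context = encode_set(X)
    remaining, order = list(range(len(X))), []
    while remaining:
        q = np.tanh(context @ W_query)
        scores = H[remaining] @ q            # attention over unpicked elements
        pick = remaining[int(np.argmax(scores))]
        order.append(pick)
        remaining.remove(pick)
        context = context - H[pick]          # shrink context as elements leave
    return order

X = rng.normal(size=(5, D_IN))               # a set of 5 elements
print(pointer_decode(X))                     # a permutation, e.g. [2, 0, 4, 1, 3]
```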