
    SwarmBrain: Embodied agent for real-time strategy game StarCraft II via large language models

    Large language models (LLMs) have recently achieved significant success in various exploratory tasks, even surpassing the performance of the traditional reinforcement-learning-based methods that have historically dominated the agent-based field. This paper investigates the efficacy of LLMs in executing real-time strategy tasks within the StarCraft II gaming environment. We introduce SwarmBrain, an embodied agent leveraging an LLM for real-time strategy implementation in the StarCraft II game environment. SwarmBrain comprises two key components: 1) an Overmind Intelligence Matrix, powered by state-of-the-art LLMs, designed to orchestrate macro-level strategies from a high-level perspective. This matrix emulates the overarching consciousness of the Zerg hive, synthesizing strategic foresight to allocate resources, direct expansion, and coordinate multi-pronged assaults. 2) a Swarm ReflexNet, the agile counterpart to the calculated deliberation of the Overmind Intelligence Matrix. Because of the inherent latency of LLM reasoning, the Swarm ReflexNet employs a condition-response state machine framework, enabling expedited tactical responses for fundamental Zerg unit maneuvers. In the experimental setup, SwarmBrain controls the Zerg race in confrontation with a computer-controlled Terran adversary. Experimental results show the capacity of SwarmBrain to conduct economic augmentation, territorial expansion, and tactical formulation, and they show that SwarmBrain is capable of achieving victory against computer players set at different difficulty levels.
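
    The abstract describes the Swarm ReflexNet only at a high level, so the following is a minimal sketch of a condition-response state machine of the kind it names: fast per-tick unit reactions that do not wait on the slower LLM planner. All names here (UnitState, ReflexNet, the retreat threshold) are illustrative assumptions, not identifiers from the paper.

        from enum import Enum, auto

        class UnitState(Enum):
            IDLE = auto()
            ATTACKING = auto()
            RETREATING = auto()

        class ReflexNet:
            """Condition-response rules evaluated every game tick,
            acting without waiting for the (slower) LLM planner."""

            def __init__(self, retreat_health=0.3):
                self.retreat_health = retreat_health  # hypothetical threshold

            def step(self, unit):
                if unit["health"] / unit["max_health"] < self.retreat_health:
                    return UnitState.RETREATING   # self-preservation first
                if unit["enemy_in_range"]:
                    return UnitState.ATTACKING    # engage visible threats
                return UnitState.IDLE             # await macro-level orders

        # Example: a wounded Zergling retreats even with an enemy in range.
        reflex = ReflexNet()
        zergling = {"health": 8, "max_health": 35, "enemy_in_range": True}
        print(reflex.step(zergling))  # UnitState.RETREATING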

    EVALUATING ARTIFICIAL INTELLIGENCE METHODS FOR USE IN KILL CHAIN FUNCTIONS

    Current naval operations require sailors to make time-critical, high-stakes decisions based on uncertain situational knowledge in dynamic operational environments. Recent tragic events have resulted in unnecessary casualties; they illustrate the decision complexity involved in naval operations and specifically highlight challenges within the OODA loop (Observe, Orient, Decide, Act). Kill chain decisions involving the use of weapon systems are a particularly stressing category within the OODA loop, with unexpected threats that are difficult to identify with certainty, shortened decision reaction times, and lethal consequences. An effective kill chain requires the proper setup and employment of shipboard sensors; the identification and classification of unknown contacts; the analysis of contact intentions based on kinematics and intelligence; an awareness of the environment; and decision analysis and resource selection. This project explored the use of automation and artificial intelligence (AI) to improve naval kill chain decisions. The team studied naval kill chain functions and developed specific evaluation criteria for each function to determine the efficacy of specific AI methods. The team then identified and studied AI methods and applied the evaluation criteria to map specific AI methods to specific kill chain functions. Approved for public release. Distribution is unlimited.

    Online Build-Order Optimization for Real-Time Strategy Agents Using Multi-Objective Evolutionary Algorithms

    This investigation introduces a novel approach for online build-order optimization in real-time strategy (RTS) games. The goal of our research is to develop an artificial intelligence (AI) RTS planning agent for military critical decision-making education with the ability to perform at an expert human level, as well as to assess a player's critical decision-making ability or skill level. Build-order optimization is modeled as a multi-objective problem (MOP), and solutions are generated using a multi-objective evolutionary algorithm (MOEA) that provides a set of good build-orders to an RTS planning agent. We define three research objectives: (1) design, implement, and validate a capability to determine the skill level of an RTS player; (2) design, implement, and validate a strategic planning tool that produces near-expert-level build-orders, which are ordered sequences of actions a player can issue to achieve a goal; and (3) integrate the strategic planning tool into our existing RTS agent framework and an RTS game engine. The skill-level metric we selected provides an original and needed method of evaluating an RTS player's skill level during game play. This metric is a high-level description of how quickly a player executes a strategy versus known players executing the same strategy. Our strategic planning tool combines a game simulator and an MOEA to produce a set of diverse, good build-orders for an RTS agent. Through the integration of case-based reasoning (CBR), planning goals are derived and expert build-orders are injected into the MOEA population. The MOEA then produces a diverse, approximate Pareto front that is integrated into our AI RTS agent framework. Thus, the planning tool provides an innovative online approach to strategic planning in RTS games. Experimentation with the Spring Engine Balanced Annihilation game reveals that the strategic planner is able to discover build-orders that are better than those of an expert scripted agent and thus achieve faster strategy execution times.
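
    The abstract gives no implementation detail, so the following is a minimal sketch of the multi-objective core of such an approach: candidate build-orders are scored on competing objectives (completion time versus army value) and filtered to a Pareto front. The action costs and toy evaluator are assumptions for illustration, not the thesis's Spring Engine simulator, and the full MOEA loop (selection, crossover, mutation) is omitted.

        import random

        # Hypothetical action table: name -> (build time, army value).
        ACTIONS = {"worker": (10, 0), "barracks": (30, 0), "soldier": (15, 1)}

        def evaluate(build_order):
            """Return (completion_time, -army_value); both are minimized."""
            time, army = 0, 0
            for action in build_order:
                cost, value = ACTIONS[action]
                time += cost
                army += value
            return (time, -army)

        def dominates(a, b):
            """Pareto dominance: a is no worse everywhere, better somewhere."""
            return (all(x <= y for x, y in zip(a, b))
                    and any(x < y for x, y in zip(a, b)))

        def pareto_front(population):
            scored = [(evaluate(p), p) for p in population]
            return [p for s, p in scored
                    if not any(dominates(s2, s) for s2, p2 in scored
                               if p2 is not p)]

        # Example: score a random population and keep the non-dominated set.
        random.seed(0)
        population = [[random.choice(list(ACTIONS)) for _ in range(6)]
                      for _ in range(20)]
        for order in pareto_front(population):
            print(order, evaluate(order))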

    Non-Linear Monte-Carlo Search in Civilization II

    This paper presents a new Monte-Carlo search algorithm for very large sequential decision-making problems. Our approach builds on the recent success of Monte-Carlo tree search algorithms, which estimate the value of states and actions from the mean outcome of random simulations. Instead of using a search tree, we apply non-linear regression, online, to estimate a state-action value function from the outcomes of random simulations. This value function generalizes between related states and actions, and can therefore provide more accurate evaluations after fewer simulations. We apply our Monte-Carlo search algorithm to the game of Civilization II, a challenging multi-agent strategy game with an enormous state space and around 10^21 joint actions. We approximate the value function by a neural network, augmented by linguistic knowledge that is extracted automatically from the official game manual. We show that this non-linear value function is significantly more efficient than a linear value function. Our non-linear Monte-Carlo search wins 80% of games against the handcrafted, built-in AI for Civilization II. This work was supported by the National Science Foundation (CAREER grant IIS-0448168 and grant IIS-0835652), the DARPA Machine Reading Program (FA8750-09-C-0172), and a Microsoft Research New Faculty Fellowship.
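
    As a rough illustration of the paper's central idea, replacing a search tree with online regression over simulation outcomes, here is a minimal sketch on a toy problem: rollouts feed a small non-linear (one-hidden-layer) value function trained by SGD, and the action with the highest estimated value is chosen. The toy rollout model, features, and network are assumptions for illustration; the paper's actual value function and Civilization II interface are far richer.

        import numpy as np

        rng = np.random.default_rng(0)

        def rollout(state, action):
            """Simulate a random playout; return a noisy outcome in [0, 1]."""
            return float(np.clip(0.1 * state + 0.3 * action
                                 + rng.normal(0, 0.2), 0, 1))

        def features(state, action):
            return np.array([1.0, state, action, state * action])

        # Tiny one-hidden-layer value function, trained online by SGD.
        W1 = rng.normal(0, 0.1, (8, 4))
        W2 = rng.normal(0, 0.1, 8)

        def value(x):
            return np.tanh(W1 @ x) @ W2

        def update(x, target, lr=0.05):
            """One SGD step on squared error between value(x) and target."""
            global W1, W2
            h = np.tanh(W1 @ x)
            err = value(x) - target
            grad_W1 = np.outer((W2 * (1 - h**2)) * err, x)
            W2 -= lr * err * h
            W1 -= lr * grad_W1

        # Regress on rollout outcomes instead of storing them in a tree.
        state, actions = 0.5, [0, 1, 2]
        for _ in range(500):
            a = rng.choice(actions)
            update(features(state, a), rollout(state, a))

        best = max(actions, key=lambda a: value(features(state, a)))
        print("chosen action:", best)  # expected: 2 (highest mean outcome)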

    Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation

    This paper surveys the current state of the art in Natural Language Generation (NLG), defined as the task of generating text or speech from non-linguistic input. A survey of NLG is timely in view of the changes that the field has undergone over the past decade or so, especially in relation to new (usually data-driven) methods, as well as new applications of NLG technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in NLG and the architectures in which such tasks are organised; (b) highlight a number of relatively recent research topics that have arisen partly as a result of growing synergies between NLG and other areas of artificial intelligence; and (c) draw attention to the challenges in NLG evaluation, relating them to similar challenges faced in other areas of Natural Language Processing, with an emphasis on different evaluation methods and the relationships between them. Published in the Journal of AI Research (JAIR), volume 61, pp. 75-170. 118 pages, 8 figures, 1 table.

    Developing an Effective and Efficient Real Time Strategy Agent for Use as a Computer Generated Force

    Computer Generated Forces (CGF) are used to represent units or individuals in military training and constructive simulation. The use of CGF significantly reduces the time and money required for effective training. For CGF to be effective, they must behave as a human would in the same environment. Real Time Strategy (RTS) games place players in control of a large force whose goal is to defeat the opponent. The military setting of RTS games makes them an excellent platform for the development and testing of CGF. While there has been significant research in RTS agent development, most of the developed agents are only able to exhibit good tactical behavior and lack the ability to develop and execute overall strategies. By analyzing prior games played by an opposing agent, an RTS agent can determine the opponent's strengths and weaknesses and develop a strategy which neutralizes the strengths and capitalizes on the weaknesses. It can then execute this strategy in an RTS game. This research develops such an RTS agent, called the Killer Bee Artificial Intelligence (KBAI). KBAI builds a classifier for an opposing RTS agent which allows it to predict game outcomes. It then uses this classifier to generate an effective counter-strategy and executes the tactics required for that strategy. KBAI is both effective and efficient against four high-quality scripted agents: it wins 100% of the time, and it wins quickly. Compared to the game's native artificial intelligence, KBAI has superior performance. It exhibits strategic behavior, as well as the tactics required to execute a developed strategy.
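
    The abstract describes KBAI only at a high level; the sketch below illustrates the general predict-then-counter pattern it names: fit a classifier on an opponent's past games, then choose the strategy with the highest predicted win probability. The strategies, features, and toy game records are invented for illustration and are not KBAI's actual model.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hypothetical one-hot encoding of our candidate strategies.
        STRATEGIES = {"rush": [1, 0, 0], "turtle": [0, 1, 0], "boom": [0, 0, 1]}

        # Toy history vs. one opponent: (strategy we played, did we win?).
        X = np.array([STRATEGIES[s] for s in
                      ["rush", "rush", "turtle", "boom", "boom", "turtle"]])
        y = np.array([0, 0, 1, 1, 1, 0])  # this opponent crushes rushes

        clf = LogisticRegression().fit(X, y)

        def counter_strategy():
            """Pick the strategy the classifier expects to win most often."""
            probs = {s: clf.predict_proba([v])[0, 1]
                     for s, v in STRATEGIES.items()}
            return max(probs, key=probs.get)

        print(counter_strategy())  # likely "boom" given the toy history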

    Playing Smart - Another Look at Artificial Intelligence in Computer Games
