6,891 research outputs found

    microPhantom: Playing microRTS under uncertainty and chaos

    This competition paper presents microPhantom, a bot playing microRTS that participated in the 2020 microRTS AI competition. microPhantom is based on our previous bot POAdaptive, which won the partially observable track of the 2018 and 2019 microRTS AI competitions. In this paper, we focus on decision-making under uncertainty by tackling the Unit Production Problem with a method combining Constraint Programming and decision theory. We show that using our method to decide which units to train significantly improves the win rate against the second-best microRTS bot from the partially observable track. We also show that our method is resilient in chaotic environments, with only a very small loss of efficiency. To allow replicability and to facilitate further research, the source code of microPhantom is available, as is the Constraint Programming toolkit it uses.
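The general idea of deciding which units to train under partial observability can be sketched as expected-utility maximization over sampled enemy armies. The sketch below is illustrative only, assuming made-up unit types, costs, and a counter matrix; it is not microPhantom's actual Constraint Programming model, just the same decision-theoretic flavor in miniature.

```python
import itertools

# Hypothetical unit costs and matchup matrix (NOT from microPhantom):
# COUNTER[a][b] = how well one unit of type a fares against one of type b.
UNIT_COST = {"light": 2, "ranged": 2, "heavy": 3}
COUNTER = {
    "light":  {"light": 0.0,  "ranged": 1.0,  "heavy": -1.0},
    "ranged": {"light": -1.0, "ranged": 0.0,  "heavy": 1.0},
    "heavy":  {"light": 1.0,  "ranged": -1.0, "heavy": 0.0},
}

def utility(own, enemy):
    """Pairwise matchup score of our composition against an enemy one."""
    return sum(own[a] * enemy[b] * COUNTER[a][b] for a in own for b in enemy)

def best_production(budget, enemy_samples):
    """Enumerate feasible production plans (the resource constraint) and
    pick the plan with the highest average utility over sampled,
    unobserved enemy armies (the decision-theoretic part)."""
    best, best_score = None, float("-inf")
    types = ["light", "ranged", "heavy"]
    for counts in itertools.product(range(budget + 1), repeat=3):
        own = dict(zip(types, counts))
        if sum(UNIT_COST[t] * n for t, n in own.items()) > budget:
            continue  # infeasible under the resource budget
        score = sum(utility(own, e) for e in enemy_samples) / len(enemy_samples)
        if score > best_score:
            best, best_score = own, score
    return best

# Samples of the hidden enemy army lean heavy, so ranged units (which
# counter heavy in this toy matrix) should dominate the chosen plan.
samples = [{"light": 0, "ranged": 0, "heavy": 3}] * 3 \
        + [{"light": 1, "ranged": 0, "heavy": 2}] * 2
plan = best_production(budget=6, enemy_samples=samples)
```

A real solver would replace the brute-force enumeration with a constraint model, but the objective, maximizing expected utility over belief samples, is the same shape.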

    Tactical AI in Real Time Strategy Games

    The real-time strategy (RTS) tactical decision-making problem is difficult, made more complex by its high degree of time sensitivity. This research effort presents a novel approach to this problem within an educational, teaching-oriented objective. The particular decision focus is target selection for an artificial intelligence (AI) RTS game model. The use of multi-objective evolutionary algorithms (MOEAs) in this tactical decision-making problem allows an AI agent to produce fast, effective solutions that do not require modification to the current environment. This approach allows for the creation of a generic solution-building tool that performs well against scripted opponents without requiring expert training or deep tree searches. The experimental results validate that MOEAs can control an on-line agent capable of outperforming a variety of AI RTS opponent test scripts.
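The core of multi-objective target selection is keeping candidates that are not dominated on any objective. The toy sketch below, with invented targets and two assumed objectives (maximize expected damage, minimize time-to-kill), shows the Pareto-filtering step that an MOEA's selection phase relies on; it is not the paper's actual algorithm.

```python
def dominates(a, b):
    """a dominates b if it is no worse on both objectives
    (maximize damage, minimize time) and strictly better on one."""
    no_worse = a["damage"] >= b["damage"] and a["time"] <= b["time"]
    strictly = a["damage"] > b["damage"] or a["time"] < b["time"]
    return no_worse and strictly

def pareto_front(targets):
    """Keep only non-dominated candidate targets."""
    return [t for t in targets
            if not any(dominates(o, t) for o in targets if o is not t)]

# Invented candidate targets with (damage, time) trade-offs:
targets = [
    {"name": "worker", "damage": 5, "time": 1},
    {"name": "ranged", "damage": 8, "time": 2},
    {"name": "heavy",  "damage": 8, "time": 5},  # dominated by 'ranged'
]
front = pareto_front(targets)
```

An MOEA would evolve a population of such trade-off solutions over generations; the non-dominance test above is what distinguishes it from a single weighted-sum score.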

    Air-Combat Strategy Using Approximate Dynamic Programming

    Unmanned Aircraft Systems (UAS) have the potential to perform many of the dangerous missions currently flown by manned aircraft. Yet the complexity of some tasks, such as air combat, has precluded UAS from successfully carrying out these missions autonomously. This paper presents a formulation of a level-flight, fixed-velocity, one-on-one air combat maneuvering problem and an approximate dynamic programming (ADP) approach for computing an efficient approximation of the optimal policy. In the version of the problem formulation considered, the aircraft learning the optimal policy is given a slight performance advantage. The ADP approach provides a fast response to a rapidly changing tactical situation, long planning horizons, and good performance without explicit coding of air combat tactics. The method's success is due to extensive feature development, reward shaping, and trajectory sampling. An accompanying fast and effective rollout-based policy extraction method is used to accomplish on-line implementation. Simulation results demonstrate the robustness of the method against an opponent beginning from both offensive and defensive situations. Flight results are also presented using micro-UAS flown at MIT's Real-time indoor Autonomous Vehicle test ENvironment (RAVEN). Defense University Research Instrumentation Program (U.S.) (grant number FA9550-07-1-0321); United States Air Force Office of Scientific Research (AFOSR # FA9550-08-1-0086); American Society for Engineering Education (National Defense Science and Engineering Graduate Fellowship).
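Rollout-based policy extraction can be illustrated in miniature: given an approximate value function, each candidate action is simulated and then rolled out greedily for a few steps, and the action whose rollout ends in the best-valued state is selected. Everything below is a toy stand-in (a 1-D state, trivial dynamics, and J(s) = -|s| in place of a trained approximator), not the paper's air-combat model.

```python
def J(state):
    """Stand-in approximate value function: closer to 0 is better.
    In ADP this would be a trained function approximator."""
    return -abs(state)

def step(state, action):
    """Trivial 1-D dynamics: the action shifts the state."""
    return state + action

def rollout_policy(state, actions=(-1, 0, 1), depth=3):
    """One-step lookahead plus greedy rollout: simulate each action,
    roll out `depth` further greedy steps under J, and return the
    action leading to the best end-state value."""
    def greedy_rollout(s, d):
        for _ in range(d):
            s = step(s, max(actions, key=lambda a: J(step(s, a))))
        return J(s)
    return max(actions, key=lambda a: greedy_rollout(step(state, a), depth))
```

The appeal for on-line use is that only a handful of short simulations per decision are needed, so the response stays fast even as the tactical situation changes.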

    Evolving Effective Micro Behaviors for Real-Time Strategy Games

    Real-time strategy games have become a new frontier of artificial intelligence research. Advances in real-time strategy game AI, as with chess and checkers before, will significantly advance the state of the art in AI research. This thesis investigates using heuristic search algorithms to generate effective micro behaviors in combat scenarios for real-time strategy games. Macro and micro management are two key aspects of real-time strategy games. While good macro helps a player collect more resources and build more units, good micro helps a player win skirmishes against equal numbers of opponent units, or win even when outnumbered. In this research, we use influence maps and potential fields as a basis representation to evolve micro behaviors. We first compare genetic algorithms against two types of hill climbers for generating competitive unit micro management. Second, we investigate the use of case-injected genetic algorithms to quickly and reliably generate high-quality micro behaviors. We then compactly encode micro behaviors, including influence maps, potential fields, and reactive control, into fourteen parameters and use genetic algorithms to search for a complete micro bot, ECSLBot. We compare the performance of our ECSLBot with two state-of-the-art bots, UAlbertaBot and Nova, on several skirmish scenarios in the popular real-time strategy game StarCraft. The results show that ECSLBot, tuned by genetic algorithms, outperforms UAlbertaBot and Nova in kiting efficiency, target selection, and fleeing. In addition, the same approach works to create competitive micro behaviors in another game, SeaCraft. Using parallelized genetic algorithms to evolve parameters in SeaCraft, we are able to speed up the evolutionary process from twenty-one hours to nine minutes.
We believe this work provides evidence that genetic algorithms and our representation may be a viable approach to creating effective micro behaviors for winning skirmishes in real-time strategy games.
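The parameter-tuning loop described here, encoding behavior parameters as a genome and evolving them with a genetic algorithm, can be sketched compactly. In the sketch below the two-parameter fitness function is an invented stand-in for running a skirmish; the thesis's actual fitness comes from game simulations, and its encoding holds fourteen parameters rather than two.

```python
import random

random.seed(0)  # deterministic demo run

def fitness(genes):
    """Stand-in fitness: pretend the best potential-field behavior
    occurs at attraction=2.0, repulsion=0.5. A real evaluation would
    run a skirmish and score the outcome."""
    attraction, repulsion = genes
    return -((attraction - 2.0) ** 2 + (repulsion - 0.5) ** 2)

def evolve(pop_size=20, generations=40, sigma=0.1):
    """Elitist GA: keep the top half, fill the rest with one-point
    crossover children plus Gaussian mutation."""
    pop = [[random.uniform(0, 4), random.uniform(0, 4)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitism
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randint(0, 1)           # one-point crossover
            child = a[:cut + 1] + b[cut + 1:]
            child = [g + random.gauss(0, sigma) for g in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because each fitness evaluation is an independent game simulation, this loop parallelizes naturally across the population, which is what enables the twenty-one-hours-to-nine-minutes speedup reported for SeaCraft.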