
    A simheuristic approach for evolving agent behaviour in the exploration for novel combat tactics

    The automatic generation of behavioural models for intelligent agents in military simulation and experimentation remains a challenge. Genetic Algorithms are a global optimisation approach suited to complex problems where locating the global optimum is difficult. Unlike traditional optimisation techniques such as hill-climbing or derivative-based methods, Genetic Algorithms are robust on highly multi-modal and discontinuous search landscapes. In this paper, we outline a simheuristic GA-based approach for the automatic generation of finite-state-machine-based behavioural models of intelligent agents, where the aim is the identification of novel combat tactics. Rather than evolving states, the proposed approach evolves a sequence of transitions. We also discuss workable starting points for the use of Genetic Algorithms in such scenarios, shedding some light on the associated design and implementation difficulties.
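
    The abstract gives no implementation detail, but the central idea — evolving a sequence of finite-state-machine transitions, with fitness supplied by a stochastic combat simulation — can be sketched roughly as follows. The state names, trigger events, GA parameters, and the stub simulate() fitness function are illustrative assumptions, not taken from the paper.

```python
import random

# Illustrative encoding: a genome is a sequence of transitions, each a tuple
# (from_state, trigger, to_state). States and triggers are placeholders; a
# real combat simulation would replace simulate().
STATES = ["patrol", "advance", "flank", "retreat"]
TRIGGERS = ["enemy_seen", "under_fire", "objective_near", "low_health"]
GENOME_LEN = 8

def random_transition():
    return (random.choice(STATES), random.choice(TRIGGERS), random.choice(STATES))

def random_genome():
    return [random_transition() for _ in range(GENOME_LEN)]

def simulate(genome):
    # Stub fitness: reward genomes that move from "patrol" to "flank".
    # A simheuristic version would average several stochastic simulation runs.
    score = sum(1.0 for (src, _, dst) in genome if src == "patrol" and dst == "flank")
    return score + random.gauss(0, 0.1)  # noise stands in for simulation stochasticity

def crossover(a, b):
    cut = random.randint(1, GENOME_LEN - 1)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.1):
    return [random_transition() if random.random() < rate else t for t in genome]

def evolve(pop_size=30, generations=50):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=simulate, reverse=True)
        parents = scored[: pop_size // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=simulate)

if __name__ == "__main__":
    print(evolve())
```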

    Coevolutionary algorithms for the optimization of strategies for red teaming applications

    Red teaming (RT) is a process that assists an organization in finding vulnerabilities in a system, whereby the organization itself takes on the role of an "attacker" to test the system. It is used in various domains, including military operations. Traditionally it is a manual process with some obvious weaknesses: it is expensive, time-consuming, and limited by humans "thinking inside the box". Automated RT is an approach that has the potential to overcome these weaknesses. In this approach both the red team (enemy forces) and the blue team (friendly forces) are modelled as intelligent agents in a multi-agent system, and the idea is to run many computer simulations, pitting the plan of the red team against the plan of the blue team. This research project investigated techniques that can support automated red teaming by conducting a systematic study involving a genetic algorithm (GA), a basic coevolutionary algorithm and three variants of the coevolutionary algorithm. An initial pilot study involving the GA showed some limitations, as GAs only support the optimization of a single population at a time against a fixed strategy. However, in red teaming it is not sufficient to consider just one, or even a few, opponent strategies since, in reality, each team needs to adjust its strategy to account for the different strategies that competing teams may utilize at different points. Coevolutionary algorithms (CEAs) were identified as algorithms capable of optimizing two teams simultaneously for red teaming. The subsequent investigation of CEAs examined their performance in addressing characteristics of red teaming problems, such as intransitive relationships and multimodality, before employing them to optimize two red teaming scenarios. A number of measures were used to evaluate the performance of the CEAs; in terms of multimodality, this study introduced a novel n-peak problem and a new performance measure based on the Circular Earth Movers' Distance. Results from the investigations involving an intransitive number problem, a multimodal problem and two red teaming scenarios showed that, in terms of the performance measures used, no single algorithm consistently outperforms the others across the four test problems. Applications of the CEAs to the red teaming scenarios showed that all four variants produced interesting evolved strategies at the end of the optimization process, and provided evidence of the potential of CEAs for future application in red teaming. The developed techniques can potentially be used for red teaming in military operations or in analysis for the protection of critical infrastructure. The benefits include the modelling of more realistic interactions between the teams, the ability to anticipate and counteract potentially new types of attack, and a cost-effective solution.
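
    As a rough illustration of the two-population idea described above — each team's candidate strategies are evaluated against a sample drawn from the opposing team's population, so both sides adapt simultaneously — here is a minimal sketch. The strategy encoding, the stub engagement() payoff, and all parameters are assumptions for illustration, not the thesis's actual scenarios.

```python
import random

# Hypothetical strategy encoding: a vector of real-valued behaviour
# parameters (e.g. aggression, dispersion). engagement() is a stub for the
# simulation that pits a red plan against a blue plan (payoff to red).
DIM = 5

def random_strategy():
    return [random.uniform(0, 1) for _ in range(DIM)]

def engagement(red, blue):
    # In practice this would be the outcome of an agent-based simulation run.
    return sum(r - b for r, b in zip(red, blue)) + random.gauss(0, 0.05)

def fitness(candidate, opponents, red_side):
    # Evaluate a candidate against a sample of the opposing population.
    if red_side:
        return sum(engagement(candidate, o) for o in opponents) / len(opponents)
    return sum(-engagement(o, candidate) for o in opponents) / len(opponents)

def step(population, opponents, red_side, mutation=0.1):
    scored = sorted(population, key=lambda s: fitness(s, opponents, red_side), reverse=True)
    parents = scored[: len(population) // 2]
    children = []
    while len(parents) + len(children) < len(population):
        child = list(random.choice(parents))
        for i in range(DIM):
            if random.random() < mutation:
                child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
        children.append(child)
    return parents + children

red_pop = [random_strategy() for _ in range(20)]
blue_pop = [random_strategy() for _ in range(20)]
for generation in range(100):
    red_sample = random.sample(red_pop, 5)
    blue_sample = random.sample(blue_pop, 5)
    red_pop = step(red_pop, blue_sample, red_side=True)
    blue_pop = step(blue_pop, red_sample, red_side=False)
```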

    Analysis of Key Installation Protection using Computerized Red Teaming

    This paper describes the use of genetic algorithms (GAs) for computerized red teaming applications to explore options for military plans in specific scenarios. A tool called Optimized Red Teaming (ORT) is developed, and we illustrate how it may be used to assist the red teaming process in security organizations such as military forces. The technique incorporates a genetic algorithm in conjunction with an agent-based simulation system (ABS) called MANA (Map Aware Non-uniform Automata). Both enemy forces (the red team) and friendly forces (the blue team) are modelled as intelligent agents in a multi-agent system, and many computer simulations of a scenario are run, pitting the red team plan against the blue team plan. The paper contains two major sections. First, we present a description of the ORT tool, including its various components. Second, we present experimental results obtained using ORT on a specific military scenario known as Key Installation Protection, developed at DSO National Laboratories in Singapore. The aim of these experiments is to explore red tactics for penetrating a fixed blue patrolling strategy.
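
    A minimal sketch of this kind of simulation-in-the-loop GA appears below: candidate red plans are scored by averaging several stochastic simulation replications against a fixed blue strategy. The waypoint encoding and the stub run_simulation() are illustrative assumptions; the real tool drives MANA rather than a toy function.

```python
import random
import statistics

# Sketch of an ORT-style loop: the genome encodes red-team waypoints, and the
# fitness of a plan is the average outcome of several stochastic replications
# against a fixed blue patrolling strategy. run_simulation() is a stub.
N_WAYPOINTS = 4
REPLICATIONS = 10

def random_plan():
    return [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(N_WAYPOINTS)]

def run_simulation(red_plan):
    # Toy proxy for avoiding the blue patrol: reward waypoints near the map
    # edge, with noise standing in for stochastic simulation outcomes.
    edge_bonus = sum(min(x, 100 - x, y, 100 - y) < 10 for x, y in red_plan)
    return edge_bonus + random.gauss(0, 0.5)

def fitness(red_plan):
    return statistics.mean(run_simulation(red_plan) for _ in range(REPLICATIONS))

population = [random_plan() for _ in range(30)]
for generation in range(40):
    population.sort(key=fitness, reverse=True)
    elite = population[:10]
    offspring = []
    while len(elite) + len(offspring) < 30:
        parent = random.choice(elite)
        offspring.append([(x + random.gauss(0, 5), y + random.gauss(0, 5)) for x, y in parent])
    population = elite + offspring
print(max(population, key=fitness))
```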

    Multiagent Bidirectionally-Coordinated Nets: Emergence of Human-level Coordination in Learning to Play StarCraft Combat Games

    Many artificial intelligence (AI) applications require multiple intelligent agents to work collaboratively. Efficient learning for inter-agent communication and coordination is an indispensable step towards general AI. In this paper, we take the StarCraft combat game as a case study, where the task is to coordinate multiple agents as a team to defeat their enemies. To maintain a scalable yet effective communication protocol, we introduce a Multiagent Bidirectionally-Coordinated Network (BiCNet ['bIknet]) with a vectorised extension of the actor-critic formulation. We show that BiCNet can handle different types of combat with arbitrary numbers of AI agents on both sides. Our analysis demonstrates that, without any supervision such as human demonstrations or labelled data, BiCNet can learn various advanced coordination strategies that are commonly used by experienced game players. In our experiments, we evaluate our approach against multiple baselines under different scenarios; it shows state-of-the-art performance and has potential value for large-scale real-world applications.
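
    The architectural idea — passing per-agent features through a bidirectional recurrent layer along the agent dimension, so that every agent's action and value estimate are conditioned on its teammates — can be sketched in a few lines of PyTorch. This is a simplified illustration, not the authors' implementation; the layer sizes and the structure around the recurrent core are assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch of the BiCNet idea: a bidirectional GRU runs over the agent
# axis (treated as the sequence axis), acting as a learned communication pass.
class BiCNetSketch(nn.Module):
    def __init__(self, obs_dim=32, hidden_dim=64, action_dim=4):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)
        self.comm = nn.GRU(hidden_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.actor = nn.Linear(2 * hidden_dim, action_dim)  # per-agent policy logits
        self.critic = nn.Linear(2 * hidden_dim, 1)          # per-agent value estimate

    def forward(self, obs):                  # obs: (batch, n_agents, obs_dim)
        h = torch.relu(self.encoder(obs))
        h, _ = self.comm(h)                  # (batch, n_agents, 2 * hidden_dim)
        return self.actor(h), self.critic(h)

# Usage: a team of 5 agents, each with a 32-dimensional observation.
model = BiCNetSketch()
logits, values = model(torch.randn(8, 5, 32))
print(logits.shape, values.shape)            # (8, 5, 4) and (8, 5, 1)
```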

    Developing an Effective and Efficient Real Time Strategy Agent for Use as a Computer Generated Force

    Computer Generated Forces (CGF) are used to represent units or individuals in military training and constructive simulation. The use of CGF significantly reduces the time and money required for effective training. For CGF to be effective, they must behave as a human would in the same environment. Real Time Strategy (RTS) games place players in control of a large force whose goal is to defeat the opponent. The military setting of RTS games makes them an excellent platform for the development and testing of CGF. While there has been significant research in RTS agent development, most of the developed agents exhibit only good tactical behavior and lack the ability to develop and execute overall strategies. By analyzing prior games played by an opposing agent, an RTS agent can determine the opponent's strengths and weaknesses and develop a strategy which neutralizes the strengths and capitalizes on the weaknesses. It can then execute this strategy in an RTS game. This research develops such an RTS agent, called the Killer Bee Artificial Intelligence (KBAI). KBAI builds a classifier for an opposing RTS agent which allows it to predict game outcomes. It then uses this classifier to generate an effective counter-strategy and executes the tactics required by that strategy. KBAI is both effective and efficient against four high-quality scripted agents: it wins 100% of the time, and it wins quickly. Compared to the native artificial intelligence, KBAI has superior performance. It exhibits strategic behavior, as well as the tactics required to execute a developed strategy.
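
    The two-step pattern described — train a classifier on prior games against the opponent to predict outcomes, then search candidate strategies for one the classifier predicts will win — can be illustrated roughly as below. The feature encoding, the toy play_recorded_game() opponent, and the use of a scikit-learn decision tree are illustrative assumptions, not KBAI's actual design.

```python
import random
from sklearn.tree import DecisionTreeClassifier

# Step 1: learn a model of the opponent from prior game records.
def random_strategy():
    # Invented features: (early_aggression, economy_weight, air_unit_ratio) in [0, 1].
    return [random.random() for _ in range(3)]

def play_recorded_game(strategy):
    # Stand-in for a log of prior games against the opposing agent: this toy
    # opponent loses to economy-heavy, low-aggression strategies.
    return int(strategy[1] > 0.6 and strategy[0] < 0.4)

history = [random_strategy() for _ in range(500)]
outcomes = [play_recorded_game(s) for s in history]
classifier = DecisionTreeClassifier(max_depth=4).fit(history, outcomes)

# Step 2: counter-strategy generation — sample candidates and keep the one
# with the highest predicted probability of winning.
candidates = [random_strategy() for _ in range(1000)]
win_prob = classifier.predict_proba(candidates)[:, 1]
best = candidates[max(range(len(candidates)), key=lambda i: win_prob[i])]
print(best)
```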

    Optimization of Airfield Parking and Fuel Asset Dispersal to Maximize Survivability and Mission Capability Level

    While the US focus for the majority of the past two decades has been on combating insurgency and promoting stability in Southwest Asia, strategic focus is beginning to shift toward concerns of conflict with a near-peer state. Such conflict brings with it the risk of ballistic missile attack on air bases. With 26 conflicts worldwide in the past 100 years having included attacks on air bases, new doctrine and modeling capacity are needed to enable the Department of Defense to continue using vulnerable bases during a conflict involving ballistic missiles. Several models have been developed to date for Air Force strategic planning, but they are of limited use at the tactical level or for civil engineers. This thesis presents the development of a novel model capable of identifying base layout characteristics for aprons and fuel depots that maximize dispersal while minimizing the impact on sortie generation times during normal operations. The model is implemented using multi-objective genetic algorithms to identify solutions that provide optimal tradeoffs between the competing objectives, and it is assessed using an application example. These capabilities are expected to assist military engineers in laying out parking plans and fuel depots that maximize resilience with minimal impact to the user, enabling continued sortie generation in a contested region.
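
    The underlying trade-off — spreading assets out for survivability versus keeping them close to the runway for fast sortie generation — lends itself to a Pareto-based search. The sketch below is a toy illustration with invented geometry and objective functions, not the thesis's model; it keeps only the structure of a simple multi-objective evolutionary loop.

```python
import math
import random

# A layout is a set of apron/fuel-depot positions on a 100x100 map.
# Objective 1 maximizes dispersal (minimum pairwise separation); objective 2
# minimizes mean distance to the runway, a crude proxy for sortie impact.
RUNWAY = (50.0, 0.0)
N_SITES = 6

def random_layout():
    return [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(N_SITES)]

def dispersal(layout):
    return min(math.dist(a, b) for i, a in enumerate(layout) for b in layout[i + 1:])

def sortie_impact(layout):
    return sum(math.dist(p, RUNWAY) for p in layout) / len(layout)

def dominates(a, b):
    # a dominates b if it is no worse in both objectives and better in one.
    no_worse = dispersal(a) >= dispersal(b) and sortie_impact(a) <= sortie_impact(b)
    strictly = dispersal(a) > dispersal(b) or sortie_impact(a) < sortie_impact(b)
    return no_worse and strictly

def pareto_front(population):
    return [p for p in population if not any(dominates(q, p) for q in population if q is not p)]

population = [random_layout() for _ in range(40)]
for generation in range(60):
    front = pareto_front(population)
    children = []
    while len(front) + len(children) < 40:
        parent = random.choice(front)
        children.append([(x + random.gauss(0, 3), y + random.gauss(0, 3)) for x, y in parent])
    population = front + children

for layout in pareto_front(population)[:5]:
    print(round(dispersal(layout), 1), round(sortie_impact(layout), 1))
```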

    Optimizing combat capabilities by modeling combat as a complex adaptive system

    Procuring combat systems in the Department of Defense is a balancing act in which many variables, only some under the department's control, shift simultaneously. Technology changes non-linearly, providing new opportunities and new challenges to the existing and potential force. The money available changes year over year to fit the overall US Government budget. Personnel numbers change through political demands rather than cost-effectiveness considerations. The intent is to provide the best mix of equipment to field the best force against an expected enemy while maintaining adequate capability against the unexpected. Confounding this desire is the inability of current simulations to dynamically model changing capabilities, together with the very large universe of potential combinations of equipment and tactics.

    The problem can be characterized as a stochastic, mixed-integer, non-linear optimization problem. This dissertation proposes to combine an agent-based model, developed to test solutions that comprise both equipment capabilities and tactics, with a co-evolutionary genetic algorithm that searches this hyper-dimensional solution space. In the process, the dissertation develops the theoretical underpinning for using agent-based simulations to model combat. It also provides the theoretical basis for improving search effectiveness by co-evolving multiple systems simultaneously, which increases exploitation of good schemata and widens exploration of new schemata. Further, it demonstrates the effectiveness of using agent-based models and co-evolution in this application, confirming the theoretical results.

    An open research issue is the value of increased information in a system. This dissertation uses the combination of an agent-based model with a co-evolutionary genetic algorithm to explore the value added by increasing information in a system. The result was an increased number of fit solutions rather than an increase in the fitness of the best solutions: formerly unfit solutions were improved by increasing the information available, making them competitive with the most fit solutions, whereas already fit solutions were not improved.
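
    A minimal sketch of the co-evolutionary setup with a mixed-integer encoding — integer equipment counts plus a discrete tactic choice per side, scored by a stochastic combat stub averaged over replications — is given below. The force structure, the battle() payoff, and all parameters are invented for illustration and do not come from the dissertation.

```python
import random

# Illustrative mixed-integer chromosome: integer equipment counts plus a
# discrete tactic choice. battle() stands in for the stochastic agent-based
# combat simulation; its payoff structure is invented.
TACTICS = ["mass", "disperse", "ambush"]
BUDGET = 20

def random_force():
    tanks = random.randint(0, BUDGET)
    return {"tanks": tanks, "infantry": BUDGET - tanks, "tactic": random.choice(TACTICS)}

def battle(blue, red):
    # Average several noisy replications, as a co-evolutionary fitness would.
    def one_run():
        power = (blue["tanks"] - red["tanks"]) * 2 + blue["infantry"] - red["infantry"]
        if blue["tactic"] == "ambush" and red["tactic"] == "mass":
            power += 5
        return power + random.gauss(0, 2)
    return sum(one_run() for _ in range(5)) / 5

def mutate(force):
    child = dict(force)
    if random.random() < 0.5:
        child["tanks"] = random.randint(0, BUDGET)
        child["infantry"] = BUDGET - child["tanks"]
    else:
        child["tactic"] = random.choice(TACTICS)
    return child

def evolve(pop, opponents, is_blue):
    def score(force):
        return sum(battle(force, o) if is_blue else -battle(o, force) for o in opponents)
    pop = sorted(pop, key=score, reverse=True)
    survivors = pop[: len(pop) // 2]
    children = [mutate(random.choice(survivors)) for _ in range(len(pop) - len(survivors))]
    return survivors + children

blue_pop = [random_force() for _ in range(16)]
red_pop = [random_force() for _ in range(16)]
for generation in range(50):
    blue_pop = evolve(blue_pop, random.sample(red_pop, 4), is_blue=True)
    red_pop = evolve(red_pop, random.sample(blue_pop, 4), is_blue=False)
print(blue_pop[0], red_pop[0])
```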