
    Multi-Agent Pursuit-Evasion Game Based on Organizational Architecture

    Multi-agent coordination mechanisms are frequently used in pursuit-evasion games to enable coalitions of pursuers and to unify their individual skills against the complex tasks encountered. In this paper, we propose a coalition formation algorithm based on organizational principles and applied to the pursuit-evasion problem. To allow alliances of pursuers in different pursuit groups, we use the concepts of an organizational modeling framework known as YAMAM (Yet Another Multi Agent Model). Specifically, we use the Agent, Role, Task, and Skill concepts proposed in this model to develop a coalition formation algorithm that allows optimal task sharing. To control the pursuers' path planning in the environment, as well as their internal development during the pursuit, we use a reinforcement learning method (Q-learning). Computer simulations reflect the impact of the proposed techniques.
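    The abstract names tabular Q-learning as the pursuers' path-planning controller but gives no implementation details; a minimal sketch of the standard update it refers to, with the action set, rewards, and hyperparameters as illustrative assumptions rather than values from the paper, is shown below.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch for a pursuer's path planning.
# The action set, rewards, and hyperparameters below are illustrative
# assumptions, not values taken from the paper.
ACTIONS = ["up", "down", "left", "right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = defaultdict(float)  # keyed by (state, action) pairs

def choose_action(state):
    """Epsilon-greedy selection over the pursuer's current Q-values."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Q-learning backup: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```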

    Application of Fuzzy State Aggregation and Policy Hill Climbing to Multi-Agent Systems in Stochastic Environments

    Reinforcement learning is one of the more attractive machine learning technologies, due to its unsupervised learning structure and its ability to continue learning even as the operating environment changes. Applying this learning to multiple cooperative software agents (a multi-agent system) not only allows each individual agent to learn from its own experience, but also opens up the opportunity for the individual agents to learn from the other agents in the system, thus accelerating the rate of learning. This research presents the novel use of fuzzy state aggregation, as the means of function approximation, combined with the policy hill climbing methods of Win or Learn Fast (WoLF) and policy-dynamics-based WoLF (PD-WoLF). The combination of fast policy hill climbing (PHC) and fuzzy state aggregation (FSA) function approximation is tested in two stochastic environments: Tileworld and the robot soccer domain, RoboCup. The Tileworld results demonstrate that a single agent using the combination of FSA and PHC learns quicker and performs better than combined fuzzy state aggregation and Q-learning alone. Results from the RoboCup domain again illustrate that the policy hill climbing algorithms perform better than Q-learning alone in a multi-agent environment. The learning is further enhanced by allowing the agents to share their experience through a weighted strategy sharing.
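    The core WoLF idea referenced above (learn cautiously when winning, quickly when losing) can be sketched as the policy update below; the state and action counts and the two step sizes are illustrative assumptions, and the fuzzy state aggregation that the research pairs with it is omitted for brevity.

```python
import numpy as np

# Sketch of the WoLF ("Win or Learn Fast") policy hill climbing update:
# use a small step size when the current policy beats the average policy,
# and a larger one when it does not. Sizes and step sizes are assumptions.
N_STATES, N_ACTIONS = 100, 4
DELTA_WIN, DELTA_LOSE = 0.01, 0.04

Q = np.zeros((N_STATES, N_ACTIONS))
policy = np.full((N_STATES, N_ACTIONS), 1.0 / N_ACTIONS)
avg_policy = np.full((N_STATES, N_ACTIONS), 1.0 / N_ACTIONS)
visits = np.zeros(N_STATES)

def wolf_phc_update(s):
    """Move policy[s] toward the greedy action w.r.t. Q[s], stepping by
    DELTA_WIN when 'winning' (current policy outperforms the average policy)
    and by DELTA_LOSE otherwise."""
    visits[s] += 1
    avg_policy[s] += (policy[s] - avg_policy[s]) / visits[s]
    winning = policy[s] @ Q[s] > avg_policy[s] @ Q[s]
    delta = DELTA_WIN if winning else DELTA_LOSE
    greedy = int(np.argmax(Q[s]))
    for a in range(N_ACTIONS):
        step = delta if a == greedy else -delta / (N_ACTIONS - 1)
        policy[s, a] = np.clip(policy[s, a] + step, 0.0, 1.0)
    policy[s] /= policy[s].sum()  # renormalise to keep a valid distribution
```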

    Single- and multiobjective reinforcement learning in dynamic adversarial games

    This thesis uses reinforcement learning (RL) to address dynamic adversarial games in the context of air combat manoeuvring simulation. A sequential decision problem commonly encountered in the field of operations research, air combat manoeuvring simulation has conventionally relied on agent programming methods that require significant domain knowledge to be manually encoded into the simulation environment. These methods are appropriate for determining the effectiveness of existing tactics in different simulated scenarios. However, in order to maximise the advantages provided by new technologies (such as autonomous aircraft), new tactics will need to be discovered. A proven technique for solving sequential decision problems, RL has the potential to discover these new tactics. This thesis explores four RL approaches (tabular, deep, discrete-to-deep, and multiobjective) as mechanisms for discovering new behaviours in simulations of air combat manoeuvring. It implements and tests several methods for each approach and compares those methods in terms of learning time, baseline and comparative performance, and implementation complexity. In addition to evaluating the utility of existing approaches to the specific task of air combat manoeuvring, this thesis proposes and investigates two novel methods, discrete-to-deep supervised policy learning (D2D-SPL) and discrete-to-deep supervised Q-value learning (D2D-SQL), which can be applied more generally. D2D-SPL and D2D-SQL offer the generalisability of deep RL at a cost closer to the tabular approach.
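    The abstract describes D2D-SPL only at a high level; the sketch below illustrates the general discrete-to-deep idea of harvesting greedy actions from a trained tabular agent and fitting a network to them with supervised learning. The feature encoding, network size, and use of scikit-learn are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Sketch of a discrete-to-deep distillation step: take (state, greedy action)
# pairs from a learned tabular value function and fit a small network to them,
# so the behaviour generalises beyond the visited discrete states.
def distil_tabular_policy(q_table, encode_state):
    """q_table: dict mapping a discrete state to an array of action values.
    encode_state: maps a discrete state to a continuous feature vector."""
    states = list(q_table.keys())
    X = np.array([encode_state(s) for s in states])
    y = np.array([int(np.argmax(q_table[s])) for s in states])  # greedy actions as labels
    net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000)
    net.fit(X, y)
    return net  # net.predict(encode_state(s).reshape(1, -1)) gives the distilled action
```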

    Monte Carlo Tree Search Applied to a Modified Pursuit/Evasion Scotland Yard Game with Rendezvous Spaceflight Operation Applications

    This thesis takes the Scotland Yard board game and modifies its rules to mimic important aspects of space in order to facilitate the creation of artificial intelligence for space asset pursuit/evasion scenarios. Space has become a physical warfighting domain. To combat threats, an understanding of the tactics, techniques, and procedures must be captured and studied. Games and simulations are effective tools to capture data lacking historical context. Artificial intelligence and machine learning models can use simulations to develop proper defensive and offensive tactics, techniques, and procedures capable of protecting systems against potential threats. Monte Carlo Tree Search is a bandit-based reinforcement learning model known for using limited domain knowledge to push favorable results. Monte Carlo agents have been used in a multitude of imperfect-information games; one such game in which Monte Carlo agents have been produced and studied for pursuit-evasion tactics is Scotland Yard. This thesis builds on the Monte Carlo agents previously produced by Mark Winands and Pim Nijssen and applied to Scotland Yard. In the research presented here, the rules of Scotland Yard are analyzed and presented in an expansion that partially accounts for spaceflight dynamics, in order to study the agents within a simplified model while having some foundation for use within space environments. Results show promise for the use of Monte Carlo agents in pursuit/evasion autonomous space scenarios while also illuminating some major challenges for future work in more realistic three-dimensional space environments.
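    Monte Carlo Tree Search as described above rests on bandit-style node selection; a minimal UCT-style selection and backup sketch follows, with the exploration constant and node fields as illustrative assumptions rather than details from the thesis.

```python
import math

# Minimal UCT-style selection and backpropagation, the bandit core of Monte
# Carlo Tree Search; the expansion and rollout steps are omitted for brevity.
C = 1.4  # exploration constant (an assumed, commonly used value)

class Node:
    def __init__(self, parent=None):
        self.parent = parent
        self.children = {}   # action -> Node
        self.visits = 0
        self.value = 0.0     # sum of simulation rewards seen below this node

    def ucb_score(self, child):
        """UCB1: average reward plus an exploration bonus for rarely tried children."""
        if child.visits == 0:
            return float("inf")
        exploit = child.value / child.visits
        explore = C * math.sqrt(math.log(self.visits) / child.visits)
        return exploit + explore

    def select_child(self):
        """Pick the child maximising the UCB1 score."""
        return max(self.children.values(), key=self.ucb_score)

def backpropagate(node, reward):
    """Propagate one simulation result from a leaf back up to the root."""
    while node is not None:
        node.visits += 1
        node.value += reward
        node = node.parent
```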

    Learning-based run-time power and energy management of multi/many-core systems: current and future trends

    Multi/many-core systems are prevalent in several application domains targeting different scales of computing, such as embedded and cloud computing. These systems are able to fulfil ever-increasing performance requirements by exploiting their parallel processing capabilities. However, effective power/energy management is required during system operation for several reasons, such as increasing the operational time of battery-operated systems, reducing the energy cost of datacenters, and improving thermal efficiency and reliability. This article provides an extensive survey of learning-based run-time power/energy management approaches. The survey includes a taxonomy of these approaches, which perform design-time and/or run-time power/energy management by employing learning principles such as reinforcement learning. The survey also highlights the trends followed by learning-based run-time power management approaches, their upcoming directions, and open research challenges.

    The Influence of Collective Working Memory Strategies on Agent Teams

    Past self-organizing models of collectively moving "particles" (simulated bird flocks, fish schools, etc.) have typically been based on purely reflexive agents that have no significant memory of past movements or environmental obstacles. These agent collectives usually operate in abstract environments, but as these domains take on greater realism, the collective requires behaviors that use not only presently observed stimuli but also remembered information. It is hypothesized that the addition of a limited working memory of the environment, distributed among the collective's individuals, can improve efficiency in performing tasks. This is first approached in a more traditional particle system in an abstract environment. It is then explored for a single agent, and finally a team of agents, operating in a simulated three-dimensional environment of greater realism. In the abstract environment, a limited distributed working memory produced a significant improvement in travel between locations, in some cases improving performance over time, while in others surprisingly achieving an immediate benefit from the influence of memory. When strategies for accumulating and manipulating memory were subsequently explored for a more realistic single agent in the three-dimensional environment, the agent's performance improved on different tasks when it kept a local or a cumulative working memory, both when navigating nearby obstacles and, in the case of cumulative memory, when covering previously traversed terrain. When investigating a team of these agents engaged in a pursuit scenario, it was determined that a communicating and coordinating team still benefited from a working memory of the environment distributed among the agents, even with limited memory capacity. This demonstrates that a limited distributed working memory in a multi-agent system improves performance on tasks in domains of increasing complexity. This is true even though individual agents know only a fraction of the collective's entire memory, using this partial memory and interactions with others in the team to perform tasks. These results may prove useful in improving existing methodologies for control of collective movements for robotic teams, computer graphics, particle swarm optimization, and computer games, and in interpreting future experimental research on group movements in biological populations.
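    As a rough illustration of the limited, distributed working memory discussed above, the sketch below gives each agent a small bounded memory of observed obstacles that it can merge with teammates' memories; the capacity and sharing rule are assumptions for illustration, not the dissertation's model.

```python
from collections import deque

# Sketch of a bounded per-agent working memory that teammates can share.
# Capacity and the first-in/first-out eviction rule are illustrative assumptions.
class MemoryAgent:
    def __init__(self, capacity=20):
        self.memory = deque(maxlen=capacity)  # oldest entries are evicted first

    def remember(self, obstacle_position):
        if obstacle_position not in self.memory:
            self.memory.append(obstacle_position)

    def share_with(self, teammate):
        """Pass remembered obstacles to a teammate, who keeps only what fits."""
        for entry in list(self.memory):
            teammate.remember(entry)

    def knows(self, obstacle_position):
        return obstacle_position in self.memory
```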