Deep imitation learning with memory for RoboCup soccer simulation.
Imitation learning is a field that is rapidly gaining attention due to its relevance to many autonomous agent applications. Providing demonstrations of effective behaviour to teach the agent is useful in real-world challenges such as sparse rewards and dynamic environments. However, most imitation learning approaches do not retain a memory of previous actions and treat the demonstrations as independent and identically distributed samples. This neglects the temporal dependency between low-level actions that are performed in sequence to achieve the desired behaviour. This paper proposes an imitation learning method to learn sequences of actions by utilizing memory in deep neural networks. Long short-term memory networks are utilized to capture the temporal dependencies in a teacher's demonstrations. This way, past states and actions provide context for performing subsequent actions. The network is trained using raw low-level features and directly maps the input to low-level parametrized actions in real time. This minimizes the need for task-specific knowledge to be manually employed in the learning process compared to related approaches. The proposed methods are evaluated on a benchmark soccer simulator and compared to supervised learning and data-aggregation approaches. The results show that utilizing memory while learning significantly improves the performance and generalization of the agent and can provide a stationary policy that can produce robust predictions at any point in the sequence.
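A minimal sketch of the kind of recurrent policy the abstract describes — an LSTM that maps sequences of low-level state features to parametrized actions, trained by behavioural cloning on teacher demonstrations — might look as follows. The dimensions, loss, and training loop are illustrative assumptions, not the paper's implementation:

```python
# Sketch of an LSTM imitation policy: sequences of low-level state features are
# mapped to parametrized actions, so earlier states provide context for the
# current prediction. All dimensions below are illustrative placeholders.
import torch
import torch.nn as nn

class LSTMPolicy(nn.Module):
    def __init__(self, state_dim=64, action_dim=6, hidden_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, action_dim)  # parametrized action output

    def forward(self, states, hidden=None):
        # states: (batch, sequence_length, state_dim)
        out, hidden = self.lstm(states, hidden)
        return self.head(out), hidden  # one action vector per time step

# Behavioural cloning on demonstration sequences (placeholder random data).
policy = LSTMPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

demo_states = torch.randn(8, 50, 64)   # assumed batch of teacher state sequences
demo_actions = torch.randn(8, 50, 6)   # assumed corresponding action sequences
optimizer.zero_grad()
pred_actions, _ = policy(demo_states)
loss = loss_fn(pred_actions, demo_actions)
loss.backward()
optimizer.step()
```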
Deep learning based approaches for imitation learning.
Imitation learning refers to an agent's ability to mimic a desired behaviour by learning from observations. The field is rapidly gaining attention due to recent advances in computational and communication capabilities as well as rising demand for intelligent applications. The goal of imitation learning is to describe the desired behaviour by providing demonstrations rather than instructions. This enables agents to learn complex behaviours with general learning methods that require minimal task-specific information. However, imitation learning faces many challenges. The objective of this thesis is to advance the state of the art in imitation learning by adopting deep learning methods to address two major challenges of learning from demonstrations. The first is representing the demonstrations in a manner that is adequate for learning. We propose novel Convolutional Neural Network (CNN) based methods to automatically extract feature representations from raw visual demonstrations and learn to replicate the demonstrated behaviour. This alleviates the need for task-specific feature extraction and provides a general learning process that is adequate for multiple problems. The second challenge is generalizing a policy over situations not seen in the training demonstrations. This is a common problem because demonstrations typically show the best way to perform a task and do not offer any information about recovering from suboptimal actions. Several methods are investigated to improve the agent's generalization ability based on its initial performance. Our contributions in this area are threefold. Firstly, we propose an active data aggregation method that queries the demonstrator in situations of low confidence. Secondly, we investigate combining learning from demonstrations and reinforcement learning; a deep reward shaping method is proposed that learns a potential reward function from demonstrations. Finally, memory architectures in deep neural networks are investigated to provide context to the agent when taking actions; using recurrent neural networks addresses the dependencies within the state-action sequences taken by the agent. The experiments are conducted in simulated environments on 2D and 3D navigation tasks that are learned from raw visual data, as well as a 2D soccer simulator. The proposed methods are compared to state-of-the-art deep reinforcement learning methods. The results show that deep learning architectures can learn suitable representations from raw visual data and effectively map them to atomic actions. The proposed methods for addressing generalization show improvements over using supervised learning and reinforcement learning alone. The results are thoroughly analysed to identify the benefits of each approach and the situations in which it is most suitable.
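As an illustration of the active data-aggregation idea mentioned above — querying the demonstrator only in situations of low confidence — a rough sketch is given below. The environment, policy, and expert interfaces are assumed placeholders (with a simplified step/reset signature), not the thesis's actual code:

```python
# Rough sketch of confidence-gated data aggregation (DAgger-style): the learner
# rolls out its own policy and only queries the demonstrator when the policy's
# confidence in its chosen action is low. Interfaces below are assumptions.
import numpy as np

def policy_action_and_confidence(policy, state):
    """Return the policy's action and a confidence score (e.g. max softmax)."""
    probs = policy(state)                      # assumed: returns action probabilities
    return int(np.argmax(probs)), float(np.max(probs))

def aggregate_with_active_queries(env, policy, expert, dataset,
                                  threshold=0.7, episodes=10):
    # env is a simplified environment: reset() -> state, step(a) -> (state, done)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            action, confidence = policy_action_and_confidence(policy, state)
            if confidence < threshold:
                # Low confidence: ask the demonstrator and store the correction.
                expert_action = expert(state)
                dataset.append((state, expert_action))
                action = expert_action
            state, done = env.step(action)
    return dataset  # the policy is then retrained on the aggregated dataset
```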
Coordinated Multi-Agent Imitation Learning
We study the problem of imitation learning from demonstrations of multiple coordinating agents. One key challenge in this setting is that learning a good model of coordination can be difficult, since coordination is often implicit in the demonstrations and must be inferred as a latent variable. We propose a joint approach that simultaneously learns a latent coordination model along with the individual policies. In particular, our method integrates unsupervised structure learning with conventional imitation learning. We illustrate the power of our approach on a difficult problem of learning multiple policies for fine-grained behavior modeling in team sports, where different players occupy different roles in the coordinated team strategy. We show that having a coordination model to infer the roles of players yields substantially improved imitation loss compared to conventional baselines.
Comment: International Conference on Machine Learning 201
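The joint learning of a latent coordination (role) model alongside per-role imitation policies can be pictured, very roughly, as alternating between role inference and policy training. The sketch below uses a simple assignment step and placeholder loss/training functions; it illustrates the general idea only and is not the paper's actual algorithm:

```python
# Illustrative sketch: alternate between (i) inferring a latent role assignment
# for each player trajectory and (ii) imitation-training one policy per role.
# loss_fn and train_fn are assumed placeholders, not the paper's method.
import numpy as np
from scipy.optimize import linear_sum_assignment

def infer_roles(per_role_losses):
    """per_role_losses[i, k]: imitation loss of player trajectory i under the
    role-k policy. Returns a one-to-one player-to-role assignment minimising
    the total loss (coordination treated as a latent variable)."""
    players, roles = linear_sum_assignment(per_role_losses)
    return dict(zip(players, roles))

def coordinated_imitation(demos, policies, loss_fn, train_fn, iterations=20):
    for _ in range(iterations):
        # Structure-learning step: assign each trajectory to the role whose
        # policy currently explains it best.
        losses = np.array([[loss_fn(policies[k], traj) for k in range(len(policies))]
                           for traj in demos])
        assignment = infer_roles(losses)
        # Imitation step: retrain each role policy on its assigned trajectories.
        for player, role in assignment.items():
            train_fn(policies[role], demos[player])
    return policies
```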
Adaptive action supervision in reinforcement learning from real-world multi-agent demonstrations
Modeling of real-world biological multi-agents is a fundamental problem in various scientific and engineering fields. Reinforcement learning (RL) is a powerful framework to generate flexible and diverse behaviors in cyberspace; however, when modeling real-world biological multi-agents, there is a domain gap between behaviors in the source (i.e., real-world data) and the target (i.e., cyberspace for RL), and the source environment parameters are usually unknown. In this paper, we propose a method for adaptive action supervision in RL from real-world demonstrations in multi-agent scenarios. We adopt an approach that combines RL and supervised learning by selecting actions of demonstrations in RL based on the minimum distance of dynamic time warping for utilizing the information of the unknown source dynamics. This approach can be easily applied to many existing neural network architectures and provide us with an RL model balanced between reproducibility as imitation and generalization ability to obtain rewards in cyberspace. In the experiments, using chase-and-escape and football tasks with the different dynamics between the unknown source and target environments, we show that our approach achieved a balance between the reproducibility and the generalization ability compared with the baselines. In particular, we used the tracking data of professional football players as expert demonstrations in football and show successful performances despite the larger gap between behaviors in the source and target environments than the chase-and-escape task.
Comment: 14 pages, 5 figure
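The core mechanism described — choosing which demonstration to supervise with via minimum dynamic-time-warping distance between trajectories — can be illustrated with a plain DTW computation. The helper names below are assumptions for the sketch, not the authors' code:

```python
# Sketch: pick the demonstration closest (in DTW distance) to the agent's
# recent state trajectory; that demonstration then supplies the supervised
# action. Illustration of DTW-based selection, not the paper's exact method.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW between two state sequences (2-D arrays)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def closest_demonstration(agent_traj, demonstrations):
    """Index of the demonstration with minimum DTW distance to the agent."""
    dists = [dtw_distance(agent_traj, demo) for demo in demonstrations]
    return int(np.argmin(dists))
```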
Scaling multi-agent reinforcement learning to eleven-a-side simulated robot soccer
Electrical and Electronic Engineering
FC Portugal - High-Level Skills Within A Multi-Agent Environment
Throughout the years the RoboCup, an international competition of robotics and artificial intelligence, has seen many developments and improvements in these scientific fields. This competition has different types of challenges, including a 3D Simulation League that holds an annual tournament of simulated soccer games played between several teams, each composed of 11 simulated humanoid robots. The simulation obeys the laws of physics in order to approximate the games as much as possible to real circumstances; in addition, the rules are similar to the original soccer rules with a few alterations and adaptations. The Portuguese team, FC Portugal 3D, has been an assiduous participant in this league's tournaments and has even been victorious several times in past years. Nonetheless, to participate in this competition it is necessary for teams to have their agents able to execute low-level skills such as walking, kicking and getting up. The good record of the FC Portugal 3D team comes from the fact that the methods used to train the robots keep being improved, resulting in better skills. As a matter of fact, these low-level behaviors are considered to be at a point where it is possible to shift the implementations' focus to high-level skills built on these fundamental low-level skills.
Soccer can be seen as a cooperative game where players from the same team have to work together to beat their opponents; consequently, this game is considered to be a good environment to develop, test, and apply cooperative multi-agent implementations. With this in mind, the objective of this dissertation is to construct a multi-agent setplay based on FC Portugal's low-level skills, to be used in certain game situations where the main intent is to score a goal. Recently, many 3D League participants (including the Portuguese team) have been developing skills using Deep Learning methods and obtaining successful results in a reasonable time. The approach taken in this project was to use the Reinforcement Learning algorithm PPO to train all the environments that were created to develop the intended setplay; the results of the training are presented in the second-to-last chapter of this document, followed by suggestions for future implementations.
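For context, a minimal way to set up PPO training for a setplay scenario with an off-the-shelf library might look like the following. SetplayEnv is a hypothetical stand-in (random dynamics, placeholder reward) for the simulated soccer scenario, not the team's actual environment or training setup:

```python
# Sketch of PPO training via stable-baselines3 on a toy Gymnasium environment.
# Observation/action spaces, dynamics, and reward are illustrative placeholders.
import gymnasium as gym
import numpy as np
from gymnasium import spaces
from stable_baselines3 import PPO

class SetplayEnv(gym.Env):
    """Toy stand-in: observation = positions of ball and players,
    action = continuous movement/kick parameters, reward = placeholder shaping."""
    def __init__(self):
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(12,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(4,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.state = self.observation_space.sample()
        return self.state, {}

    def step(self, action):
        # Placeholder dynamics: small random drift of the state.
        self.state = np.clip(
            self.state + 0.01 * np.random.randn(12).astype(np.float32), -1.0, 1.0)
        reward = float(-np.linalg.norm(action))   # placeholder shaping reward
        return self.state, reward, False, False, {}

model = PPO("MlpPolicy", SetplayEnv(), verbose=0)
model.learn(total_timesteps=10_000)
```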
Hypernetworks Analysis of RoboCup Interactions
Robotic soccer simulations are controlled environments in which the rich variety of interactions among agents makes them good candidates to be studied as complex adaptive systems. The challenge is to create an autonomous team of soccer agents that can adapt and improve its behaviour as it plays other teams. By analogy with chess, the movements of the soccer agents and the ball form ever-changing networks as players in one team form structures that give their team an advantage. For example, the Defender's Dilemma involves relationships between an attacker with the ball, a team-mate and a defender. The defender must choose between tackling the player with the ball, or taking a position to intercept a pass to the other attacker. Since these structures involve more than two interacting entities, it is necessary to go beyond networks to multidimensional hypernetworks. In this context, this thesis investigates (i) is it possible to identify patterns of play that lead a team to obtain an advantage? (ii) is it possible to forecast, with a good degree of accuracy, whether a certain game action or sequence of game actions is going to be successful before it has been completed? and (iii) is it possible to make behavioural patterns emerge in the game without specifying the behavioural rules in detail? To investigate these research questions we devised two methods to analyse the interactions between robotic players, one based on traditional programming and one based on Deep Learning. The first method identified thousands of Defender's Dilemma configurations from RoboCup 2D simulator games and found a statistically significant association between winning and the creation of the defender's dilemma by the attackers of the winning team. The second method showed that a feedforward Artificial Neural Network trained on thousands of games can take as input the current game configuration and forecast, to a high degree of accuracy, whether the current action will end up in a goal or not. Finally, we designed our own fast and simple robotic soccer simulator for investigating Reinforcement Learning. This showed that Reinforcement Learning using Proximal Policy Optimization could train two agents in the task of scoring a goal, using only basic actions without pre-built hand-programmed skills. These experiments provide evidence that it is possible to identify advantageous patterns of play, to forecast if an action or sequence of actions will be successful, and to make behavioural patterns emerge in the game without specifying the behavioural rules in detail.
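A bare-bones version of the second method's idea — a feedforward classifier that takes the current game configuration and predicts whether the ongoing action will end in a goal — could be sketched as below. The input size, layer widths, and training snippet are chosen purely for illustration and are not the thesis's network:

```python
# Sketch of a feedforward goal-outcome forecaster over a flattened game
# configuration (e.g. player and ball coordinates). Sizes are placeholders.
import torch
import torch.nn as nn

class GoalForecaster(nn.Module):
    def __init__(self, config_dim=46, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(config_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),          # logit: goal vs. no goal
        )

    def forward(self, config):
        return self.net(config)

# Training step against labelled configurations (placeholder random data).
model = GoalForecaster()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()

configs = torch.randn(256, 46)                    # assumed batch of configurations
labels = torch.randint(0, 2, (256, 1)).float()    # assumed goal / no-goal labels
optimizer.zero_grad()
loss = criterion(model(configs), labels)
loss.backward()
optimizer.step()
```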