    Evolution of a supply chain management game for the Trading Agent Competition

    TAC SCM is a supply chain management game for the Trading Agent Competition (TAC). The purpose of TAC is to spur high-quality research into realistic trading agent problems. We discuss TAC and TAC SCM: game and competition design, scientific impact, and lessons learnt.

    Guided Data Augmentation for Offline Reinforcement Learning and Imitation Learning

    Learning from demonstration (LfD) is a popular technique that uses expert demonstrations to learn robot control policies. However, the difficulty in acquiring expert-quality demonstrations limits the applicability of LfD methods: real-world data collection is often costly, and the quality of the demonstrations depends greatly on the demonstrator's abilities and safety concerns. A number of works have leveraged data augmentation (DA) to inexpensively generate additional demonstration data, but most DA works generate augmented data in a random fashion and ultimately produce highly suboptimal data. In this work, we propose Guided Data Augmentation (GuDA), a human-guided DA framework that generates expert-quality augmented data. The key insight of GuDA is that while it may be difficult to demonstrate the sequence of actions required to produce expert data, a user can often easily identify when an augmented trajectory segment represents task progress. Thus, the user can impose a series of simple rules on the DA process to automatically generate augmented samples that approximate expert behavior. To extract a policy from GuDA, we use off-the-shelf offline reinforcement learning and behavior cloning algorithms. We evaluate GuDA on a physical robot soccer task as well as simulated D4RL navigation tasks, a simulated autonomous driving task, and a simulated soccer task. Empirically, we find that GuDA enables learning from a small set of potentially suboptimal demonstrations and substantially outperforms a DA strategy that samples augmented data randomly.
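
    To make the guided-augmentation idea concrete, the sketch below is a minimal, hypothetical example for a simple 2-D navigation setting: transitions from a small dataset are randomly translated, and a user-written "task progress" rule keeps only the augmented samples that move towards the goal. The helper names (random_translate, makes_progress, guided_augment) and the rule itself are assumptions made for illustration, not the paper's implementation.

```python
import numpy as np

# Hypothetical illustration of human-guided data augmentation for a 2-D
# navigation task: augment stored transitions with a random transform, then
# keep only the samples that a simple user-supplied "progress" rule accepts.

def random_translate(transition, low=-1.0, high=1.0):
    """Shift state and next state by the same random 2-D offset."""
    offset = np.random.uniform(low, high, size=2)
    s, a, r, s_next = transition
    return (s + offset, a, r, s_next + offset)

def makes_progress(transition, goal):
    """User rule: accept a sample only if it ends closer to the goal."""
    s, _, _, s_next = transition
    return np.linalg.norm(s_next - goal) < np.linalg.norm(s - goal)

def guided_augment(dataset, goal, n_samples=10_000):
    """Generate augmented transitions that satisfy the progress rule."""
    augmented = []
    while len(augmented) < n_samples:
        base = dataset[np.random.randint(len(dataset))]
        candidate = random_translate(base)
        if makes_progress(candidate, goal):
            augmented.append(candidate)
    return augmented
```

    The resulting dataset would then be handed to an off-the-shelf offline reinforcement learning or behavior cloning algorithm, as the abstract describes.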

    FC Portugal - High-Level Skills Within A Multi-Agent Environment

    Throughout the years, RoboCup, an international robotics and artificial intelligence competition, has seen many developments and improvements in these scientific fields. The competition comprises several challenges, including a 3D Simulation League, which holds an annual tournament of simulated soccer games played between several teams, each composed of 11 simulated humanoid robots. The simulation obeys the laws of physics in order to approximate real game conditions as closely as possible, and the rules are similar to the original soccer rules with a few alterations and adaptations. The Portuguese team, FC Portugal 3D, has been an assiduous participant in this league's tournaments and has been victorious several times in recent years; nonetheless, to participate in this competition, teams must have agents able to execute low-level skills such as walking, kicking, and getting up. The good record of the FC Portugal 3D team comes from the fact that the methods used to train its robots keep being improved, resulting in better skills. In fact, these low-level behaviours are considered to be at a point where it is possible to shift the implementation focus to high-level skills built on top of these fundamental low-level skills. Soccer can be seen as a cooperative game in which players on the same team have to work together to beat their opponents; consequently, it is considered a good environment in which to develop, test, and apply cooperative multi-agent implementations. With this in mind, the objective of this dissertation is to construct a multi-agent setplay, based on FC Portugal's previously implemented low-level skills, to be used in specific game situations where the main intent is to score a goal. Recently, many 3D League participants (including the Portuguese team) have been developing skills using Deep Reinforcement Learning methods, obtaining successful results in a reasonable time. The approach taken in this project was to use the Reinforcement Learning algorithm PPO to train all the environments created to develop the intended setplay. The training results are presented in the second-to-last chapter of this document, followed by suggestions for future implementations.
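
    As a rough illustration of the training setup described above (custom environments trained with PPO), the example below uses the stable-baselines3 implementation of PPO on a toy placeholder environment. The SetplayEnv class, its observation and action spaces, episode length, and dummy reward are assumptions made for this sketch and do not correspond to FC Portugal's actual environments.

```python
import gymnasium as gym
import numpy as np
from gymnasium import spaces
from stable_baselines3 import PPO

class SetplayEnv(gym.Env):
    """Toy placeholder standing in for a simulated-soccer setplay environment."""

    def __init__(self):
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(8,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)
        self.steps = 0

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.steps = 0
        return self.observation_space.sample(), {}

    def step(self, action):
        self.steps += 1
        obs = self.observation_space.sample()          # dummy observation
        reward = float(-np.linalg.norm(action))        # dummy shaping reward
        terminated = False
        truncated = self.steps >= 200                  # fixed episode length
        return obs, reward, terminated, truncated, {}

model = PPO("MlpPolicy", SetplayEnv(), verbose=1)
model.learn(total_timesteps=10_000)                    # short run for illustration
model.save("setplay_ppo")
```

    In a real setplay environment the reward would encode progress of the play (for example, a successful pass or a goal), and training would run for far more timesteps.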

    Deep learning based approaches for imitation learning.

    Imitation learning refers to an agent's ability to mimic a desired behaviour by learning from observations. The field is rapidly gaining attention due to recent advances in computational and communication capabilities as well as rising demand for intelligent applications. The goal of imitation learning is to describe the desired behaviour by providing demonstrations rather than instructions. This enables agents to learn complex behaviours with general learning methods that require minimal task-specific information. However, imitation learning faces many challenges. The objective of this thesis is to advance the state of the art in imitation learning by adopting deep learning methods to address two major challenges of learning from demonstrations. The first is representing the demonstrations in a manner that is adequate for learning. We propose novel Convolutional Neural Network (CNN) based methods to automatically extract feature representations from raw visual demonstrations and learn to replicate the demonstrated behaviour. This alleviates the need for task-specific feature extraction and provides a general learning process that is adequate for multiple problems. The second challenge is generalizing a policy to situations unseen in the training demonstrations. This is a common problem because demonstrations typically show the best way to perform a task and do not offer any information about recovering from suboptimal actions. Several methods are investigated to improve the agent's generalization ability based on its initial performance. Our contributions in this area are threefold. First, we propose an active data aggregation method that queries the demonstrator in situations of low confidence. Second, we investigate combining learning from demonstrations with reinforcement learning, proposing a deep reward shaping method that learns a potential reward function from demonstrations. Finally, memory architectures in deep neural networks are investigated to provide context to the agent when taking actions; using recurrent neural networks captures the dependencies within the state-action sequences taken by the agent. The experiments are conducted in simulated environments on 2D and 3D navigation tasks learned from raw visual data, as well as a 2D soccer simulator. The proposed methods are compared to state-of-the-art deep reinforcement learning methods. The results show that deep learning architectures can learn suitable representations from raw visual data and effectively map them to atomic actions. The proposed methods for addressing generalization show improvements over using supervised learning or reinforcement learning alone. The results are thoroughly analysed to identify the benefits of each approach and the situations in which it is most suitable.
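
    Since the abstract above centres on mapping raw visual observations directly to actions, the sketch below shows one generic form such a model can take: a small convolutional network trained by behaviour cloning with a cross-entropy loss on demonstration pairs. The architecture, input size (84x84 RGB frames), and action count are assumptions for illustration only and do not reproduce the thesis' models.

```python
import torch
import torch.nn as nn

class VisualPolicy(nn.Module):
    """Small CNN mapping raw image observations to discrete action logits."""

    def __init__(self, n_actions: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, obs):                      # obs: (batch, 3, 84, 84) in [0, 1]
        return self.head(self.features(obs))

policy = VisualPolicy(n_actions=4)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# One behaviour-cloning step on a dummy batch of demonstration pairs.
obs = torch.rand(32, 3, 84, 84)                  # demonstrator observations
actions = torch.randint(0, 4, (32,))             # demonstrator actions
loss = loss_fn(policy(obs), actions)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```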

    Proceedings of MathSport International 2017 Conference

    Proceedings of the MathSport International 2017 Conference, held in the Botanical Garden of the University of Padua, June 26-28, 2017. MathSport International organizes biennial conferences dedicated to all topics where mathematics and sport meet. Topics include: performance measures, optimization of sports performance, statistics and probability models, mathematical and physical models in sports, competitive strategies, statistics and probability match outcome models, optimal tournament design and scheduling, decision support systems, analysis of rules and adjudication, econometrics in sport, analysis of sporting technologies, financial valuation in sport, e-sports (gaming), betting and sports.