    Implementation of a Localization-Oriented HRI for Walking Robots in the RoboCup Environment

    This paper presents the design and implementation of a human–robot interface capable of evaluating robot localization performance and maintaining full control of robot behaviors in the RoboCup domain. The system consists of legged robots, behavior modules, an overhead visual tracking system, and a graphical user interface. A human–robot communication framework is designed for executing cooperative and competitive processing tasks between users and robots, using an object-oriented and modularized software architecture for operability and functionality. Some experimental results are presented to show the performance of the proposed system based on simulated and real-time information.
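    The abstract does not give implementation details, but the described layout (overhead tracker, behavior modules, GUI) can be pictured as a small modular interface. The sketch below is purely illustrative; every class and method name is an assumption, not the paper's actual design.

```python
# Illustrative sketch only: a modular HRI layout with an overhead tracker,
# selectable behavior modules, and a GUI-facing console. All names here are
# assumptions, not the architecture described in the paper.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Pose:
    x: float
    y: float
    theta: float


class OverheadTracker:
    """Ground-truth pose source, e.g. an overhead camera system."""
    def current_pose(self, robot_id: int) -> Pose:
        return Pose(0.0, 0.0, 0.0)  # placeholder measurement


class BehaviorModule:
    """One selectable robot behavior (walk, kick, localize, ...)."""
    def __init__(self, name: str, step: Callable[[Pose], None]):
        self.name, self.step = name, step


class HRIConsole:
    """GUI-facing facade: triggers behaviors and reports localization error."""
    def __init__(self, tracker: OverheadTracker, behaviors: List[BehaviorModule]):
        self.tracker = tracker
        self.behaviors: Dict[str, BehaviorModule] = {b.name: b for b in behaviors}

    def run_behavior(self, robot_id: int, name: str) -> None:
        truth = self.tracker.current_pose(robot_id)
        self.behaviors[name].step(truth)

    def localization_error(self, estimate: Pose, truth: Pose) -> float:
        return ((estimate.x - truth.x) ** 2 + (estimate.y - truth.y) ** 2) ** 0.5
```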

    Multiagent reactive plan application learning in dynamic environments


    Behavior Acquisition in RoboCup Middle Size League Domain


    Belief-Desire-Intention in RoboCup

    The Belief-Desire-Intention (BDI) model of a rational agent proposed by Bratman has strongly influenced the research of intelligent agents in Multi-Agent Systems (MAS). Jennings extended Bratman's concept of a single rational agent into MAS in the form of joint intention and joint responsibility. Kitano et al. initiated RoboCup Soccer Simulation as a standard problem in MAS analogous to the Blocks World problem in traditional AI. This has motivated many researchers from various areas of study, such as machine learning, planning, and intelligent agent research. The first RoboCup team to incorporate the BDI concept was the ATHumboldt98 team by Burkhard et al. In this thesis we present a novel collaborative BDI architecture for RoboCup 2D Soccer Simulation, the TA09 team, which is based on Bratman's rational agent, influenced by Cohen and Levesque's notion of commitment, and incorporates Jennings' joint intention. The TA09 team features observation-based coordination, layered planning, and dynamic formation positioning.
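    Bratman-style BDI is usually realized as a deliberation loop over beliefs, desires, and intentions. The sketch below shows that generic cycle only; the class and method names are illustrative assumptions, not TA09's actual interfaces.

```python
# Minimal BDI deliberation loop, sketched for illustration; this is the
# generic Bratman-style cycle, not the TA09 team's implementation.
from typing import Callable, Dict, List, Optional

Belief = Dict[str, object]   # e.g. {"ball": (x, y), "teammates": [...]}
Plan = List[str]             # an ordered list of primitive actions


class BDIAgent:
    def __init__(self, options: Dict[str, Callable[[Belief], Optional[Plan]]]):
        self.beliefs: Belief = {}
        self.intention: Optional[str] = None   # currently committed desire
        self.plan: Plan = []
        self.options = options                 # desire name -> planner

    def perceive(self, observation: Belief) -> None:
        self.beliefs.update(observation)       # belief revision

    def deliberate(self) -> None:
        # Commit to a new intention only when the current one is finished,
        # mirroring Cohen & Levesque-style persistent commitment.
        if self.intention is None or not self.plan:
            for desire, planner in self.options.items():
                plan = planner(self.beliefs)
                if plan:                       # desire is achievable now
                    self.intention, self.plan = desire, plan
                    break

    def act(self) -> Optional[str]:
        return self.plan.pop(0) if self.plan else None
```

    A joint intention in Jennings' sense would additionally require teammates to adopt and monitor the same intention before acting on it.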

    Programming Robosoccer agents by modelling human behavior

    The Robosoccer simulator is a challenging environment for artificial intelligence, where a human has to program a team of agents and introduce it into a virtual soccer environment. Usually, Robosoccer agents are programmed by hand. In some cases, agents make use of machine learning (ML) to adapt to and predict the behavior of the opposing team, but the bulk of the agent is preprogrammed. The main aim of this paper is to transform Robosoccer into an interactive game and let a human control a Robosoccer agent. ML techniques can then be used to model his or her behavior from training instances generated during play. This model is later used to control a Robosoccer agent, thus imitating the human's behavior. We have focused our research on low-level behaviors, such as looking for the ball, dribbling the ball towards the goal, or scoring in the presence of opponent players. Results have shown that Robosoccer agents can indeed be controlled by programs that model human play.
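    The core idea, learning a controller from logged human play, amounts to behavioral cloning over (state, action) pairs. The sketch below illustrates that idea with scikit-learn; the feature layout, the decision-tree choice, and the toy data are assumptions, not the models or data used in the paper.

```python
# Behavioral-cloning sketch: fit a classifier on (state, action) pairs
# logged while a human drives a Robosoccer agent, then reuse it as the
# agent's controller. Feature layout and model choice are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Each row: [ball_dist, ball_angle, goal_dist, goal_angle, nearest_opp_dist]
X_train = np.array([
    [12.0, 0.3, 40.0, 0.1, 9.0],
    [ 2.5, 0.0, 30.0, 0.0, 7.0],
    [ 1.0, 0.1,  8.0, 0.0, 3.0],
])
# Actions recorded from the human at those moments
y_train = np.array(["turn", "dash", "kick"])

model = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)

def control(state_features: np.ndarray) -> str:
    """Imitate the human: map the current low-level state to an action."""
    return model.predict(state_features.reshape(1, -1))[0]

print(control(np.array([1.2, 0.05, 9.0, 0.02, 4.0])))  # likely "kick"
```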

    FC Portugal - High-Level Skills Within A Multi-Agent Environment

    Throughout the years, RoboCup, an international robotics and artificial intelligence competition, has seen many developments and improvements in these scientific fields. The competition comprises different challenges, including a 3D Simulation League, which holds an annual tournament of simulated soccer games played between teams of 11 simulated humanoid robots. The simulation obeys the laws of physics in order to approximate real game conditions as closely as possible, and the rules are similar to the original soccer rules with a few alterations and adaptations. The Portuguese team, FC Portugal 3D, has been an assiduous participant in this league's tournaments and has been victorious several times in past years. To participate in this competition, teams need agents able to execute low-level skills such as walking, kicking, and getting up. The good record of the FC Portugal 3D team comes from the fact that the methods used to train its players are continuously improved, resulting in better skills. In fact, these low-level behaviors are considered to be at a point where it is possible to shift the implementation focus to high-level skills built on these fundamental low-level skills. Soccer can be seen as a cooperative game in which players from the same team must work together to beat their opponents; consequently, it is a good environment in which to develop, test, and apply cooperative multi-agent implementations. With this in mind, the objective of this dissertation is to construct a multi-agent setplay based on FC Portugal's previously implemented low-level skills, to be used in specific game situations where the main intent is to score a goal. Recently, many 3D League participants (including the Portuguese team) have developed skills using Deep Reinforcement Learning methods, obtaining successful results in a reasonable time. The approach taken in this project was to use the Reinforcement Learning algorithm PPO to train all the environments created to develop the intended setplay; the training results are presented in the second-to-last chapter of this document, followed by suggestions for future implementations.
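    As a rough picture of the PPO training described above, the sketch below wraps a placeholder setplay scenario in a Gymnasium environment and trains it with Stable-Baselines3. The environment, observation layout, and reward are assumptions for illustration, not FC Portugal's actual training code.

```python
# Sketch of training a setplay policy with PPO via Stable-Baselines3.
# SetplayEnv is a placeholder: a real version would wrap the RoboCup 3D
# simulator, expose teammate/ball/opponent state, and reward scored goals.
import gymnasium as gym
import numpy as np
from gymnasium import spaces
from stable_baselines3 import PPO


class SetplayEnv(gym.Env):
    """Toy stand-in for a multi-agent setplay scenario."""
    def __init__(self):
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(8,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)
        self._t = 0

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self._t = 0
        return self.observation_space.sample(), {}

    def step(self, action):
        self._t += 1
        obs = self.observation_space.sample()
        reward = float(-np.linalg.norm(action))  # placeholder shaping term
        terminated = self._t >= 50               # episode = one setplay attempt
        return obs, reward, terminated, False, {}


model = PPO("MlpPolicy", SetplayEnv(), verbose=0)
model.learn(total_timesteps=10_000)              # train the setplay policy
```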

    Rete Algorithm Applied to Robotic Soccer

    This article is a first approach to the use of the Rete algorithm to design a team of robotic soccer-playing agents for the RoboCup Soccer Server. The Rete algorithm is widely used to design rule-based expert systems. The RoboCup Soccer Server is a system that simulates 2D robotic soccer matches. The paper presents an architecture, based on the CM United team architecture, for the RoboCup Soccer Server simulation system. It generalizes the low-level information received by the agent into high-level soccer concepts, so that it can take advantage of expert system design techniques.
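    A full Rete network compiles rule conditions into a shared network of nodes so facts are tested once and partial matches are cached. The sketch below only shows the kind of condition-action rules such an engine would index, using a naive forward-chaining matcher; the predicates and thresholds are illustrative assumptions, not the paper's rule base.

```python
# Naive forward-chaining rule matcher for illustration. A real Rete engine
# would compile these conditions into a shared network of alpha/beta nodes
# so that facts are tested once and partial matches are cached.
from typing import Callable, Dict, List, Tuple

Facts = Dict[str, float]                      # high-level soccer concepts
Rule = Tuple[str, Callable[[Facts], bool], str]

RULES: List[Rule] = [
    ("shoot",    lambda f: f["dist_to_goal"] < 20 and f["dist_to_ball"] < 1, "kick_to_goal"),
    ("chase",    lambda f: f["dist_to_ball"] >= 1,                           "dash_to_ball"),
    ("fallback", lambda f: True,                                             "move_to_home"),
]

def abstract_percepts(ball_dist: float, goal_dist: float) -> Facts:
    """Generalize low-level sensor values into high-level concepts."""
    return {"dist_to_ball": ball_dist, "dist_to_goal": goal_dist}

def select_action(facts: Facts) -> str:
    for _name, condition, action in RULES:
        if condition(facts):                  # first matching rule fires
            return action
    raise RuntimeError("no rule matched")

print(select_action(abstract_percepts(ball_dist=0.5, goal_dist=15.0)))  # kick_to_goal
```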

    Application of Fuzzy State Aggregation and Policy Hill Climbing to Multi-Agent Systems in Stochastic Environments

    Reinforcement learning is one of the more attractive machine learning technologies, due to its unsupervised learning structure and its ability to continue learning even as the operating environment changes. Applying this learning to multiple cooperative software agents (a multi-agent system) not only allows each individual agent to learn from its own experience, but also opens up the opportunity for the individual agents to learn from the other agents in the system, thus accelerating the rate of learning. This research presents the novel use of fuzzy state aggregation, as the means of function approximation, combined with the policy hill climbing methods of Win or Learn Fast (WoLF) and policy-dynamics-based WoLF (PD-WoLF). The combination of fast policy hill climbing (PHC) and fuzzy state aggregation (FSA) function approximation is tested in two stochastic environments: Tileworld and the robot soccer domain, RoboCup. The Tileworld results demonstrate that a single agent using the combination of FSA and PHC learns faster and performs better than combined fuzzy state aggregation and Q-learning alone. Results from the RoboCup domain again illustrate that the policy hill climbing algorithms perform better than Q-learning alone in a multi-agent environment. The learning is further enhanced by allowing the agents to share their experience through weighted strategy sharing.
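    The WoLF policy hill-climbing update is compact enough to sketch. The version below operates on a plain tabular state index (fuzzy state aggregation, as used in this work, would replace that index with membership-weighted aggregate states), and the hyperparameters and simplex projection are simplified for illustration.

```python
# Tabular WoLF-PHC sketch (Bowling & Veloso's "Win or Learn Fast" policy
# hill climbing). Fuzzy state aggregation would replace the integer state
# index with membership-weighted aggregate states.
import numpy as np

class WoLFPHC:
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9,
                 delta_win=0.01, delta_lose=0.04):
        self.q = np.zeros((n_states, n_actions))
        self.pi = np.full((n_states, n_actions), 1.0 / n_actions)      # current policy
        self.pi_avg = np.full((n_states, n_actions), 1.0 / n_actions)  # average policy
        self.counts = np.zeros(n_states)
        self.alpha, self.gamma = alpha, gamma
        self.delta_win, self.delta_lose = delta_win, delta_lose

    def update(self, s, a, r, s_next):
        # Standard Q-learning backup.
        self.q[s, a] += self.alpha * (r + self.gamma * self.q[s_next].max() - self.q[s, a])

        # Update the running average policy estimate.
        self.counts[s] += 1
        self.pi_avg[s] += (self.pi[s] - self.pi_avg[s]) / self.counts[s]

        # "Win or learn fast": small step when winning, large when losing.
        winning = self.pi[s] @ self.q[s] > self.pi_avg[s] @ self.q[s]
        delta = self.delta_win if winning else self.delta_lose

        # Hill-climb toward the greedy action, then renormalize onto the simplex.
        greedy = self.q[s].argmax()
        n = self.q.shape[1]
        for act in range(n):
            step = delta if act == greedy else -delta / (n - 1)
            self.pi[s, act] = np.clip(self.pi[s, act] + step, 0.0, 1.0)
        self.pi[s] /= self.pi[s].sum()

    def act(self, s, rng=np.random):
        return rng.choice(len(self.pi[s]), p=self.pi[s])
```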