37 research outputs found

    Predicting opponent actions by observation

    Get PDF
    In competitive domains, knowledge about the opponent can give players a clear advantage. This idea led us in the past to propose an approach for acquiring models of opponents, based only on observation of their input-output behavior. If opponent outputs could be accessed directly, a model could be constructed by feeding a machine learning method with traces of the opponent. However, that is not the case in the RoboCup domain. To overcome this problem, in this paper we present a three-phase approach to model the low-level behavior of individual opponent agents. First, we build a classifier to label opponent actions based on observation. Second, our agent observes an opponent and labels its actions using this classifier. From these observations, a model is constructed to predict the opponent's actions. Finally, the agent uses the model to anticipate opponent reactions. We present a proof-of-principle of our approach, termed OMBO (Opponent Modeling Based on Observation), in which a striker agent anticipates a goalie. Results show that scores are significantly higher using the acquired model of the opponent's actions.

    Situation based strategic positioning for coordinating a team of homogeneous agents

    Get PDF
    In this paper we propose an approach for coordinating a team of homogeneous agents based on a flexible common team strategy as well as on the concepts of Situation Based Strategic Positioning and Dynamic Positioning and Role Exchange. We also introduce an agent architecture including a specific high-level decision module capable of implementing this strategy. Our proposal is based on a formalization of what a team strategy is for competing with an opponent team having opposite goals. A team strategy is composed of a set of agent types and a set of tactics, which are in turn composed of several formations. Formations are used for different situations and assign each agent a default spatial positioning and an agent type (defining its behaviour at several levels). Agent reactivity is also introduced for appropriate response to the dynamics of the current situation. However, in our approach this is done in a way that preserves team coherence instead of permitting uncoordinated agent behaviour. We have applied this coordination approach with success to the RoboSoccer simulated domain. The FC Portugal team, developed using this approach, won the RoboCup 2000 (simulation league) European and World championships, scoring a total of 180 goals and conceding none.
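    The core of situation-based strategic positioning can be illustrated with a small sketch. This is not the paper's actual formulation; it only assumes, hypothetically, that each agent's target position blends its formation home position with the ball position and is clipped to the field. All names and weights are illustrative.

    ```python
    # Hypothetical sketch of situation-based positioning: an agent's target
    # is its formation home position shifted toward the ball, kept inside
    # the field. The attraction weight and field bounds are illustrative.

    def sbsp_target(home, ball, attraction=0.3,
                    field_x=(-52.5, 52.5), field_y=(-34.0, 34.0)):
        """Blend the formation home position with the ball position."""
        tx = home[0] + attraction * (ball[0] - home[0])
        ty = home[1] + attraction * (ball[1] - home[1])
        # Clip the target to the field boundaries.
        tx = max(field_x[0], min(field_x[1], tx))
        ty = max(field_y[0], min(field_y[1], ty))
        return (tx, ty)

    print(sbsp_target(home=(-10.0, 5.0), ball=(20.0, 0.0)))  # → (-1.0, 3.5)
    ```

    A real implementation would also switch the home position per formation and game situation, which this sketch omits.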

    Gliders2d: Source Code Base for RoboCup 2D Soccer Simulation League

    Full text link
    We describe Gliders2d, a base code release for Gliders, a soccer simulation team which won the RoboCup Soccer 2D Simulation League in 2016. We trace six evolutionary steps, each of which is encapsulated in a sequential change of the released code, from v1.1 to v1.6, starting from agent2d-3.1.1 (set as the baseline v1.0). These changes improve performance by adjusting the agents' stamina management, their pressing behaviour and the action-selection mechanism, as well as their positional choice in both attack and defense, and by enabling riskier passes. The resultant behaviour, which is sufficiently generic to be applicable to physical robot teams, increases the players' mobility and achieves better control of the field. The last presented version, Gliders2d-v1.6, approaches the strength of Gliders2013 and outperforms agent2d-3.1.1 by four goals per game on average. The sequential improvements demonstrate how the methodology of human-based evolutionary computation can markedly boost overall performance with even a small number of controlled steps. Comment: 12 pages, 1 figure, Gliders2d code release.

    Hierarchical control in robot soccer using robotic multi-agents

    Get PDF
    RoboCup is an international competition designed to promote Artificial Intelligence (AI) and intelligent robotics research through a standard problem: a soccer game where a wide range of technologies can be integrated [12]. This article shows, in a general way, an architecture proposed for controlling a robot soccer team. The team has been designed with the agent concept for robot control in the Middle League SimuroSot category (FIRA). A brief description of the control architecture is presented. In addition, this paper shows a simple robotic-agent control without an explicit communication of actions to agents. Topic: Theory (TEOR). Red de Universidades con Carreras en Informática (RedUNCI).

    OMBO: An opponent modeling approach

    Get PDF
    In competitive domains, some knowledge about the opponent can give players a clear advantage. This idea led many people to propose approaches that automatically acquire models of opponents, based only on the observation of their input–output behavior. If opponent outputs could be accessed directly, a model could be constructed by feeding a machine learning method with traces of the opponent's behavior. However, that is not the case in the RoboCup domain, where an agent does not have direct access to the opponent's inputs and outputs. Rather, the agent sees the opponent's behavior from its own point of view, and inputs and outputs (actions) have to be inferred from observation. In this paper, we present an approach to model the low-level behavior of individual opponent agents. First, we build a classifier to infer and label opponent actions based on observation. Second, our agent observes an opponent and labels its actions using this classifier. From these observations, machine learning techniques generate a model that predicts the opponent's actions. Finally, the agent uses the model to anticipate opponent actions. To test our ideas, we have created an architecture called OMBO (Opponent Modeling Based on Observation). Using OMBO, a striker agent can anticipate goalie actions. Results show that in this striker-goalie scenario, scores are significantly higher using the acquired model of the opponent's actions. This work has been partially supported by the Spanish MCyT under projects TRA2007-67374-C02-02 and TIN-2005-08818-C04. It has also been supported under MEC grant TIN2005-08945-C06-05. We thank the anonymous reviewers for their helpful comments.
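    The three OMBO phases described above can be sketched end to end. This is a minimal stand-in, not the paper's actual models: the hypothetical 1-nearest-neighbour "classifiers" here merely play the roles of the learned action labeller (phase 1) and the state-to-action predictor (phases 2-3); all feature values and labels are invented for illustration.

    ```python
    # Hypothetical sketch of the OMBO pipeline: (1) label observed opponent
    # actions with a classifier, (2) build a state -> action model from the
    # labelled trace, (3) use the model to anticipate the opponent.
    # The 1-NN "classifiers" stand in for the real learned models.

    def nearest_label(example, labelled_data):
        """1-nearest-neighbour classification over feature tuples."""
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(labelled_data, key=lambda item: dist(item[0], example))[1]

    # Phase 1: a classifier mapping observed state changes to action labels.
    action_labeller = [((1.0, 0.0), "dash"), ((0.0, 1.0), "turn")]

    # Phase 2: observe the opponent, label its actions with the classifier,
    # and build a (game state, labelled action) model from the trace.
    trace = [((0.9, 0.1), (10.0,)), ((0.1, 0.9), (12.0,))]
    model = [(state, nearest_label(obs, action_labeller))
             for obs, state in trace]

    # Phase 3: anticipate the opponent's action in a new game state.
    print(nearest_label((10.5,), model))  # → dash
    ```

    The real system replaces both lookup tables with models trained by standard machine learning methods on many labelled observations.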

    Correcting and improving imitation models of humans for Robosoccer agents

    Get PDF
    Proceedings of: 2005 IEEE Congress on Evolutionary Computation (CEC'05), Edinburgh, 2-5 Sept. 2005. The Robosoccer simulator is a challenging environment, in which a human introduces a team of agents into a virtual football environment. Typically, agents are programmed by hand, but it would be a great advantage to transfer human experience to football agents. The first aim of this paper is to use machine learning techniques to obtain models of humans playing Robosoccer. These models can later be used to control a Robosoccer agent. However, the models did not play as smoothly and optimally as the human. To solve this problem, the second goal of this paper is to incrementally correct the models by means of evolutionary techniques, and to adapt them against opponents more difficult than the ones beatable by the human.

    Pyrus Base: An Open Source Python Framework for the RoboCup 2D Soccer Simulation

    Full text link
    Soccer, also known as football in some parts of the world, involves two teams of eleven players whose objective is to score more goals than the opposing team. To simulate this game and attract scientists from all over the world to conduct research and participate in an annual computer-based soccer world cup, Soccer Simulation 2D (SS2D) was one of the leagues initiated in the RoboCup competition. In every SS2D game, two teams of 11 players and one coach connect to the RoboCup Soccer Simulation Server and compete against each other. Over the past few years, several C++ base codes have been employed to control agents' behavior and their communication with the server. Although C++ base codes have laid the foundation for SS2D, developing them requires an advanced level of C++ programming. The complexity of the C++ language is a limiting disadvantage of C++ base codes for all users, especially beginners. To overcome the challenges of C++ base codes and provide a powerful baseline for developing machine learning concepts, we introduce Pyrus, the first Python base code for SS2D. Pyrus is developed to encourage researchers to efficiently develop their ideas and integrate machine learning algorithms into their teams. The Pyrus base is open-source code, and it is publicly available under the MIT License on GitHub.

    Development of a robot soccer team

    Get PDF
    The purpose of this article is to present a first experience in building a robot soccer team. We describe the operation of the INCASoT team, designed for its entry in the CAFR-2003 competition (UBA), with a robot (agent) control strategy based on a finite state machine. This state machine specifies how an agent holds its position, passes the ball, and avoids obstacles. The robots are organized into formations with specific playing roles. INCASoT constitutes the vision of a basic robot soccer team. Topic: Artificial Intelligence.
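    A finite-state-machine controller of the kind this abstract describes can be sketched as a transition table. The states, events, and sensor names below are hypothetical, chosen only to mirror the behaviours mentioned (hold position, pass, avoid obstacles); they are not taken from INCASoT itself.

    ```python
    # Illustrative finite-state-machine controller for a soccer agent.
    # (state, event) pairs map to the next state; unmatched events leave
    # the state unchanged. All names here are hypothetical.

    TRANSITIONS = {
        ("hold", "ball_near"): "pass_ball",
        ("hold", "obstacle_near"): "avoid",
        ("pass_ball", "ball_gone"): "hold",
        ("avoid", "path_clear"): "hold",
    }

    def step(state, event):
        """Return the next state, staying put if no transition matches."""
        return TRANSITIONS.get((state, event), state)

    state = "hold"
    for event in ["obstacle_near", "path_clear", "ball_near", "ball_gone"]:
        state = step(state, event)
    print(state)  # → hold (back to holding position after the sequence)
    ```

    Keeping the transitions in a single table makes the machine easy to inspect and extend with new roles, which is one reason FSMs are a common first control strategy for robot soccer teams.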

    Application of Fuzzy State Aggregation and Policy Hill Climbing to Multi-Agent Systems in Stochastic Environments

    Get PDF
    Reinforcement learning is one of the more attractive machine learning technologies, due to its unsupervised learning structure and its ability to continually learn even as the operating environment changes. Applying this learning to multiple cooperative software agents (a multi-agent system) not only allows each individual agent to learn from its own experience, but also opens up the opportunity for the individual agents to learn from the other agents in the system, thus accelerating the rate of learning. This research presents the novel use of fuzzy state aggregation, as the means of function approximation, combined with the policy hill climbing methods of Win or Learn Fast (WoLF) and policy-dynamics based WoLF (PD-WoLF). The combination of fast policy hill climbing (PHC) and fuzzy state aggregation (FSA) function approximation is tested in two stochastic environments: Tileworld and the robot soccer domain, RoboCup. The Tileworld results demonstrate that a single agent using the combination of FSA and PHC learns quicker and performs better than combined fuzzy state aggregation and Q-learning alone. Results from the RoboCup domain again illustrate that the policy hill climbing algorithms perform better than Q-learning alone in a multi-agent environment. The learning is further enhanced by allowing the agents to share their experience through weighted strategy sharing.
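    The WoLF policy-hill-climbing idea can be shown for a single state. This sketch is a simplified illustration, not the thesis's FSA-based implementation: Q-values are updated as usual, the policy is nudged toward the greedy action, and the step size is small when the current policy is "winning" (doing at least as well as its long-run average) and larger when losing. All parameter values and the one-state, stateless-reward setup are assumptions made for brevity.

    ```python
    # Simplified WoLF-PHC for a single state with two actions: small policy
    # step when winning, large step when losing. Parameters are illustrative.

    actions = [0, 1]
    Q = {a: 0.0 for a in actions}            # action values for the one state
    pi = {a: 0.5 for a in actions}           # current mixed policy
    avg_pi = {a: 0.5 for a in actions}       # long-run average policy
    alpha, d_win, d_lose = 0.1, 0.01, 0.04   # learning rates
    count = 0

    def update(action, reward):
        global count
        count += 1
        Q[action] += alpha * (reward - Q[action])      # Q update (no next state)
        for a in actions:                              # track the average policy
            avg_pi[a] += (pi[a] - avg_pi[a]) / count
        # "Winning" if the current policy scores at least the average policy.
        winning = (sum(pi[a] * Q[a] for a in actions)
                   >= sum(avg_pi[a] * Q[a] for a in actions))
        delta = d_win if winning else d_lose
        best = max(actions, key=lambda a: Q[a])
        for a in actions:                              # hill-climb toward best
            if a == best:
                pi[a] = min(1.0, pi[a] + delta)
            else:
                pi[a] = max(0.0, pi[a] - delta / (len(actions) - 1))

    for t in range(100):                     # action 1 is always rewarded
        update(t % 2, float(t % 2))
    print(pi[1])  # → 1.0 (policy concentrates on the rewarded action)
    ```

    The variable learning rate is the key design choice: learning cautiously while winning and fast while losing helps the policy converge in the presence of other adapting agents.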