753 research outputs found

    Artificial Intelligence and Systems Theory: Applied to Cooperative Robots

    This paper describes an approach to the design of a population of cooperative robots based on concepts borrowed from Systems Theory and Artificial Intelligence. The research has been developed under the SocRob project, carried out by the Intelligent Systems Laboratory at the Institute for Systems and Robotics - Instituto Superior Técnico (ISR/IST) in Lisbon. The acronym of the project stands both for "Society of Robots" and "Soccer Robots", the case study where we are testing our population of robots. Designing soccer robots is a very challenging problem: the robots must not only act to shoot a ball towards the goal, but also detect and avoid both static (walls, stopped robots) and dynamic (moving robots) obstacles. Furthermore, they must cooperate to defeat an opposing team. Our past and current research in soccer robotics includes cooperative sensor fusion for world modeling, object recognition and tracking, robot navigation, multi-robot distributed task planning and coordination (including cooperative reinforcement learning in cooperative and adversarial environments), and behavior-based architectures for real-time task execution of cooperating robot teams.
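    The abstract names a behavior-based architecture for real-time task execution of cooperating robots without giving implementation detail. Below is a minimal sketch of a priority-ordered behavior arbiter in the spirit of such architectures; every class name, behavior, and world-model field here is a hypothetical illustration, not code from the SocRob project.

```python
# Minimal sketch of a priority-based behavior arbiter (hypothetical names,
# not from the SocRob code base): the first applicable behavior wins, so
# list order encodes priority and safety behaviors can override play.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Behavior:
    name: str
    applicable: Callable[[dict], bool]  # predicate over the shared world model
    act: Callable[[dict], str]          # returns a motor command

def arbitrate(behaviors: list[Behavior], world: dict) -> Optional[str]:
    """Run the first behavior whose precondition holds in the world model."""
    for b in behaviors:
        if b.applicable(world):
            return b.act(world)
    return None

# Highest priority first: obstacle avoidance overrides offensive play.
behaviors = [
    Behavior("avoid_obstacle",
             lambda w: w["nearest_obstacle_dist"] < 0.5,
             lambda w: "turn_away"),
    Behavior("shoot",
             lambda w: w["has_ball"] and w["goal_visible"],
             lambda w: "kick"),
    Behavior("go_to_ball", lambda w: True, lambda w: "drive_to_ball"),
]

world = {"nearest_obstacle_dist": 2.0, "has_ball": True, "goal_visible": True}
print(arbitrate(behaviors, world))  # -> "kick"
```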

    Synthesized cooperative strategies for intelligent multi-robots in a real-time distributed environment : a thesis presented in partial fulfillment of the requirements for the degree of Master of Science in Computer Science at Massey University, Albany, New Zealand

    In the robot soccer domain, real-time response usually curtails the development of more complex AI-based game strategies, path planning and team cooperation between intelligent agents. In light of this problem, distributing computationally intensive algorithms between several machines to control, coordinate and dynamically assign roles to a team of robots, and allowing them to communicate via a network, gives rise to real-time cooperation in a multi-robot team. This research presents a range of algorithms tested on a distributed system platform that allows for cooperating multi-agents in a dynamic environment. The test bed is an extension of a popular robot simulation system in the public domain developed at Carnegie Mellon University, known as TeamBots. A low-level real-time network game protocol using TCP/IP and UDP was incorporated to allow a group of agents to communicate and work cohesively as a team. Intelligent agents were defined to take on roles such as game coach agent, vision agent, and soccer player agents. Further, team cooperation is demonstrated by integrating a real-time fuzzy logic-based ball-passing algorithm and a fuzzy logic algorithm for path planning. Keywords: Artificial Intelligence, Ball Passing, Coaching System, Collaborative, Distributed Multi-Agent, Fuzzy Logic, Role Assignment
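    The thesis abstract mentions a real-time fuzzy logic ball-passing algorithm without detailing its rule base. The sketch below shows one shape such a rule could take; the membership functions, thresholds, and the single pass/hold rule are illustrative assumptions, not the parameters used in the thesis.

```python
# Sketch of a fuzzy pass-desirability rule (illustrative parameters only):
# fuzzify two crisp inputs, then combine them with a Mamdani-style AND (min).

def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def ramp_up(x, a, b):
    """0 below a, 1 above b, linear in between."""
    return max(0.0, min(1.0, (x - a) / (b - a)))

def pass_desirability(teammate_dist_m, nearest_opponent_dist_m):
    teammate_in_range = tri(teammate_dist_m, 0.5, 2.0, 5.0)  # "teammate at passing distance"
    lane_open = ramp_up(nearest_opponent_dist_m, 0.5, 2.5)   # "no opponent blocking"
    # Rule: pass IF teammate in range AND lane open (AND taken as min).
    return min(teammate_in_range, lane_open)

print(pass_desirability(1.5, 3.0))  # ~0.67: a reasonably attractive pass
```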

    Behavior Acquisition in RoboCup Middle Size League Domain


    Application of Fuzzy State Aggregation and Policy Hill Climbing to Multi-Agent Systems in Stochastic Environments

    Reinforcement learning is one of the more attractive machine learning technologies, due to its unsupervised learning structure and its ability to continue learning even as the operating environment changes. Applying this learning to multiple cooperative software agents (a multi-agent system) not only allows each individual agent to learn from its own experience, but also opens up the opportunity for the individual agents to learn from the other agents in the system, thus accelerating the rate of learning. This research presents the novel use of fuzzy state aggregation, as the means of function approximation, combined with the policy hill climbing methods of Win or Learn Fast (WoLF) and policy-dynamics-based WoLF (PD-WoLF). The combination of fast policy hill climbing (PHC) and fuzzy state aggregation (FSA) function approximation is tested in two stochastic environments: Tileworld and the RoboCup robot soccer domain. The Tileworld results demonstrate that a single agent using the combination of FSA and PHC learns quicker and performs better than combined fuzzy state aggregation and Q-learning alone. Results from the RoboCup domain again illustrate that the policy hill climbing algorithms perform better than Q-learning alone in a multi-agent environment. The learning is further enhanced by allowing the agents to share their experience through a weighted strategy sharing scheme.
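    For readers unfamiliar with policy hill climbing, the sketch below gives a plain tabular WoLF-PHC update in the style of Bowling and Veloso's algorithm; the fuzzy state aggregation that the abstract combines it with is omitted, and all hyperparameter values are illustrative.

```python
# Tabular WoLF policy hill climbing (sketch; hyperparameters illustrative).
# Core idea: Q-learning plus a stochastic policy nudged toward the greedy
# action, with a small step when "winning" and a larger one when "losing".
import numpy as np

class WoLFPHC:
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95,
                 delta_win=0.01, delta_lose=0.04):
        self.Q = np.zeros((n_states, n_actions))
        self.pi = np.full((n_states, n_actions), 1.0 / n_actions)      # current policy
        self.pi_avg = np.full((n_states, n_actions), 1.0 / n_actions)  # running average policy
        self.visits = np.zeros(n_states)
        self.alpha, self.gamma = alpha, gamma
        self.delta_win, self.delta_lose = delta_win, delta_lose

    def act(self, s):
        return np.random.choice(self.Q.shape[1], p=self.pi[s])

    def update(self, s, a, r, s_next):
        # Ordinary Q-learning step.
        self.Q[s, a] += self.alpha * (r + self.gamma * self.Q[s_next].max() - self.Q[s, a])
        # Maintain the running average policy for state s.
        self.visits[s] += 1
        self.pi_avg[s] += (self.pi[s] - self.pi_avg[s]) / self.visits[s]
        # WoLF test: "winning" if the current policy outscores the average policy.
        winning = self.pi[s] @ self.Q[s] > self.pi_avg[s] @ self.Q[s]
        delta = self.delta_win if winning else self.delta_lose
        # Hill-climb: move probability mass from non-greedy actions to the greedy one.
        greedy, n_actions = self.Q[s].argmax(), self.Q.shape[1]
        for act in range(n_actions):
            if act != greedy:
                step = min(delta / (n_actions - 1), self.pi[s, act])
                self.pi[s, act] -= step
                self.pi[s, greedy] += step
```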

    Proceedings of the 2nd Computer Science Student Workshop: Microsoft Istanbul, Turkey, April 9, 2011
