Artificial Intelligence and Systems Theory: Applied to Cooperative Robots
This paper describes an approach to the design of a population of cooperative
robots based on concepts borrowed from Systems Theory and Artificial
Intelligence. The research has been developed under the SocRob project, carried
out by the Intelligent Systems Laboratory at the Institute for Systems and
Robotics - Instituto Superior Tecnico (ISR/IST) in Lisbon. The acronym of the
project stands both for "Society of Robots" and "Soccer Robots", the case study
where we are testing our population of robots. Designing soccer robots is a
very challenging problem, where the robots must act not only to shoot a ball
towards the goal, but also to detect and avoid static (walls, stopped robots)
and dynamic (moving robots) obstacles. Furthermore, they must cooperate to
defeat an opposing team. Our past and current research in soccer robotics
includes cooperative sensor fusion for world modeling, object recognition and
tracking, robot navigation, multi-robot distributed task planning and
coordination, including cooperative reinforcement learning in cooperative and
adversarial environments, and behavior-based architectures for real time task
execution of cooperating robot teams.
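The behavior-based architectures mentioned above can be illustrated with a minimal, hypothetical sketch (this is not the SocRob architecture itself): each behavior inspects the current percept and either proposes an action or abstains, and a fixed priority ordering arbitrates among them, as in subsumption-style designs. The `Percept` fields and behavior names here are invented for illustration.

```python
# Hypothetical sketch of priority-based behavior arbitration for a soccer
# robot (illustrative only; not the actual SocRob architecture).
from dataclasses import dataclass

@dataclass
class Percept:
    sees_ball: bool
    obstacle_ahead: bool

def avoid_obstacle(p):
    # Highest priority: steer away from static or dynamic obstacles.
    return "turn_left" if p.obstacle_ahead else None

def chase_ball(p):
    # Pursue the ball when it is visible and the path is clear.
    return "move_to_ball" if p.sees_ball else None

def search(p):
    # Default behavior, always applicable.
    return "spin"

# Behaviors ordered from highest to lowest priority.
BEHAVIORS = [avoid_obstacle, chase_ball, search]

def select_action(percept):
    # The first (highest-priority) behavior that proposes an action wins.
    for behavior in BEHAVIORS:
        action = behavior(percept)
        if action is not None:
            return action

print(select_action(Percept(sees_ball=True, obstacle_ahead=False)))
```

In a real team, each robot would run such an arbitration loop over a shared world model built by the cooperative sensor fusion the abstract describes.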
RAFCON: a Graphical Tool for Task Programming and Mission Control
There are many application fields for robotic systems including service
robotics, search and rescue missions, industry and space robotics. As the
scenarios in these areas grow more and more complex, there is a high demand for
powerful tools to efficiently program heterogeneous robotic systems. Therefore,
we created RAFCON, a graphical tool to develop robotic tasks and to be used for
mission control by remotely monitoring the execution of the tasks. To define
the tasks, we use state machines which support hierarchies and concurrency.
Together with a library concept, even complex scenarios can be handled
gracefully. RAFCON supports sophisticated debugging functionality and tightly
integrates error handling and recovery mechanisms. A GUI with a powerful state
machine editor makes intuitive, visual programming and fast prototyping
possible. We demonstrated the capabilities of our tool in the SpaceBotCamp
national robotic competition, in which our mobile robot solved all exploration
and assembly challenges fully autonomously. It is therefore also a promising
tool for various RoboCup leagues.
Comment: 8 pages, 5 figures
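The hierarchical state machines at the core of the tool can be sketched in a few lines (a toy model in the spirit of RAFCON's task description, not the RAFCON API): execution states do the actual work and return an outcome, while a hierarchy state runs a nested machine of child states and routes outcomes through a transition table. The mission, state names, and outcomes below are invented for illustration.

```python
# Toy hierarchical state machine (illustrative only; not the RAFCON API).
class ExecutionState:
    """A leaf state that performs an action and returns an outcome string."""
    def __init__(self, name, action):
        self.name, self.action = name, action

    def run(self, data):
        return self.action(data)

class HierarchyState:
    """A state containing a nested machine of child states."""
    def __init__(self, name, children, transitions, start):
        self.name = name
        self.children = children        # state name -> state object
        self.transitions = transitions  # (state name, outcome) -> next state or "done"
        self.start = start

    def run(self, data):
        current = self.start
        while True:
            outcome = self.children[current].run(data)
            nxt = self.transitions[(current, outcome)]
            if nxt == "done":
                return outcome
            current = nxt

# Hypothetical mission loosely modeled on SpaceBotCamp: explore, then assemble.
log = []
explore = ExecutionState("explore", lambda d: log.append("explored") or "success")
assemble = ExecutionState("assemble", lambda d: log.append("assembled") or "success")
mission = HierarchyState(
    "mission",
    {"explore": explore, "assemble": assemble},
    {("explore", "success"): "assemble", ("assemble", "success"): "done"},
    start="explore",
)
print(mission.run({}))
```

RAFCON additionally offers concurrency states, reusable state libraries, and GUI-based debugging on top of this basic hierarchy idea.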
Application of Fuzzy State Aggregation and Policy Hill Climbing to Multi-Agent Systems in Stochastic Environments
Reinforcement learning is one of the more attractive machine learning technologies, due to its unsupervised learning structure and its ability to continually adapt even as the operating environment changes. Applying this learning to multiple cooperative software agents (a multi-agent system) not only allows each individual agent to learn from its own experience, but also opens up the opportunity for the individual agents to learn from the other agents in the system, thus accelerating the rate of learning. This research presents the novel use of fuzzy state aggregation, as the means of function approximation, combined with the policy hill climbing methods of Win or Lose Fast (WoLF) and policy-dynamics-based WoLF (PD-WoLF). The combination of fast policy hill climbing (PHC) and fuzzy state aggregation (FSA) function approximation is tested in two stochastic environments: Tileworld and the robot soccer domain, RoboCup. The Tileworld results demonstrate that a single agent using the combination of FSA and PHC learns more quickly and performs better than one using fuzzy state aggregation with Q-learning alone. Results from the RoboCup domain again illustrate that the policy hill climbing algorithms perform better than Q-learning alone in a multi-agent environment. The learning is further enhanced by allowing the agents to share their experience through weighted strategy sharing.
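The WoLF ("Win or Learn Fast") idea in policy hill climbing can be sketched on a toy single-state task: the policy is nudged toward the greedy action, with a small step when the agent is "winning" (its current policy outperforms its running average policy) and a larger step when losing. All constants, the toy reward, and the epsilon-greedy exploration below are illustrative assumptions, not parameters from the paper, and the sketch omits the fuzzy state aggregation used there for function approximation.

```python
# Hedged sketch of WoLF policy hill climbing (WoLF-PHC) on a toy one-state
# task; constants and the reward function are invented for illustration.
import random

ACTIONS = [0, 1]
ALPHA, GAMMA = 0.1, 0.9
DELTA_WIN, DELTA_LOSE = 0.01, 0.04  # learn slowly when winning, fast when losing
EPSILON = 0.1                       # exploration rate (assumption)

Q = {a: 0.0 for a in ACTIONS}
pi = {a: 1.0 / len(ACTIONS) for a in ACTIONS}  # current policy
avg_pi = dict(pi)                              # running average policy
count = 0

def reward(action):
    # Toy task: action 1 is the better choice.
    return 1.0 if action == 1 else 0.2

random.seed(0)
for _ in range(2000):
    # Sample an action from the policy, with a little uniform exploration.
    if random.random() < EPSILON:
        a = random.choice(ACTIONS)
    else:
        a = random.choices(ACTIONS, weights=[pi[x] for x in ACTIONS])[0]
    r = reward(a)
    # Q-learning update (single state, so the bootstrap term is max over Q).
    Q[a] += ALPHA * (r + GAMMA * max(Q.values()) - Q[a])
    # Maintain the running average policy.
    count += 1
    for x in ACTIONS:
        avg_pi[x] += (pi[x] - avg_pi[x]) / count
    # WoLF criterion: winning if the current policy's expected value beats
    # the average policy's expected value.
    winning = sum(pi[x] * Q[x] for x in ACTIONS) >= sum(avg_pi[x] * Q[x] for x in ACTIONS)
    delta = DELTA_WIN if winning else DELTA_LOSE
    # Hill-climb: shift probability mass toward the greedy action.
    best = max(ACTIONS, key=Q.get)
    for x in ACTIONS:
        pi[x] += delta if x == best else -delta / (len(ACTIONS) - 1)
        pi[x] = min(1.0, max(0.0, pi[x]))
    s = sum(pi.values())  # renormalise so pi stays a distribution
    for x in ACTIONS:
        pi[x] /= s

print({a: round(pi[a], 2) for a in ACTIONS})
```

The variable learning rate is what distinguishes WoLF-PHC from plain PHC: cautious steps while ahead keep the policy stable against adapting opponents, while large steps while behind speed recovery.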