
    Optimization of rules selection for robot soccer strategies

    Mobile embedded systems are among the typical applications of distributed real-time control. A robotic system is one example of such a mobile control system, and the design and realization of a distributed control system of this kind is a demanding and complex real-time control task. During robot soccer games, extensive data is accumulated, and reducing this data can yield an advantage in game strategy. The main topic of this article is a description of an efficient method for selecting rules from a strategy. The proposed algorithm is based on a geometric representation of the rules. The described problem and the proposed solution can also be applied to other areas that involve efficient searching of rules in structures representing real-world coordinates. Because the constructed strategy describes a real space and stores the physical coordinates of real objects, our method can be used for strategic planning in the real world wherever the geographical positions of objects are known.
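
    The rule-selection idea can be sketched compactly. The example below is a minimal, hypothetical illustration rather than the paper's exact algorithm: the Rule fields, the nearest-reference-point criterion, and all coordinates are assumptions. Each rule is keyed by a point on the pitch in real-world coordinates, and the rule whose reference point lies closest to the current ball position is selected.

        # Hypothetical sketch: pick the strategy rule whose reference point on the
        # pitch is nearest to the current ball position.
        from dataclasses import dataclass
        from math import hypot

        @dataclass
        class Rule:
            name: str
            ref_x: float   # reference x coordinate on the pitch (metres), assumed
            ref_y: float   # reference y coordinate on the pitch (metres), assumed
            action: str    # action to execute when this rule is selected

        def select_rule(rules: list[Rule], ball_x: float, ball_y: float) -> Rule:
            """Return the rule whose reference point is nearest to the ball."""
            return min(rules, key=lambda r: hypot(r.ref_x - ball_x, r.ref_y - ball_y))

        strategy = [
            Rule("defend_goal", -1.5, 0.0, "block"),
            Rule("midfield_press", 0.0, 0.0, "press"),
            Rule("attack_left", 1.2, 0.5, "shoot"),
        ]
        print(select_rule(strategy, 1.0, 0.4).name)   # -> attack_left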

    Study on Optimizing of Ball Passing Strategy and Role Switching Mechanism for Robot Soccer

    A new ball-passing strategy for robot soccer is proposed in this paper. With the introduction of a new ball-passing algorithm, the optimized strategy is shown to be more efficient and accurate when passing the ball. Role switching in multi-agent cooperation for robot soccer is then described using a Generalized Stochastic Petri Net (GSPN). Computer simulation results confirm the feasibility and efficiency of this Petri-net method.
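
    As a rough illustration of role switching (a plain distance-based heuristic, not the paper's GSPN formulation; robot identifiers and positions are hypothetical), the sketch below makes the robot nearest the ball the attacker, the next nearest the supporter, and the rest defenders.

        # Hypothetical distance-based role switching for a robot soccer team.
        from math import hypot

        def assign_roles(robots: dict[str, tuple[float, float]],
                         ball: tuple[float, float]) -> dict[str, str]:
            """Map each robot id to a role based on its distance to the ball."""
            ordered = sorted(robots, key=lambda rid: hypot(robots[rid][0] - ball[0],
                                                           robots[rid][1] - ball[1]))
            roles = {}
            for rank, rid in enumerate(ordered):
                roles[rid] = ("attacker" if rank == 0
                              else "supporter" if rank == 1 else "defender")
            return roles

        positions = {"r1": (0.2, 0.1), "r2": (-0.5, 0.3), "r3": (-1.0, -0.4)}
        print(assign_roles(positions, ball=(0.3, 0.0)))
        # -> {'r1': 'attacker', 'r2': 'supporter', 'r3': 'defender'}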

    Application of Fuzzy State Aggregation and Policy Hill Climbing to Multi-Agent Systems in Stochastic Environments

    Reinforcement learning is one of the more attractive machine learning technologies, due to its unsupervised learning structure and its ability to continue learning even as the operating environment changes. Applying this learning to multiple cooperative software agents (a multi-agent system) not only allows each individual agent to learn from its own experience, but also opens up the opportunity for the individual agents to learn from the other agents in the system, thus accelerating the rate of learning. This research presents the novel use of fuzzy state aggregation as the means of function approximation, combined with the policy hill climbing methods of Win or Lose Fast (WoLF) and policy-dynamics-based WoLF (PD-WoLF). The combination of fast policy hill climbing (PHC) and fuzzy state aggregation (FSA) function approximation is tested in two stochastic environments: Tileworld and the robot soccer domain, RoboCup. The Tileworld results demonstrate that a single agent using the combination of FSA and PHC learns more quickly and performs better than combined fuzzy state aggregation and Q-learning alone. Results from the RoboCup domain likewise illustrate that the policy hill climbing algorithms perform better than Q-learning alone in a multi-agent environment. The learning is further enhanced by allowing the agents to share their experience through weighted strategy sharing.
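
    The WoLF policy-hill-climbing update described above can be sketched in a few lines. The example below uses plain tabular states in place of the fuzzy state aggregation used in the research, and the learning rates and step sizes are illustrative assumptions.

        # Minimal WoLF-PHC sketch on discrete states (tabular stand-in for FSA).
        import random
        from collections import defaultdict

        class WoLFPHC:
            def __init__(self, n_actions, alpha=0.1, gamma=0.9,
                         delta_win=0.01, delta_lose=0.04):
                self.nA = n_actions
                self.alpha, self.gamma = alpha, gamma
                self.d_win, self.d_lose = delta_win, delta_lose
                self.Q = defaultdict(lambda: [0.0] * n_actions)                   # action values
                self.pi = defaultdict(lambda: [1.0 / n_actions] * n_actions)      # current policy
                self.avg_pi = defaultdict(lambda: [1.0 / n_actions] * n_actions)  # average policy
                self.C = defaultdict(int)                                         # state visit counts

            def act(self, s):
                return random.choices(range(self.nA), weights=self.pi[s])[0]

            def update(self, s, a, r, s_next):
                # 1. Q-learning update of the action value.
                self.Q[s][a] += self.alpha * (r + self.gamma * max(self.Q[s_next]) - self.Q[s][a])
                # 2. Incrementally update the average policy.
                self.C[s] += 1
                for i in range(self.nA):
                    self.avg_pi[s][i] += (self.pi[s][i] - self.avg_pi[s][i]) / self.C[s]
                # 3. Win or Lose Fast: small step when winning, large step when losing.
                winning = (sum(p * q for p, q in zip(self.pi[s], self.Q[s]))
                           > sum(p * q for p, q in zip(self.avg_pi[s], self.Q[s])))
                delta = self.d_win if winning else self.d_lose
                # 4. Hill-climb the policy toward the greedy action, then clip and
                #    renormalise (a simplified projection back onto the simplex).
                best = max(range(self.nA), key=lambda i: self.Q[s][i])
                for i in range(self.nA):
                    step = delta if i == best else -delta / (self.nA - 1)
                    self.pi[s][i] = min(1.0, max(0.0, self.pi[s][i] + step))
                total = sum(self.pi[s])
                self.pi[s] = [p / total for p in self.pi[s]]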

    Abstracting Multidimensional Concepts for Multilevel Decision Making in Multirobot Systems

    Multirobot control architectures often require robotic tasks to be well defined before allocation. In complex missions, it is often difficult to decompose an objective into a set of well-defined tasks; human operators instead generate a simplified representation based on experience and estimation. The result is a set of robot roles that are not ideally suited to accomplishing those objectives. This thesis presents an alternative approach to generating multirobot control algorithms using task abstraction. By carefully analysing data recorded from similar systems, a multidimensional and multilevel representation of the mission can be abstracted and subsequently converted into a robotic controller. This work, which focuses on the control of a team of robots playing the complex game of football, is divided into three sections. In the first section we investigate the use of spatial structures in team games. Experimental results show that cooperative teams beat groups of individuals when competing for space and that controlling space is important in the game of robot football. In the second section, we generate a multilevel representation of robot football based on spatial structures measured in recorded matches. By differentiating between spatial configurations appearing in desirable and undesirable situations, we can abstract a strategy composed of the more desirable structures. In the third section, five partial strategies are generated, based on the abstracted structures, and a suitable controller is devised. A set of experiments shows the success of the method in reproducing those key structures in a multirobot system. Finally, we compile our methods into a formal architecture for task abstraction and control. The thesis concludes that generating multirobot control algorithms using task abstraction is appropriate for problems which are complex, weakly defined, multilevel, dynamic, competitive, unpredictable, and which display emergent properties.
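
    As an illustration of the kind of spatial structure that could be measured from recorded match positions, the sketch below computes a team centroid, its spread, and the occupied bounding-box area for one frame. These particular measures, and the sample positions, are assumptions for illustration rather than the thesis's abstracted structures.

        # Hypothetical spatial descriptor for one frame of recorded team positions.
        from math import hypot

        def spatial_descriptor(team: list[tuple[float, float]]) -> dict[str, float]:
            cx = sum(x for x, _ in team) / len(team)    # centroid x
            cy = sum(y for _, y in team) / len(team)    # centroid y
            spread = sum(hypot(x - cx, y - cy) for x, y in team) / len(team)
            width = max(x for x, _ in team) - min(x for x, _ in team)
            depth = max(y for _, y in team) - min(y for _, y in team)
            return {"centroid_x": cx, "centroid_y": cy,
                    "spread": spread, "area": width * depth}

        frame = [(0.1, 0.2), (-0.4, 0.5), (0.6, -0.3), (-0.2, -0.6), (0.9, 0.1)]
        print(spatial_descriptor(frame))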