USING COEVOLUTION IN COMPLEX DOMAINS
Genetic Algorithms are a computational model inspired by Darwin's theory of evolution, with a broad range of applications from function optimization to robotic control problems. Coevolution is an extension of Genetic Algorithms in which more than one population is evolved at the same time. Coevolution can be done in two ways: cooperatively, in which populations jointly try to solve an evolutionary problem, or competitively. Coevolution has been shown to be useful in solving many problems, yet its applicability in complex domains still needs to be demonstrated.

Robotic soccer is a complex domain with a dynamic and noisy environment. Many Reinforcement Learning techniques have been applied to the robotic soccer domain, since it is a great test bed for many machine learning methods. However, the success of Reinforcement Learning methods has been limited due to the huge state space of the domain. Evolutionary Algorithms have also been used to tackle this domain; nevertheless, their application has been limited to a small subset of the domain, and no attempt has been shown to solve the whole problem successfully.

This thesis tries to answer the question of whether coevolution can be applied successfully to complex domains. Three techniques are introduced to tackle the robotic soccer problem. First, an incremental learning algorithm is used to achieve desirable performance on some soccer tasks. Second, a hierarchical coevolution paradigm is introduced to allow coevolution to scale up in solving the problem. Third, an orchestration mechanism is utilized to manage the learning processes.
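The cooperative flavour of coevolution described above can be sketched in a few lines: two populations evolve side by side, and each individual is scored by pairing it with a representative of the other population. The toy objective, population sizes, and all function names below are illustrative assumptions, not the thesis's actual setup.

```python
import random

def joint_fitness(x, y):
    # Toy cooperative objective (an assumption): the pair should sum to 10.
    return -abs((x + y) - 10.0)

def evolve(pop, partner_rep, sigma=0.5):
    # Rank individuals against the partner population's representative,
    # keep the top half, and refill with mutated copies of the survivors.
    ranked = sorted(pop, key=lambda ind: joint_fitness(ind, partner_rep),
                    reverse=True)
    parents = ranked[: len(pop) // 2]
    children = [p + random.gauss(0, sigma) for p in parents]
    return parents + children

random.seed(0)
pop_a = [random.uniform(-5, 5) for _ in range(20)]
pop_b = [random.uniform(-5, 5) for _ in range(20)]
rep_a, rep_b = pop_a[0], pop_b[0]

for _ in range(100):
    pop_a = evolve(pop_a, rep_b)
    pop_b = evolve(pop_b, rep_a)
    # Each population's representative is its best pairing so far.
    rep_a = max(pop_a, key=lambda x: joint_fitness(x, rep_b))
    rep_b = max(pop_b, key=lambda y: joint_fitness(rep_a, y))
```

Because fitness is only defined for a pair, neither population can improve in isolation; the alternating evaluation is what makes the scheme coevolutionary rather than two independent genetic algorithms.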
Multilayered skill learning and movement coordination for autonomous robotic agents
With advances in technology expanding the capabilities of robots, while at the same time making robots cheaper to manufacture, robots are rapidly becoming more prevalent in both industrial and domestic settings. An increase in the number of robots, and the likely subsequent decrease in the ratio of people currently trained to directly control the robots, engenders a need for robots to be able to act autonomously. Larger numbers of robots present together provide new challenges and opportunities for developing complex autonomous robot behaviors capable of multirobot collaboration and coordination.
The focus of this thesis is twofold. The first part explores applying machine learning techniques to teach simulated humanoid robots skills such as how to walk and how to manipulate objects in their environment. Learning is performed using reinforcement learning policy search methods, and layered learning methodologies are employed during the learning process, in which multiple lower-level skills are incrementally learned and combined with each other to develop richer higher-level skills. By incrementally learning skills in layers, such that new skills are learned in the presence of previously learned skills rather than individually in isolation, we ensure that the learned skills work well together and can be combined to perform complex behaviors (e.g., playing soccer).

The second part of the thesis centers on developing algorithms to coordinate the movement and efforts of multiple robots working together to quickly complete tasks. These algorithms prioritize minimizing the makespan, or time for all robots to complete a task, while also attempting to avoid interference and collisions among the robots. An underlying objective of this research is to develop techniques and methodologies that allow autonomous robots to robustly interact with their environment (through skill learning) and with each other (through movement coordination) in order to perform tasks and accomplish goals asked of them.
The work in this thesis is implemented and evaluated in the RoboCup 3D simulation soccer domain, and has been a key component of the UT Austin Villa team winning the RoboCup 3D simulation league world championship six out of the past seven years.
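The layered learning idea of freezing a learned low-level skill before optimizing a higher-level one on top of it can be illustrated with a toy policy-search sketch. The one-dimensional "skills", the hill-climbing search, and both reward functions below are invented for illustration and stand in for the real walk and manipulation behaviors.

```python
import random

def hill_climb(reward, start=0.0, steps=200, sigma=0.3):
    # Simple stochastic policy search: keep a candidate parameter
    # whenever a Gaussian perturbation of it improves the reward.
    best, best_r = start, reward(start)
    for _ in range(steps):
        cand = best + random.gauss(0, sigma)
        r = reward(cand)
        if r > best_r:
            best, best_r = cand, r
    return best

random.seed(1)

# Layer 1: learn a walking parameter in isolation (toy optimum at 2.0).
walk = hill_climb(lambda w: -(w - 2.0) ** 2)

# Layer 2: learn a kicking parameter *in the presence of* the frozen
# walk skill, so the two skills are guaranteed to work together.
kick = hill_climb(lambda k: -(k - walk) ** 2)
```

The key point is that layer 2's reward depends on the frozen output of layer 1, mirroring how a kick learned on top of an existing walk cannot drift away from parameters the walk can actually deliver.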
Multiple mobile robots - Fuzzy behavior based architecture and behavior evolution
Virtual Reality Games for Motor Rehabilitation
This paper presents a fuzzy logic based method to track user satisfaction without the need for devices to monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to tailor the environment to each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature that suggests physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
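As a rough illustration of the kind of fuzzy inference such a model rests on (not the paper's actual FLAME rules), the sketch below fuzzifies two hypothetical gameplay signals and aggregates two rules into a single frustration estimate; every input, membership function, rule, and threshold here is an assumption.

```python
def tri(x, a, b, c):
    # Triangular membership function peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def frustration(deaths_per_min, score_ratio):
    # Fuzzify the (hypothetical) gameplay signals.
    many_deaths = tri(deaths_per_min, 1, 3, 5)
    few_deaths = max(0.0, 1.0 - deaths_per_min / 2.0)
    losing = max(0.0, 1.0 - score_ratio)              # ratio < 1: losing
    winning = max(0.0, min(1.0, score_ratio - 1.0))   # ratio > 1: winning
    # Two rules: min for AND within a rule.
    high = min(many_deaths, losing)   # dying a lot AND losing -> frustrated
    low = min(few_deaths, winning)    # few deaths AND winning  -> calm
    # Defuzzify to [0, 1] by weighting the two rule activations.
    total = high + low
    return 0.5 if total == 0 else high / total
```

Because every quantity is derived from in-game events, a pipeline of this shape needs no physiological sensors, which is the software-only premise the abstract argues for.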
Evolution of Robotic Behaviour Using Gene Expression Programming
The main objective in automatic robot controller development is to devise mechanisms
whereby robot controllers can be developed with less reliance on human developers. One
such mechanism is the use of evolutionary algorithms (EAs) to automatically develop
robot controllers and occasionally, robot morphology. This area of research is referred
to as evolutionary robotics (ER). Through the use of evolutionary techniques such as
genetic algorithms (GAs) and genetic programming (GP), ER has been shown to be a promising
approach through which robust robot controllers can be developed.
The standard ER techniques use monolithic evolution to evolve robot behaviour: monolithic
evolution involves the use of one chromosome to code for an entire target behaviour.
In complex problems, monolithic evolution has been shown to suffer from bootstrap problems;
that is, a lack of improvement in fitness due to randomness in the solution set
[103, 105, 100, 90]. Thus, approaches that divide the task into simpler sub-tasks, such that
the main behaviour emerges from the interaction of these sub-tasks with the robot
environment, have been devised. These techniques include the subsumption architecture in
behaviour-based robotics, incremental learning and, more recently, the layered learning approach
[55, 103, 56, 105, 136, 95]. These new techniques enable ER to develop complex controllers
for autonomous robots.

Work presented in this thesis extends the field of evolutionary robotics by introducing Gene
Expression Programming (GEP) to the ER field. GEP is a newly developed evolutionary
algorithm akin to GA and GP, which has shown great promise in optimisation problems.
The presented research shows through experimentation that the unique formulation of
GEP genes is sufficient for robot controller representation and development. The obtained
results show that GEP is a plausible technique for ER problems. Additionally, it is shown
that controllers evolved using the GEP algorithm are able to adapt when introduced to new
environments.
Further, the capabilities of GEP chromosomes to code for more than one gene have been
utilised to show that GEP can be used to evolve manually sub-divided robot behaviours.
Additionally, this thesis extends the GEP algorithm by proposing two new evolutionary
techniques named multigenic GEP with Linker Evolution (mgGEP-LE) and multigenic
GEP with a Regulator Gene (mgGEP-RG). The results obtained from the proposed algorithms
show that the new techniques can be used to automatically evolve modularity
in robot behaviour. This ability to automate the process of behaviour sub-division and
optimisation in a modular chromosome is unique to the GEP formulations discussed, and
is an important advance in the development of machines that are able to evolve stratified
behavioural architectures with little human intervention.
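To make the GEP gene formulation concrete, the sketch below decodes a K-expression (the linear gene string GEP evolves) into an expression tree breadth-first and evaluates it. The arity table, the example genes, and the terminal environment are illustrative assumptions; the evolutionary loop, head/tail constraints, and the multigenic linkers the thesis proposes are omitted.

```python
# Arity of each function symbol; anything else is a terminal.
ARITY = {'+': 2, '-': 2, '*': 2, '/': 2}

def evaluate(kexpr, env):
    # Decode the K-expression level by level: each symbol takes its
    # children from the symbols that follow it, in breadth-first order,
    # so any surplus tail symbols are simply never attached.
    nodes = [{'sym': s, 'kids': []} for s in kexpr]
    i = 1  # index of the next unattached node
    queue = [nodes[0]]
    while queue:
        node = queue.pop(0)
        for _ in range(ARITY.get(node['sym'], 0)):
            child = nodes[i]
            i += 1
            node['kids'].append(child)
            queue.append(child)

    def ev(n):
        s = n['sym']
        if s not in ARITY:
            return env[s]  # terminal: look up its value
        l, r = ev(n['kids'][0]), ev(n['kids'][1])
        return {'+': l + r, '-': l - r, '*': l * r, '/': l / r}[s]

    return ev(nodes[0])

# "*+abcd" decodes to (b + c) * a; the trailing 'd' is unused tail.
result = evaluate("*+abcd", {'a': 2, 'b': 3, 'c': 4, 'd': 0})
```

This always-valid decoding is what distinguishes GEP chromosomes from GP trees: any mutation or crossover of the linear string still yields a syntactically legal program, which is why the representation lends itself to the modular, multigenic extensions described above.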
Mobile Robots
The objective of this book is to cover advances in mobile robotics and related technologies applied to the design and development of multi-robot systems. The design of a control system is a complex issue, requiring the application of information technologies to link the robots into a single network. The human-robot interface becomes a demanding task, especially when we try to use sophisticated methods for brain signal processing. Generated electrophysiological signals can be used to command different devices, such as cars, wheelchairs, or even video games. A number of developments in navigation and path planning, including parallel programming, can be observed. Cooperative path planning, formation control of multi-robot agents, and communication and distance measurement between agents are shown. Training mobile robot operators is also a very difficult task, owing to several factors related to the execution of different tasks. The presented improvement is related to environment model generation based on autonomous mobile robot observations.