1,455 research outputs found

    Evolutionary Robotics


    Aggregate Selection in Evolutionary Robotics

    Can the processes of natural evolution be mimicked to create robots or autonomous agents? This question embodies the most fundamental goals of evolutionary robotics (ER). ER is a field of research that explores the use of artificial evolution and evolutionary computing to learn control in autonomous robots, and in autonomous agents in general. In a typical ER experiment, robots, or more precisely their control systems, are evolved to perform a given task in which they must interact dynamically with their environment. Controllers compete in the environment and are selected and propagated based on their ability (or fitness) to perform the desired task. A key component of this process is the manner in which the fitness of the evolving controllers is measured. In ER, fitness is measured by a fitness function or objective function, which applies given criteria to determine which robots or agents are better at performing the task for which they are being evolved. Fitness functions can introduce varying levels of a priori knowledge into evolving populations.
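
    The select-and-propagate loop this abstract describes can be made concrete with a minimal sketch. The Python below evolves controller parameters against a fitness function; the population size, mutation scheme, and dummy objective are assumptions for illustration, and a real ER experiment would score each genome by running the encoded controller in a robot simulation.

    import random

    POP_SIZE, N_GENERATIONS, MUT_STD = 20, 50, 0.1

    def fitness(genome):
        # Stand-in objective function: a real ER experiment would run the
        # controller encoded by `genome` in simulation and score task
        # performance. Here we use a dummy criterion for illustration.
        return -sum(g * g for g in genome)

    def mutate(genome):
        # Gaussian perturbation of the controller parameters.
        return [g + random.gauss(0.0, MUT_STD) for g in genome]

    population = [[random.uniform(-1, 1) for _ in range(8)]
                  for _ in range(POP_SIZE)]
    for gen in range(N_GENERATIONS):
        # Select the fitter half and propagate it with variation.
        population.sort(key=fitness, reverse=True)
        survivors = population[: POP_SIZE // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in survivors]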

    Neuroevolution in Games: State of the Art and Open Challenges

    This paper surveys research on applying neuroevolution (NE) to games. In neuroevolution, artificial neural networks are trained through evolutionary algorithms, taking inspiration from the way biological brains evolved. We analyse the application of NE in games along five axes: the role NE is chosen to play in a game, the types of neural networks used, the way these networks are evolved, how fitness is determined, and what type of input the network receives. The article also highlights important open research challenges in the field.
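
    As a concrete illustration of the idea, the sketch below evolves the weights of a small fixed-topology network with a simple (1+1) evolution strategy. The network dimensions and the placeholder fitness function are assumptions; a real setup would obtain fitness by letting the network play the game.

    import math, random

    N_IN, N_HID, N_OUT = 4, 6, 2  # assumed toy dimensions

    def make_weights():
        return [random.gauss(0, 0.5)
                for _ in range(N_IN * N_HID + N_HID * N_OUT)]

    def forward(w, x):
        # Two-layer tanh network; weights stored as a flat list.
        hid = [math.tanh(sum(x[i] * w[i * N_HID + j] for i in range(N_IN)))
               for j in range(N_HID)]
        off = N_IN * N_HID
        return [sum(hid[j] * w[off + j * N_OUT + k] for j in range(N_HID))
                for k in range(N_OUT)]

    def play_game(w):
        # Placeholder fitness: a real experiment would return the game
        # score achieved by a controller calling forward(w, observation).
        return -sum(v * v for v in forward(w, [0.1, 0.2, 0.3, 0.4]))

    best = make_weights()
    for _ in range(200):
        child = [g + random.gauss(0, 0.05) for g in best]
        if play_game(child) >= play_game(best):  # keep the child if no worse
            best = child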

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy-logic-based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is key to any product's acceptance; computer applications and video games offer a unique opportunity to tailor the environment to each user's needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed: we show that a software-only method can estimate user emotion.
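
    To make the fuzzy-inference idea concrete, the sketch below fuzzifies two hypothetical gameplay statistics and combines two toy rules into a satisfaction estimate. The input variables, membership ranges, and rules are illustrative assumptions, not the FLAME model itself.

    def tri(x, a, b, c):
        # Triangular membership function peaking at b.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def estimate_satisfaction(deaths_per_min, kills_per_min):
        # Fuzzify the inputs (ranges assumed for illustration).
        frustration = tri(deaths_per_min, 0.5, 2.0, 4.0)
        excitement = tri(kills_per_min, 0.5, 2.0, 4.0)
        # Two toy rules, combined by weighted average (centroid-style
        # defuzzification):
        #   IF excitement is high THEN satisfaction is high (1.0)
        #   IF frustration is high THEN satisfaction is low  (0.0)
        weights = [excitement, frustration]
        values = [1.0, 0.0]
        total = sum(weights)
        return 0.5 if total == 0 else \
            sum(w * v for w, v in zip(weights, values)) / total

    print(estimate_satisfaction(deaths_per_min=1.0, kills_per_min=3.0))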

    Pac-Man Conquers Academia: Two Decades of Research Using a Classic Arcade Game


    Neuro-Evolution for Emergent Specialization in Collective Behavior Systems

    Eiben, A.E. [Promotor]; Schut, M.C. [Copromotor]

    Aprendizagem automática de comportamentos para futebol robótico (Automatic learning of behaviors for robotic soccer)

    Mestrado em Engenharia de Computadores e Telemática (Master's in Computer and Telematics Engineering)

    While developing an intelligent agent, one needs to create a set of behaviors, more or less complex, so that the agent can choose the one it believes appropriate at each instant. Simple behaviors can easily be developed by hand, but as we try to create more complex ones this becomes impracticable. This complexity may arise, for example, when the state space, the action space and/or time take continuous values. This is the case in robotic soccer, where the robots move in a continuous space, at continuous velocities and in continuous time. Reinforcement learning enables the agent to learn behaviors by itself through experiencing and interacting with the world. This technique is based on a mechanism that occurs in nature, since it mimics the way animals learn: observing the state of the world, taking an action, and then observing the consequences of that action. In the long run, and based on those consequences, the animal learns whether, in the given circumstances, the sequence of actions that led it to that state is good and should be repeated. For the agent to learn in this way, it must perceive the long-term value of its actions; to that end, it is given a reward or a punishment for taking a desired or undesired action, respectively. Learned behaviors can be used in cases where writing them by hand is impracticable, or to create behaviors that perform better than hand-coded ones, since the agent can derive complex functions that better describe a solution to the given problem. In this thesis, three behaviors were developed in the context of the CAMBADA robotic soccer team of the University of Aveiro. The first and simplest behavior made the robot rotate about itself until it faced a given absolute orientation. The second allowed a robot in possession of the ball to dribble it in a desired direction. Lastly, the third behavior allowed the robot to learn to adjust its position to receive a ball, which may arrive at high or low speed and off-centre with respect to the receiver. Comparisons with the existing hand-coded CAMBADA behaviors show that the learned behaviors can be more efficient and obtain better results than the explicitly programmed ones.
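
    The reward/punishment loop described above can be illustrated with tabular Q-learning on a toy version of the first behavior (turning to a target heading). The discretisation, reward shaping, and parameters below are assumptions for illustration and are unrelated to the actual CAMBADA implementation.

    import random
    from collections import defaultdict

    ACTIONS = [-1, 0, 1]          # rotate left, hold, rotate right
    ALPHA, GAMMA, EPS = 0.3, 0.9, 0.1
    Q = defaultdict(float)        # Q-values, keyed by (state, action)

    def discretise(err):
        # Bucket the continuous heading error into coarse states.
        return max(-5, min(5, int(err)))

    for episode in range(500):
        err = random.uniform(-5, 5)   # heading error to the target
        for _ in range(50):
            s = discretise(err)
            if random.random() < EPS:
                a = random.choice(ACTIONS)                      # explore
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])   # exploit
            err += a * 0.5            # the action turns the robot
            # Reward near the target orientation, punish large errors.
            r = 1.0 if abs(err) < 0.5 else -abs(err) * 0.1
            s2 = discretise(err)
            best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
            Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])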