Self-motivated agents that learn

Abstract

Master's thesis in Informatics, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2013.

We propose an architecture for creating agents that learn how to act autonomously from their interactions with the environment. Predefined solutions such as manually specified behaviors, goals, or rewards are avoided in order to maximize autonomous adaptation to unforeseen conditions. We use internal needs to motivate agents to act in an attempt to fulfil them. As a consequence of their interactions with the environment, agents make observations, which are used to formulate hypotheses and discover the rules that govern the relationship between the agents' actions and their consequences. These rules are then used as criteria in the decision-making process. Thus, agents' behaviors depend on previous interactions and evolve with experience. We started by proposing a single-agent architecture and created simple agents defined by sensors, needs, and actuators. These agents adapted autonomously to the environment by discovering behaviors that fulfilled their needs. However, the single-agent approach neither scaled well nor allowed the satisfaction of multiple needs simultaneously. To address these shortcomings, we propose a multiagent architecture, which solves the scalability problem found in the single-agent approach and offers the capacity to fulfil several needs simultaneously.
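The cycle described above (act to fulfil a need, observe the consequence, learn a rule linking action to consequence, and reuse the rules when deciding) can be sketched in a few lines of Python. All names here are illustrative assumptions for the sake of the example, not the thesis's actual implementation:

```python
import random


class NeedDrivenAgent:
    """Minimal sketch of a need-motivated agent that learns action rules
    from experience (illustrative only, not the thesis architecture)."""

    def __init__(self, actions):
        self.actions = actions
        # learned rules: action -> running estimate of how much the
        # action changed need satisfaction when it was tried
        self.rules = {}

    def choose_action(self, explore_rate=0.1):
        # prefer the action whose learned rule predicts the largest need
        # gain; occasionally explore so new rules can still be discovered
        if self.rules and random.random() > explore_rate:
            return max(self.rules, key=self.rules.get)
        return random.choice(self.actions)

    def observe(self, action, need_change):
        # refine the rule for this action from the observed consequence
        prev = self.rules.get(action, 0.0)
        self.rules[action] = 0.5 * prev + 0.5 * need_change


# toy environment: only "eat" satisfies the (single) hunger need
def environment(action):
    return 1.0 if action == "eat" else -0.1


agent = NeedDrivenAgent(["eat", "walk", "sleep"])

# sample every action once so each has an initial rule
for a in agent.actions:
    agent.observe(a, environment(a))

# further interaction refines the rules; "eat" dominates the estimates
for _ in range(100):
    a = agent.choose_action()
    agent.observe(a, environment(a))
```

Because behavior emerges from the learned rules rather than a hand-written policy, swapping in a different environment changes which action the agent settles on, without touching the agent code; this is the adaptability the abstract argues for.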