4 research outputs found

    Extending BDI plan selection to incorporate learning from experience

    An important drawback of the popular Belief, Desire, and Intentions (BDI) paradigm is that such systems include no element of learning from experience. We describe a novel BDI execution framework that models context conditions as decision trees rather than boolean formulae, allowing agents to learn the probability of success for plans from experience. By using a probabilistic plan selection function, agents can balance exploration and exploitation of their plans. We extend earlier work to cover both parameterised goals and recursion, and we modify our previous approach to decision-tree confidence to handle the large and even non-finite domains that arise from these extensions. Our evaluation on a pre-existing program that relies heavily on recursion and parameterised goals confirms previous results that naive learning fails in some circumstances, and demonstrates that the improved approach learns relatively well.
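
    The following is a minimal sketch, not the paper's implementation, of the idea this abstract describes: each plan's context condition is a decision tree trained on past execution outcomes, and a roulette-wheel rule turns the trees' success estimates into probabilistic plan selection that balances exploration and exploitation. The class names, the scikit-learn classifier, the optimistic 0.5 prior, and the 0.05 exploration floor are all illustrative assumptions.

```python
import random
from sklearn.tree import DecisionTreeClassifier

class LearningPlan:
    """One BDI plan whose context condition is a learned decision tree."""
    def __init__(self, name):
        self.name = name
        self.states = []      # feature vectors describing the world state
        self.outcomes = []    # 1 = plan succeeded, 0 = plan failed
        self.tree = DecisionTreeClassifier(max_depth=5)

    def record(self, state, succeeded):
        """Store one execution outcome and refit the tree."""
        self.states.append(state)
        self.outcomes.append(int(succeeded))
        self.tree.fit(self.states, self.outcomes)

    def p_success(self, state):
        """Estimated probability that this plan succeeds in `state`."""
        if len(set(self.outcomes)) < 2:
            return 0.5        # optimistic prior until both outcomes are seen
        proba = self.tree.predict_proba([state])[0]
        return proba[list(self.tree.classes_).index(1)]

def select_plan(applicable, state):
    """Roulette-wheel selection over estimated success probabilities,
    with a small floor so no applicable plan is ever starved."""
    weights = [max(0.05, p.p_success(state)) for p in applicable]
    return random.choices(applicable, weights=weights, k=1)[0]
```

    For instance, calling plan.record([x, y], succeeded) after each execution gradually shifts selection toward plans that tend to succeed in similar states, while the floor keeps under-explored plans in play.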

    Agent programming in the cognitive era

    It is claimed that, in the nascent ‘Cognitive Era’, intelligent systems will be trained using machine learning techniques rather than programmed by software developers. A contrary point of view argues that machine learning has limitations and, taken in isolation, cannot form the basis of autonomous systems capable of intelligent behaviour in complex environments. In this paper, we explore the contributions that agent-oriented programming can make to the development of future intelligent systems. We briefly review the state of the art in agent programming, focussing particularly on BDI-based agent programming languages, and discuss previous work on integrating AI techniques (including machine learning) into agent-oriented programming. We argue that the unique strengths of BDI agent languages provide an ideal framework for integrating the wide range of AI capabilities necessary for progress towards the next generation of intelligent systems. We identify a range of possible approaches to integrating AI into a BDI agent architecture. Some of these approaches, e.g., ‘AI as a service’, exploit immediate synergies between rapidly maturing AI techniques and agent programming, while others, e.g., ‘AI embedded into agents’, raise more fundamental research questions, and we sketch a programme of research directed towards identifying the most appropriate ways of integrating AI capabilities into agent programs.
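
    To make the ‘AI as a service’ integration style concrete, here is a minimal sketch, under assumptions of my own rather than anything specified in the paper: a BDI-style plan whose context condition delegates to an external ML model instead of a hand-written boolean formula. The service function, belief keys, and plan bodies are all hypothetical; in practice the stub might be an HTTP call to a hosted perception model.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

def obstacle_detector_service(beliefs: Dict) -> str:
    """Stand-in for a remote AI service (hypothetical)."""
    return "obstacle" if beliefs.get("lidar_min", 99.0) < 1.0 else "clear"

@dataclass
class Plan:
    goal: str
    context: Callable[[Dict], bool]   # may consult learned models or services
    body: Callable[[Dict], None]

plans: List[Plan] = [
    Plan("move", lambda b: obstacle_detector_service(b) == "clear",
         lambda b: print("advancing")),
    Plan("move", lambda b: obstacle_detector_service(b) == "obstacle",
         lambda b: print("replanning around obstacle")),
]

def pursue(goal: str, beliefs: Dict) -> None:
    """Classic BDI plan selection: run the first applicable plan."""
    for plan in plans:
        if plan.goal == goal and plan.context(beliefs):
            plan.body(beliefs)
            return
    print("no applicable plan for", goal)

pursue("move", {"lidar_min": 0.4})    # -> replanning around obstacle
```

    The point of the design is that the agent's deliberation loop is unchanged: the AI capability sits behind an ordinary context condition, which is what makes this the low-friction end of the integration spectrum the paper describes.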

    Modelo de toma de decisiones en Agentes Inteligentes, mejorando el esquema BDI [A decision-making model for intelligent agents, improving the BDI scheme]

    This Master's thesis starts from the relationship between two theories: the Belief, Desires, Intentions (BDI) model proposed by Michael Bratman, and the classical theory of rationality with its weaknesses as characterized by John Searle. These propositions analyze human thought and seek to characterize the origins of rationality and decision making from a philosophical and psychological perspective. The BDI model has been widely applied in the implementation of intelligent computer systems and, in particular, serves as a model for computational intelligent agents. On this basis, the thesis sets out the relations between the BDI model and the classical theory of rationality, and examines how the latter can complement the architecture already defined for intelligent agents within their model of reasoning and decision making. The first part introduces the initial aims of the research; after this, the theoretical foundations of rationality in intelligent agents are laid out and the state of the art in the field is surveyed. A proposal extending the current model, named BDI-S, is then put forward, together with the work carried out to put it into practice. Finally, conclusions are presented.

    Learning plan selection for BDI agent systems

    Belief-Desire-Intention (BDI) is a popular agent-oriented programming approach for developing robust computer programs that operate in dynamic environments. These programs contain pre-programmed abstract procedures that capture domain know-how, and work by dynamically applying these procedures, or plans, to the different situations they encounter. Agent programs built using the BDI paradigm, however, do not traditionally learn, which becomes important if a deployed agent is to adapt to changing situations over time. Our vision is to allow the programming of agent systems that are capable of adjusting to ongoing changes in the environment’s dynamics in a robust and effective manner. To this end, in this thesis we develop a framework that programmers can use to build adaptable BDI agents that improve plan selection over time by learning from their experiences. These learning agents can dynamically adjust their choice of which plan to select in which situation, based on a growing understanding of what works and a sense of how reliable this understanding is. This reliability is given by a perceived measure of confidence that tries to capture how well-informed the agent’s most recent decisions were and how well it knows the most recent situations it has encountered. An important focus of this work is to make the approach practical: our framework allows learning to be integrated into BDI programs of reasonable complexity, including those that use recursion and failure-recovery mechanisms. We show the usability of the framework in two complete programs: an implementation of the Towers of Hanoi game, where recursive solutions must be learnt, and a modular battery-system controller, where the environment’s dynamics change in ways that may require many learning and relearning phases.
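
    A minimal sketch of how such a confidence measure might temper plan selection, using assumed formulations rather than the thesis's exact equations: each plan tracks an estimated success probability p and a confidence c in [0, 1], and the selection weight 0.5 + c·(p − 0.5) flattens toward uniform when confidence is low, so poorly understood plans keep being explored. The sliding window, the familiarity-based confidence, and the 0.05 floor are illustrative assumptions.

```python
import random

class AdaptivePlan:
    """Tracks a plan's success rate and a confidence in that estimate."""
    def __init__(self, name):
        self.name = name
        self.successes = 0
        self.attempts = 0
        self.recent_known = []   # 1 if a decision was taken in a familiar state

    def update(self, succeeded, state_seen_before):
        """Record one execution outcome and whether its state was familiar."""
        self.attempts += 1
        self.successes += int(succeeded)
        self.recent_known.append(int(state_seen_before))
        self.recent_known = self.recent_known[-10:]   # sliding window

    def weight(self):
        p = self.successes / self.attempts if self.attempts else 0.5
        # confidence: how well the agent knew the situations it recently decided in
        c = (sum(self.recent_known) / len(self.recent_known)
             if self.recent_known else 0.0)
        # low confidence flattens the weight toward 0.5, preserving exploration
        return max(0.05, 0.5 + c * (p - 0.5))

def choose(applicable):
    """Probabilistic selection among the applicable plans."""
    weights = [p.weight() for p in applicable]
    return random.choices(applicable, weights=weights, k=1)[0]
```

    This shape of rule matters for the recursive and relearning scenarios the abstract mentions: after the environment's dynamics shift, confidence drops as unfamiliar states appear, selection flattens, and the agent re-explores rather than clinging to stale estimates.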