14 research outputs found

    On integrating Theory of Mind in context-aware negotiation agents

    Theory of Mind (ToM) is the ability of an agent to represent the mental states of other agents, including their intentions, desires, goals, models, and beliefs, how the environment impacts those beliefs, and the beliefs those agents may hold about the beliefs others have about them. Integrating artificial ToM into automated negotiation can give software agents a key competitive advantage. In this work, we propose integrating ToM into context-aware negotiation agents, using Bayesian inference to update each agent's beliefs. These beliefs concern the necessity and risk of the opponent, under hypotheses about how it takes contextual variables into account. A systematic, hierarchical approach is proposed for combining ToM with evidence drawn from the opponent's actions in an unfolding negotiation episode. Alternative contextual scenarios are used to argue in favor of incorporating different levels of reasoning and modeling the strategic behavior of an opponent.
    Sociedad Argentina de Informática e Investigación Operativa
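    The abstract does not spell out the update rule, but the belief revision it describes is a standard Bayesian posterior over competing hypotheses about the opponent's sensitivity to context. The sketch below is illustrative only, assuming a discrete hypothesis set and a hand-written likelihood for observed concessions; none of the names (ContextHypothesis, update_beliefs, the example likelihoods) come from the paper.

```python
# Minimal sketch (not from the paper): Bayesian update of beliefs over
# hypotheses about how an opponent weighs a contextual variable.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class ContextHypothesis:
    """One hypothesis about the opponent's sensitivity to context."""
    name: str
    # Likelihood of observing a concession of a given size under this
    # hypothesis, given the current value of the contextual variable.
    likelihood: Callable[[float, float], float]


def update_beliefs(prior: Dict[str, float],
                   hypotheses: Dict[str, ContextHypothesis],
                   concession: float,
                   context_value: float) -> Dict[str, float]:
    """Posterior over hypotheses after observing one opponent concession."""
    unnormalized = {
        name: prior[name] * hyp.likelihood(concession, context_value)
        for name, hyp in hypotheses.items()
    }
    total = sum(unnormalized.values()) or 1.0
    return {name: p / total for name, p in unnormalized.items()}


# Example: two competing hypotheses about a context-sensitive opponent.
hyps = {
    "sensitive": ContextHypothesis(
        "sensitive", lambda c, ctx: 0.8 if c > 0.1 * ctx else 0.2),
    "insensitive": ContextHypothesis(
        "insensitive", lambda c, ctx: 0.5),
}
beliefs = {"sensitive": 0.5, "insensitive": 0.5}
beliefs = update_beliefs(beliefs, hyps, concession=0.15, context_value=0.9)
```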

    ToM-Dyna-Q: on the integration of reinforcement learning and machine Theory of Mind

    The capacity to understand others, or to reason about others' ways of reasoning about others (including us), is fundamental for an agent to survive in an uncertain multi-agent environment. This reasoning ability, commonly known as Theory of Mind (ToM), is instrumental for making effective predictions about others' future actions and for learning from both real and simulated experience. In this work, a novel architecture for model-based reinforcement learning in a multi-agent setting is proposed. The proposed architecture, called ToM-Dyna-Q, integrates ToM simulation with the well-known Dyna-Q architecture to account for artificial cognition in a shared environment inhabited by multiple interacting agents. Results obtained for the two-player competitive game of Tic-Tac-Toe demonstrate the importance, for a given agent, of learning, reasoning, and planning based on mental simulation of other agents' goals, beliefs, and intentions.
    XIX Workshop Agentes y Sistemas Inteligentes (WASI), Red de Universidades con Carreras en Informática (RedUNCI)
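    Dyna-Q itself is standard model-based reinforcement learning: learn a Q-function from real experience, learn a transition model, and keep refining the Q-function from model-simulated experience. How the paper wires the ToM module into that loop is not given in the abstract; the following is only a sketch of one plausible coupling, in which planning rollouts ask an opponent model for the imagined reply. The whole interface (env.my_move, opponent_model.simulate_reply, and so on) is hypothetical.

```python
import random
from collections import defaultdict


def tom_dyna_q(env, opponent_model, episodes=1000,
               alpha=0.1, gamma=0.95, epsilon=0.1, planning_steps=10):
    """Dyna-Q where planning rollouts use a ToM model of the opponent.

    Assumed (illustrative) interface:
      env.reset() -> state
      env.actions(state) -> list of legal actions
      env.my_move(state, action) -> (afterstate, reward, done)
      env.opponent_reply(afterstate) -> (next_state, opp_reward, done)
      opponent_model.observe(afterstate, reply)   # learn from real replies
      opponent_model.simulate_reply(afterstate)   # mental simulation
    """
    Q = defaultdict(float)   # (state, action) -> value
    model = {}               # (state, action) -> (reward, afterstate), our move only

    def choose(s):
        acts = env.actions(s)
        if random.random() < epsilon:
            return random.choice(acts)
        return max(acts, key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            action = choose(state)
            afterstate, reward, done = env.my_move(state, action)
            model[(state, action)] = (reward, afterstate)

            if done:
                next_state = afterstate
            else:
                # Real opponent acts; the ToM model learns from the reply.
                next_state, opp_reward, done = env.opponent_reply(afterstate)
                opponent_model.observe(afterstate, next_state)
                reward += opp_reward

            # Direct Q-learning update from real experience.
            best = max((Q[(next_state, a)] for a in env.actions(next_state)),
                       default=0.0)
            Q[(state, action)] += alpha * (reward + gamma * best - Q[(state, action)])

            # Planning: replay stored moves, with the opponent's reply supplied
            # by mental simulation (simulated opponent reward ignored here).
            for _ in range(planning_steps):
                (s, a), (r, after) = random.choice(list(model.items()))
                s2 = opponent_model.simulate_reply(after)
                b = max((Q[(s2, x)] for x in env.actions(s2)), default=0.0)
                Q[(s, a)] += alpha * (r + gamma * b - Q[(s, a)])

            state = next_state
    return Q
```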

    The importance of context-dependent learning in negotiation agents

    Automated negotiation between artificial agents is essential to the deployment of Cognitive Computing and the Internet of Things. The behavior of a negotiation agent depends significantly on environmental conditions or contextual variables, since these affect not only a given agent's preferences and strategies but also those of other agents. Despite this, the existing literature on automated negotiation says little about how to properly account for the effect of context-relevant variables when learning and evolving strategies. In this paper, a novel context-driven representation for automated negotiation is introduced. In addition, a simple negotiation agent is proposed that queries available information from its environment, internally models contextual variables, and learns how to take advantage of this knowledge by playing against itself using reinforcement learning. Through a set of episodes against other negotiation agents from the existing literature, it is shown that it makes no sense to negotiate without taking context-relevant variables into account. The context-aware negotiation agent has been implemented in the GENIUS negotiation environment, and the results obtained are significant and revealing.
    Sociedad Argentina de Informática e Investigación Operativa
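    The abstract does not describe the context-driven representation concretely. One minimal reading is that the contextual variables are folded directly into the state the learner conditions on, after which ordinary tabular reinforcement learning applies; the sketch below illustrates that reading in Python, even though the actual agent was built inside GENIUS. The contextual variables, bucket sizes, and helper names are invented for illustration.

```python
import random
from collections import defaultdict

# Illustrative contextual variables (not from the paper): deadline pressure
# and an external market-price indicator, both assumed normalized to [0, 1].


def encode_state(round_no, last_opponent_bid, context):
    """Fold contextual variables into the state the learner conditions on."""
    time_bucket = min(round_no // 5, 3)               # coarse deadline pressure
    price_bucket = int(context["market_price"] * 4)   # discretized market signal
    bid_bucket = int(last_opponent_bid * 10)          # opponent's last offer
    return (time_bucket, price_bucket, bid_bucket)


# Tabular Q-learning over the context-augmented state. In self-play both
# sides can share this table, which is one simple way to read "playing
# against itself" in the abstract.
Q = defaultdict(float)


def choose_action(state, actions, epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])


def td_update(state, action, reward, next_state, next_actions,
              alpha=0.1, gamma=0.95):
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```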
