    Efficient representation and effective reasoning for multi-agent systems

    In multi-agent systems, interactions between agents often involve cooperation or competition in the service of their tasks. Successful interaction often requires agents to share common, unified knowledge about their working environment. However, autonomous agents observe and judge their surroundings from their own point of view. Consequently, agents may hold partial and sometimes conflicting descriptions of the world. In scenarios where they have to coordinate, they must identify the knowledge shared by the group and be able to reason with the available information. This problem requires modelling and reasoning methods more sophisticated than classical logic and monotonic reasoning. We introduce a formal framework based on Defeasible Logic (DL) to describe the knowledge commonly shared by agents as well as knowledge obtained from other agents. This enables an agent to reason efficiently about the environment and the intentions of other agents given the available information. We propose to extend the reasoning mechanism of DL with superior knowledge, a mechanism that allows an agent to integrate its mental attitude with a more trustworthy source of information, such as the knowledge shared by the majority of other agents.
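
    The abstract does not spell out the framework, but the core defeasible-logic machinery it builds on can be sketched compactly. Below is a minimal, illustrative defeasible reasoner with a superiority relation; the facts, rule labels, and conflict-resolution scheme are invented and simplified (for instance, strict rules and defeaters are omitted), so this is a sketch of the general technique, not the authors' implementation.

        # A minimal, illustrative defeasible reasoner with a superiority
        # relation (hypothetical facts and rules; simplified: no strict
        # rules or defeaters). Not the authors' framework.

        facts = {"bird(tweety)", "penguin(tweety)"}

        # label: (antecedents, consequent); "~" marks negation.
        rules = {
            "r1": ({"bird(tweety)"}, "flies(tweety)"),
            "r2": ({"penguin(tweety)"}, "~flies(tweety)"),
        }

        # The superiority relation: r2 > r1, i.e. the more specific
        # rule defeats the more general one on conflict.
        superiority = {("r2", "r1")}

        def negate(lit):
            return lit[1:] if lit.startswith("~") else "~" + lit

        def defeasibly_provable(goal):
            """A goal holds if some applicable rule for it defeats every
            applicable rule for its negation."""
            pro = [n for n, (body, head) in rules.items()
                   if head == goal and body <= facts]
            con = [n for n, (body, head) in rules.items()
                   if head == negate(goal) and body <= facts]
            return any(all((p, c) in superiority for c in con) for p in pro)

        print(defeasibly_provable("flies(tweety)"))   # False: r2 blocks r1
        print(defeasibly_provable("~flies(tweety)"))  # True: r2 > r1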

    The Hanabi Challenge: A New Frontier for AI Research

    From the early days of computing, games have been important testbeds for studying how well machines can make sophisticated decisions. In recent years, machine learning has made dramatic advances, with artificial agents reaching superhuman performance in challenge domains like Go, Atari, and some variants of poker. As with chess, checkers, and backgammon before them, these game domains have driven research by providing sophisticated yet well-defined challenges for artificial intelligence practitioners. We continue this tradition by proposing the game of Hanabi as a new challenge domain, with novel problems that arise from its combination of purely cooperative gameplay among two to five players and imperfect information. In particular, we argue that Hanabi elevates reasoning about the beliefs and intentions of other agents to the foreground. We believe developing novel techniques for such theory of mind reasoning will be crucial for success not only in Hanabi but also in broader collaborative efforts, especially those with human partners. To facilitate future research, we introduce the open-source Hanabi Learning Environment, propose an experimental framework for the research community to evaluate algorithmic advances, and assess the performance of current state-of-the-art techniques. Comment: 32 pages, 5 figures, in press (Artificial Intelligence).
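
    The Hanabi Learning Environment is publicly available (github.com/deepmind/hanabi-learning-environment). As a hedged sketch of how an agent loop against it might look, here is a random agent using the repository's rl_env interface; the configuration name and observation field names follow that codebase but should be treated as assumptions here.

        # A random-agent loop against the Hanabi Learning Environment's
        # rl_env interface; field names follow the repository and should
        # be treated as assumptions.
        import random

        from hanabi_learning_environment import rl_env

        env = rl_env.make('Hanabi-Full', num_players=2)
        observations = env.reset()
        done = False
        while not done:
            current = observations['current_player']
            obs = observations['player_observations'][current]
            # Only the acting player has legal moves to choose from.
            action = random.choice(obs['legal_moves'])
            observations, reward, done, _ = env.step(action)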

    On integrating Theory of Mind in context-aware negotiation agents

    Theory of Mind (ToM) is the ability of an agent to represent the mental states of other agents, including their intentions, desires, goals, models, beliefs, how the environment affects those beliefs, and the beliefs those agents may have about the beliefs others hold about them. Integrating artificial ToM into automated negotiations can give software agents a key competitive advantage. In this work, we propose integrating ToM into context-aware negotiation agents, using Bayesian inference to update each agent's beliefs. These beliefs concern the opponent's necessity and risk, considering hypotheses about how it takes contextual variables into account. We propose a systematic hierarchical approach that combines ToM with evidence from the opponent's actions in an unfolding negotiation episode. Alternative contextual scenarios are used to argue in favor of incorporating different levels of reasoning and of modeling the strategic behavior of an opponent. Sociedad Argentina de Informática e Investigación Operativa.
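
    The abstract does not give the update equations, but the Bayesian belief update it refers to has a standard form. Below is a minimal sketch over a hypothetical set of opponent models; the hypothesis names and likelihood values are invented for illustration.

        # Bayes' rule over hypothetical opponent models: the posterior is
        # proportional to likelihood times prior. Hypotheses and
        # likelihood values are invented for illustration.
        prior = {"risk_averse": 1/3, "risk_neutral": 1/3, "risk_seeking": 1/3}

        # Assumed P(observed concession | hypothesis) for one observed
        # action in the negotiation episode.
        likelihood = {"risk_averse": 0.7, "risk_neutral": 0.4, "risk_seeking": 0.1}

        unnormalized = {h: likelihood[h] * p for h, p in prior.items()}
        z = sum(unnormalized.values())
        posterior = {h: p / z for h, p in unnormalized.items()}
        print(posterior)  # belief mass shifts toward "risk_averse"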

    Intention Recognition With ProbLog

    In many scenarios where robots or autonomous systems may be deployed, the capacity to infer and reason about the intentions of other agents can improve the performance or utility of the system. For example, a smart home or assisted living facility is better able to select assistive services to deploy if it understands the goals of the occupants in advance. In this article, we present a framework for reasoning about intentions using probabilistic logic programming. We employ ProbLog, a probabilistic extension of Prolog, to infer the most probable intention given observations of the agent's actions and sensor readings of important aspects of the environment. We evaluated our model on a domain modeling a smart home: it achieved 0.75 accuracy at full observability and was robust to reduced observability.
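
    To make the approach concrete, here is a toy smart-home intention model in ProbLog, queried through the problog Python package. The predicates, probabilities, and evidence are invented for illustration; this is a sketch of the general technique, not the authors' model.

        # A toy smart-home intention model in ProbLog, queried via the
        # problog Python package. Predicates and probabilities are
        # invented; this is not the authors' model.
        from problog import get_evaluatable
        from problog.program import PrologString

        model = PrologString("""
        0.6::intends(make_coffee); 0.4::intends(watch_tv).
        0.9::in_kitchen :- intends(make_coffee).
        0.8::kettle_on  :- intends(make_coffee).
        0.05::in_kitchen.
        0.05::kettle_on.
        evidence(in_kitchen, true).
        evidence(kettle_on, true).
        query(intends(make_coffee)).
        """)
        # Prints the posterior probability of the queried intention.
        print(get_evaluatable().create_from(model).evaluate())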

    Higher-order theory of mind is especially useful in unpredictable negotiations

    In social interactions, people often reason about the beliefs, goals, and intentions of others. This theory of mind allows them to interpret the behavior of others and predict how they will behave in the future. People can also use this ability recursively: they use higher-order theory of mind to reason about the theory of mind abilities of others, as in "he thinks that I don't know that he sent me an anonymous letter". Previous agent-based modeling research has shown that higher-order theory of mind reasoning can be useful across competitive, cooperative, and mixed-motive settings. In this paper, we cast a new light on these results by investigating how the predictability of the environment influences the effectiveness of higher-order theory of mind. We consider agent-based simulations of repeated one-shot negotiations in a particular negotiation setting known as Colored Trails. Our results show that the benefit of (higher-order) theory of mind reasoning depends strongly on the predictability of the environment. When this environment is highly predictable, agents obtain little benefit from theory of mind reasoning. However, when the environment has more observable features that change over time, agents without the ability to use theory of mind have more difficulty predicting the behavior of others accurately. This in turn allows theory of mind agents to obtain higher scores in these more dynamic environments. These results suggest that the human-specific ability for higher-order theory of mind reasoning may have evolved to allow us to survive in more complex and unpredictable environments.
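
    As a toy illustration of the predictability effect (invented opponents, not the Colored Trails setup itself), the sketch below compares a zero-order, frequency-based predictor against a stable and an unpredictable opponent. Against the stable opponent the zero-order model already predicts well, leaving little room for theory of mind to add value.

        # Toy illustration: a zero-order frequency predictor does well
        # against a stable opponent and drops to chance against an
        # unpredictable one, which is where theory of mind pays off.
        import random

        def zero_order_accuracy(opponent_moves):
            """Predict each move as the most frequent move seen so far."""
            counts, hits = {}, 0
            for move in opponent_moves:
                if counts:
                    guess = max(counts, key=counts.get)
                    hits += (guess == move)
                counts[move] = counts.get(move, 0) + 1
            return hits / (len(opponent_moves) - 1)

        random.seed(0)
        stable = [random.choices("abc", weights=[8, 1, 1])[0] for _ in range(500)]
        unpredictable = [random.choice("abc") for _ in range(500)]
        print(zero_order_accuracy(stable))         # high accuracy
        print(zero_order_accuracy(unpredictable))  # near chance (~1/3)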

    Training the use of theory of mind using artificial agents

    When engaging in social interaction, people rely on their ability to reason about the unobservable mental content of others, including goals, intentions, and beliefs. This so-called theory of mind ability allows them to more easily understand, predict, and influence the behavior of others. People even use their theory of mind to reason about the theory of mind of others, which allows them to understand sentences like 'Alice believes that Bob does not know about the surprise party'. But while the use of higher orders of theory of mind is apparent in many social interactions, empirical evidence so far suggests that people do not use this ability spontaneously when playing strategic games, even when doing so would be highly beneficial. In this paper, we attempt to encourage participants to engage in higher-order theory of mind reasoning by letting them play a game against computational agents. Since previous research suggests that competitive games may encourage the use of theory of mind, we investigate a particular competitive game, the Mod game, which can be seen as a much larger variant of the well-known rock-paper-scissors game. By using a combination of computational agents and Bayesian model selection, we simultaneously determine to what extent people make use of higher-order theory of mind reasoning and to what extent computational agents can encourage the use of higher-order theory of mind in their human opponents. Our results show that participants who play the Mod game against computational theory of mind agents adjust their level of theory of mind reasoning to that of their computer opponent. Earlier experiments with other strategic games showed that participants engage only in low orders of theory of mind reasoning. Surprisingly, we find that participants who knowingly play against second- and third-order theory of mind agents apply up to fourth-order theory of mind themselves, and achieve higher scores as a result.
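
    To make the recursion concrete, here is a sketch of level-k reasoning in a Mod-style game, where each player picks a number in 0..M-1 and scores by playing exactly one higher (modulo M) than the opponent. The number of choices and the zero-order model (mirror the opponent's last move) are assumptions for illustration, not the study's agents.

        # Level-k reasoning in a Mod-style game: players pick a number in
        # 0..M-1 and score by playing exactly one higher (mod M) than the
        # opponent. M and the zero-order model are assumptions.
        M = 24

        def level_k_move(k, opponent_last_move):
            """Move of a level-k reasoner."""
            if k == 0:
                # Zero-order model: simply repeat the opponent's last move.
                return opponent_last_move
            # Model the opponent as a level-(k-1) reasoner and beat the
            # move that model predicts.
            expected = level_k_move(k - 1, opponent_last_move)
            return (expected + 1) % M

        # A level-2 agent ends up playing two above the opponent's last move.
        print(level_k_move(2, 5))  # -> 7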

    Reasoning about Cognitive Trust in Stochastic Multiagent Systems

    We consider the setting of stochastic multiagent systems modelled as stochastic multiplayer games and formulate an automated verification framework for quantifying and reasoning about agents' trust. To capture human trust, we work with a cognitive notion of trust, defined as a subjective evaluation that agent A makes of agent B's ability to complete a task, which in turn may lead to a decision by A to rely on B. We propose a probabilistic rational temporal logic PRTL*, which extends the probabilistic computation tree logic PCTL* with reasoning about mental attitudes (beliefs, goals, and intentions) and includes novel operators that can express concepts of social trust such as competence, disposition, and dependence. The logic can express, for example, that "agent A will eventually trust agent B with probability at least p that B will behave in a way that ensures the successful completion of a given task." We study the complexity of the automated verification problem and, while the general problem is undecidable, we identify restrictions on the logic and the system that result in decidable, or even tractable, subproblems.
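
    As a rough illustration of the kind of property the logic targets, the quoted example can be sketched in PCTL*-style notation with a placeholder trust operator; this is illustrative notation, not the paper's exact PRTL* syntax.

        % Illustrative only: a PCTL*-style rendering of the quoted example
        % property. The trust operator shown is a placeholder; PRTL*'s
        % actual operators (e.g. for competence and disposition) differ.
        \mathrm{F}\, \mathit{trust}_{A,B}\!\left( \mathrm{P}_{\ge p}\left[ \mathrm{F}\; \mathit{task\_completed} \right] \right)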