
    No Grice: Computers that Lie, Deceive and Conceal

    In the future, our daily-life interactions with other people, with computers, robots, and smart environments will be recorded and interpreted by computers or by intelligence embedded in environments, furniture, robots, displays, and wearables. These sensors record our activities, our behavior, and our interactions. Fusing such information and reasoning about it makes it possible, using computational models of human behavior and activities, to provide context- and person-aware interpretations of human behavior and activities, including the determination of attitudes, moods, and emotions. Sensors include cameras, microphones, eye trackers, position and proximity sensors, tactile or smell sensors, et cetera. Sensors can be embedded in an environment, but they can also move around, for example when they are part of a mobile social robot, or part of devices we carry around, or embedded in our clothes or bodies.

    Our daily-life behavior and interactions are thus recorded and interpreted. How can we use such environments, and how can such environments use us? Do we always want to cooperate with these environments, and do these environments always want to cooperate with us? In this paper we argue that there are many reasons why users, or rather the human partners of these environments, want to keep information about their intentions and emotions hidden from these smart environments. Their artificial interaction partners, on the other hand, may have similar reasons not to give away all the information they have, or to treat their human partner as an opponent rather than as someone to be supported by smart technology.

    This is elaborated in the paper. We survey examples of human-computer interaction where there is not necessarily a goal to be explicit about intentions and feelings. In subsequent sections we look at (1) the computer as a conversational partner, (2) the computer as a butler or diary companion, (3) the computer as a teacher or trainer acting in a virtual training environment (a serious game), (4) sports applications (which are not necessarily different from serious-game or educational environments), and (5) games and entertainment applications.

    Towards a theory of deception

    This paper proposes an equilibrium approach to deception, where deception is defined as the process by which actions are chosen to induce erroneous inferences so as to take advantage of them. Specifically, we introduce a framework with boundedly rational players in which agents make inferences based on coarse information about others' behaviors: agents are assumed to know only the average reaction function of other agents over groups of situations. Equilibrium requires that the coarse information available to agents is correct, and that inferences and optimizations are made based on the simplest theories compatible with the available information. We illustrate the phenomenon of deception and how reputation concerns may arise even in zero-sum games in which there is no value to commitment. We further illustrate how the possibility of deception affects standard economic insights through a number of stylized applications, including a monitoring game and two simple bargaining games. The approach can be viewed as formalizing, in a game-theoretic setting, a well-documented bias in social psychology: the Fundamental Attribution Error.

    Keywords: deception; game theory; fundamental attribution error
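    To make the coarse-inference mechanism concrete, here is a minimal sketch in Python; it is our own illustration under invented assumptions (the payoff numbers, the 0.5 trust threshold, and the two-situation setup are not from the paper). A row player acts in a low-stakes and a high-stakes situation, while the column player pools both situations into a single analogy class and best-responds only to the row player's average cooperation rate, in the spirit of the paper's average reaction functions.

```python
# Illustrative toy model (not from the paper): the column player knows only
# the row player's AVERAGE cooperation rate across two situations and plays
# the same coarse best response in both of them.
import itertools

STAKES = {"low": 1.0, "high": 10.0}   # hypothetical situation values

def row_payoff(stake, row_action, col_action):
    # If trust is withheld there is nothing to share or steal; defecting
    # against a trusting partner captures the stake, cooperating a fraction.
    if col_action == "W":
        return 0.0
    return stake if row_action == "D" else 0.2 * stake

def col_best_response(avg_coop_rate):
    # Coarse inference: trust iff the pooled cooperation rate clears an
    # (arbitrarily chosen) threshold.
    return "T" if avg_coop_rate >= 0.5 else "W"

best = None
for plan in itertools.product("CD", repeat=2):        # one action per situation
    coop_rate = plan.count("C") / 2
    col_action = col_best_response(coop_rate)         # identical in both situations
    total = sum(row_payoff(STAKES[s], a, col_action)
                for s, a in zip(STAKES, plan))
    print(f"plan {plan} -> column plays {col_action}, row total {total:.1f}")
    if best is None or total > best[0]:
        best = (total, plan)

print("optimal plan:", best[1])   # ('C', 'D'): cooperate low, defect high
```

    Enumerating the four plans shows that cooperating in the low-stakes situation and defecting in the high-stakes one dominates: the pooled cooperation statistic stays high enough to sustain trust, which is then exploited where the stakes are large. This is the reputation-as-deception effect the abstract describes.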

    Coordination in software agent systems


    Beliefs and Conflicts in a Real World Multiagent System

    In a real-world multiagent system, where agents are faced with partial, incomplete, and intrinsically dynamic knowledge, conflicts are inevitable. Frequently, different agents have goals or beliefs that cannot hold simultaneously, and conflict-resolution methodologies have to be adopted to overcome such undesirable occurrences. In this paper we investigate the application of distributed belief revision techniques to support conflict resolution when analyzing the validity of candidate beams to be produced in the CERN particle accelerators. This CERN multiagent system contains a hierarchically superior agent, the Specialist agent, which uses meta-knowledge (about how the conflicting beliefs were produced by the other agents) to decide which beliefs should be abandoned. Upon solving a conflict, the Specialist instructs the agents involved to revise their beliefs accordingly. Conflicts in the problem domain are mapped into conflicting beliefs of the distributed belief revision system, where they can be handled by proven formal methods. This technique builds on well-established concepts and combines them in a new way to solve important problems. We find the approach generally applicable in several domains.
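    The resolution scheme lends itself to a compact sketch. The Python below is our own minimal illustration of the idea, not the CERN system's code: every belief carries meta-knowledge about how it was produced, and the Specialist resolves a conflict by abandoning the belief with the weaker provenance and instructing its holder to retract it. The class names, reliability scores, and beam scenario are hypothetical.

```python
# Minimal sketch (hypothetical, not the CERN system's design): beliefs carry
# provenance meta-knowledge; a Specialist agent resolves conflicts with it.
from dataclasses import dataclass, field

@dataclass
class Belief:
    proposition: str      # e.g. "beam_42_valid" (invented example)
    value: bool
    source: str           # meta-knowledge: which process produced the belief
    reliability: float    # meta-knowledge: assumed credibility in [0, 1]

@dataclass
class Agent:
    name: str
    beliefs: dict = field(default_factory=dict)

    def assert_belief(self, b: Belief):
        self.beliefs[b.proposition] = b

    def retract(self, proposition: str):
        # Revision ordered by the Specialist: drop the abandoned belief.
        print(f"{self.name}: retracting {proposition}")
        del self.beliefs[proposition]

class Specialist:
    """Detects incompatible beliefs across agents and uses their
    provenance meta-knowledge to decide which one to abandon."""

    def resolve(self, agents):
        for prop in {p for a in agents for p in a.beliefs}:
            holders = [a for a in agents if prop in a.beliefs]
            if len({a.beliefs[prop].value for a in holders}) > 1:  # conflict
                loser = min(holders, key=lambda a: a.beliefs[prop].reliability)
                print(f"Specialist: conflict on {prop!r}; abandoning "
                      f"{loser.name}'s belief (source: {loser.beliefs[prop].source})")
                loser.retract(prop)

# Hypothetical scenario: two agents disagree about a candidate beam.
monitor, scheduler = Agent("monitor"), Agent("scheduler")
monitor.assert_belief(Belief("beam_42_valid", True, "direct measurement", 0.9))
scheduler.assert_belief(Belief("beam_42_valid", False, "stale prediction", 0.4))
Specialist().resolve([monitor, scheduler])
```

    Run as-is, the Specialist detects the conflict over beam_42_valid and instructs the scheduler, whose belief has the less reliable provenance, to retract it; in the paper's terms, a conflict in the problem domain has been mapped into conflicting beliefs and handled by the belief revision machinery.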
