2,644 research outputs found

    A Cognitive Model for Conversation

    This paper describes a symbolic model of rational action and decision making to support analysing dialogue. The model approximates principles of behaviour from game theory, and its proof theory makes Gricean principles of cooperativity derivable when the agents’ preferences align.

    A formal account of dishonesty

    This paper provides formal accounts of dishonest attitudes of agents. We introduce a propositional multi-modal logic that can represent an agent's belief and intention as well as communication between agents. Using this language, we formulate different categories of dishonesty. We first give two definitions of lies and establish their logical properties. We then consider the incentives behind the act of lying and introduce lying with objectives. We subsequently define bullshit, withholding information, and half-truths, and analyze their formal properties. We compare the different categories of dishonesty in a systematic manner and examine their connection to deception. We also propose maxims for dishonest communication that agents should ideally try to satisfy.
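    The distinctions the abstract draws can be illustrated with a small sketch. The representation below is a simplification of ours, not the paper's modal logic: an agent's beliefs are a set of formulas, a lie is asserting something whose negation the speaker believes, and bullshit is asserting something the speaker neither believes nor disbelieves.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        beliefs: set = field(default_factory=set)  # formulas the agent believes

    def is_lie(speaker, utterance, negation):
        # Baseline definition: the speaker asserts `utterance`
        # while believing its negation.
        return negation in speaker.beliefs

    def is_bullshit(speaker, utterance, negation):
        # Frankfurt-style indifference: the speaker asserts `utterance`
        # while believing neither it nor its negation.
        return utterance not in speaker.beliefs and negation not in speaker.beliefs

    liar = Agent(beliefs={"~p"})
    indifferent = Agent()
    ```

    Here asserting "p" is a lie for the first agent and bullshit for the second; the paper's definitions additionally track the speaker's intentions, which this sketch omits.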

    Inferring trust

    In this paper we discuss Liau's logic of Belief, Inform and Trust (BIT), which captures the use of trust to infer beliefs from acquired information. However, the logic does not capture the derivation of trust from other notions. We therefore suggest the following two extensions. First, like Liau, we observe that trust in information from an agent depends on the topic of the information. We extend BIT with a formalization of topics, which are used to infer trust in a proposition from trust in another proposition when both propositions have the same topics. Second, for many applications, communication primitives …
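    The topic-based inference described in the first extension can be sketched in a few lines. The propositions, topics, and agent names below are illustrative assumptions, not from the paper: trust in a proposition is inferred from trust in another proposition exactly when the two share the same topics.

    ```python
    # Topics assigned to propositions (hypothetical examples).
    topics = {
        "rain_tomorrow": {"weather"},
        "rain_today": {"weather"},
        "stock_tip": {"finance"},
    }

    # Propositions an agent is already trusted on.
    trusted = {("alice", "rain_today")}

    def infer_trust(agent, prop):
        """Trust `prop` directly, or via a trusted proposition with the same topics."""
        return any(a == agent and (p == prop or topics[p] == topics[prop])
                   for (a, p) in trusted)
    ```

    With these tables, trust in alice on "rain_tomorrow" is inferred from trust on "rain_today" (same topic), while "stock_tip" is not, since its topic differs.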

    Logic and Interactive RAtionality. Yearbook 2009


    Strategic Conversation

    Models of conversation that rely on a strong notion of cooperation don’t apply to strategic conversation — that is, to conversation where the agents’ motives don’t align, such as courtroom cross-examination and political debate. We provide a game-theoretic framework that analyses both cooperative and strategic conversation. Our analysis features a new notion of safety that applies to implicatures: an implicature is safe when it can be reliably treated as a matter of public record. We explore the safety of implicatures within cooperative and non-cooperative settings. We then provide a symbolic model enabling us (i) to prove a correspondence result between a characterisation of conversation in terms of an alignment of players’ preferences and one where Gricean principles of cooperative conversation like Sincerity hold, and (ii) to show when an implicature is safe and when it is not.

    Model Checking Trust-based Multi-Agent Systems

    Trust has been the focus of many research projects, both theoretical and practical, in recent years, particularly in domains where open multi-agent technologies are applied (e.g., Internet-based markets, information retrieval, etc.). The importance of trust in such domains arises mainly because it provides a social control that regulates the relationships and interactions among agents. Despite the growing number of multi-agent applications, they still face many challenges in the formal modeling and verification of agents’ behaviors. Many formalisms and approaches that facilitate the specification of trust in Multi-Agent Systems (MASs) can be found in the literature. However, most of these approaches focus on the cognitive side of trust, where the trusting entity is normally capable of exhibiting properties about beliefs, desires, and intentions. Trust is thus considered a belief of an agent (the truster) involving the ability and willingness of the trustee to perform some action for the truster. Nevertheless, in open MASs, entities can join and leave interactions at any time. This means MASs provide no guarantee about the behavior of their agents, which makes the capability of reasoning about trust and checking for untrusted computations highly desirable. This thesis addresses the problem of modeling and verifying trust in MASs at design time by (1) considering a cognitive-independent view of trust, where the ingredients of trust are seen from a non-epistemic angle; (2) introducing a logical language named Trust Computation Tree Logic (TCTL), which extends CTL with preconditional, conditional, and graded trust operators, along with a set of reasoning postulates to explore its capabilities; (3) proposing a new accessibility relation, needed to define the semantics of the trust modal operators, that captures the intuition of trust while being easily computable; (4) investigating the most intuitive and efficient algorithm for computing the trust set by developing, implementing, and experimenting with different model checking techniques in order to compare them in terms of memory consumption, efficiency, and scalability with regard to the number of agents considered; and (5) evaluating the performance of the model checking techniques by analyzing their time and space complexity. The approach has been applied to different application domains to evaluate its computational performance and scalability. The results show the effectiveness of the proposed approach, making it a promising methodology in practice.
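    A heavily simplified sketch of evaluating a trust modality over an accessibility relation, in the spirit of the semantics the abstract outlines (the states, labels, and relation below are invented for illustration, and the graded and preconditional variants are omitted): trust(i, j, φ) holds at a state iff the (i, j)-accessibility relation links it to at least one state, and φ holds in every linked state.

    ```python
    # Atomic propositions true at each state (illustrative).
    labels = {"s1": {"deliver"}, "s2": {"deliver"}, "s3": set()}

    # Trust accessibility relation, per (truster, trustee) pair.
    access = {("i", "j"): {"s0": {"s1", "s2"}, "s1": {"s3"}}}

    def trust_holds(truster, trustee, prop, state):
        succs = access.get((truster, trustee), {}).get(state, set())
        # Require a non-empty successor set to rule out vacuous trust,
        # then check `prop` in every accessible state.
        return bool(succs) and all(prop in labels.get(s, set()) for s in succs)
    ```

    At s0 agent i trusts j to "deliver" (both accessible states satisfy it); at s1 the trust fails, since the accessible state s3 does not.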

    Conventions and Constitutive Norms

    The paper addresses a popular argument that accounts of assertion in terms of constitutive norms are incompatible with conventionalism about assertion. The argument appeals to an alleged modal asymmetry: constitutive rules are essential to the acts they characterize, and therefore the obligations they impose necessarily apply to every instance; conventions are arbitrary, and thus can only contingently regulate the practices they establish. The paper argues that this line of reasoning fails to establish any modal asymmetry. It invokes the distinction between the non-discriminating existence across possible worlds of types ('blueprints', as Rawls called them) of practices and institutions defined by constitutive rules, and the discriminating existence of those among them that are actually in force, and hence truly normative. The necessity of practices defined by constitutive rules that the argument relies on concerns the former, while conventionalist claims concern only the latter. The paper should thus contribute to a better understanding of what social constructs conceived as defined by constitutive norms are. It concludes by suggesting considerations relevant to deciding whether assertion is in fact conventional.

    A theory of interpersonal trust in the communication of small task-oriented groups

    Thesis (M.A.)--University of Kansas, Speech and Drama, 1968

    Deception detection in dialogues

    In the social media era, it is commonplace to engage in written conversations. People sometimes even form connections across large distances, in writing. However, human communication is in large part non-verbal, which means it is now easier for people to hide their harmful intentions. At the same time, people can now get in touch with more people than ever before. This puts vulnerable groups at higher risk of malevolent interactions, such as bullying, trolling, or predatory behavior. These trends have recently fueled waves of fake news and a growing industry of both deceit creators and deceit detectors. There is now an urgent need both for theory that explains deception and for applications that automatically detect it. In this thesis I address this need with a novel application that learns from examples and detects deception reliably in natural-language dialogues. I formally define the problem of deception detection and identify several domains where it is useful. I introduce and evaluate new psycholinguistic features of deception in written dialogues for two datasets. My results shed light on the connection between language, deception, and perception. They also underline the challenges and difficulty of assessing perceptions from written text. To automatically learn to detect deception, I first introduce an expressive logical model and then present a probabilistic model that simplifies it and is learnable from labeled examples. I introduce a belief-over-belief formalization based on Kripke semantics and the situation calculus. I use an observation model to describe how utterances are produced from the nested beliefs and intentions. This allows me to make inferences about these beliefs and intentions given utterances, without needing to explicitly represent perlocutions. The agents’ belief states are filtered with the observed utterances, resulting in an updated Kripke structure. I then translate my formalization into a practical system that can learn from a small dataset and performs well using very little structural background knowledge, in the form of a relational dynamic Bayesian network structure.
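    The belief-filtering step the abstract describes can be sketched minimally: a distribution over possible worlds stands in for a flattened belief state, and an observation model scores how likely each utterance is in each world. All names and probabilities below are illustrative assumptions, not the thesis's model.

    ```python
    # Prior belief over speaker types (illustrative).
    prior = {"honest": 0.5, "deceptive": 0.5}

    # Observation model: P(utterance | world), illustrative values.
    obs_model = {
        ("overly_positive", "honest"): 0.2,
        ("overly_positive", "deceptive"): 0.6,
    }

    def filter_beliefs(belief, utterance):
        """Bayes-filter the belief state with one observed utterance."""
        post = {w: p * obs_model.get((utterance, w), 0.0) for w, p in belief.items()}
        z = sum(post.values())
        return {w: p / z for w, p in post.items()} if z else belief
    ```

    Observing an overly positive utterance shifts belief mass toward the deceptive world (0.75 vs. 0.25 under these numbers); repeated filtering over a dialogue accumulates such evidence.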
