
    Logic, Probability and Action: A Situation Calculus Perspective

    The unification of logic and probability is a long-standing concern in AI and, more generally, in the philosophy of science. In essence, logic provides an easy way to specify properties that must hold in every possible world, and probability allows us to further quantify the weight and ratio of the worlds that satisfy a property. To that end, numerous developments have been undertaken, culminating in proposals such as probabilistic relational models. While this progress has been notable, a general-purpose first-order knowledge representation language for reasoning about probabilities and dynamics, including in continuous settings, has yet to emerge. In this paper, we survey recent results on the integration of logic, probability and actions in the situation calculus, arguably one of the oldest and most well-known such formalisms. We then explore reduction theorems and programming interfaces for the language. For concreteness, these results are motivated in the context of cognitive robotics (as envisioned by Reiter and his colleagues). Overall, the advantage of proving results for such a general language is that they can then be adapted to any special-purpose fragment, including but not limited to popular probabilistic relational models.
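    As a hedged aside on the "weight of worlds" reading in the abstract above (a standard weighted-model-counting formulation, not one quoted from the paper itself, with an assumed weight function \mu over possible worlds): the probability of a sentence \varphi can be read as the normalized weight of the worlds that satisfy it.

        % Probability of a sentence as the normalized weight of its models;
        % \mu is an assumed non-negative weight over possible worlds w.
        \[
          P(\varphi) \;=\; \frac{\sum_{w \,\models\, \varphi} \mu(w)}{\sum_{w} \mu(w)}
        \]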

    Deception detection in dialogues

    In the social media era, it is commonplace to engage in written conversations, and people sometimes form connections across large distances entirely in writing. Human communication, however, is in large part non-verbal, which makes it easier for people to hide harmful intentions in text. At the same time, people can now reach more people than ever before, which puts vulnerable groups at higher risk of malevolent interactions such as bullying, trolling, or predatory behavior. These trends have recently led to waves of fake news and a growing industry of deceit creators and deceit detectors. There is now an urgent need both for theory that explains deception and for applications that automatically detect it. In this thesis I address this need with a novel application that learns from examples and detects deception reliably in natural-language dialogues. I formally define the problem of deception detection and identify several domains where it is useful. I introduce and evaluate new psycholinguistic features of deception in written dialogues on two datasets. My results shed light on the connection between language, deception, and perception, and underline the difficulty of assessing perceptions from written text. To learn to detect deception automatically, I first introduce an expressive logical model and then present a probabilistic model that simplifies it and is learnable from labeled examples. I introduce a belief-over-belief formalization based on Kripke semantics and the situation calculus, and use an observation model to describe how utterances are produced from the nested beliefs and intentions. This allows me to make inferences about these beliefs and intentions from utterances without explicitly representing perlocutions. The agents’ belief states are filtered with the observed utterances, resulting in an updated Kripke structure. I then translate this formalization into a practical system that learns from a small dataset and performs well using very little structural background knowledge, in the form of a relational dynamic Bayesian network structure.
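    As a hedged illustration of the filtering step described above: the sketch below collapses the belief-over-belief structure to a single level and uses a naive-Bayes observation model over binary utterance features. The intention labels, feature names, and probabilities are illustrative assumptions, not values or an API from the thesis.

        # Minimal sketch: Bayes filtering of a listener's belief over a speaker's
        # hidden intention, given features observed in one utterance.
        # All labels and probabilities below are illustrative assumptions.
        from collections import defaultdict

        # Listener's prior belief over the speaker's (hypothetical) intentions.
        prior = {"truthful": 0.7, "deceptive": 0.3}

        # Hypothetical observation model: P(feature present in utterance | intention),
        # with features standing in for psycholinguistic cues.
        obs_model = {
            "truthful":  {"hedging": 0.2, "self_reference": 0.6},
            "deceptive": {"hedging": 0.5, "self_reference": 0.3},
        }

        def filter_belief(belief, observed_features):
            """One filtering step: posterior over intentions after an utterance."""
            posterior = defaultdict(float)
            for intention, p in belief.items():
                likelihood = 1.0
                for feature, present in observed_features.items():
                    p_f = obs_model[intention][feature]
                    likelihood *= p_f if present else (1.0 - p_f)
                posterior[intention] = p * likelihood
            z = sum(posterior.values())  # normalization constant
            return {i: p / z for i, p in posterior.items()}

        # An utterance with heavy hedging and no self-reference shifts the
        # belief toward the "deceptive" intention.
        print(filter_belief(prior, {"hedging": True, "self_reference": False}))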

    Foundations of Trusted Autonomy

    Trusted Autonomy; Automation Technology; Autonomous Systems; Self-Governance; Trusted Autonomous Systems; Design of Algorithms and Methodologies