3 research outputs found

    Studying Dishonest Intentions in Brazilian Portuguese Texts

    Previous work in the social sciences, psychology and linguistics has shown that liars have some control over the content of their stories; however, their underlying state of mind may "leak out" through the way they tell them. To the best of our knowledge, no previous systematic effort exists to describe and model deceptive language for Brazilian Portuguese. To fill this important gap, we carry out an initial empirical linguistic study of false statements in Brazilian news. We methodically analyze linguistic features using a deceptive news corpus, which includes both fake and true news. The results show that the two classes present substantial lexical, syntactic and semantic variations, as well as punctuation and emotion distinctions.
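    As a hedged illustration of the feature families the abstract mentions (lexical, punctuation and emotion cues), the sketch below computes a few such statistics over toy Portuguese snippets. It is not the study's actual pipeline; the emotion lexicon and example texts are invented for this sketch.

    ```python
    # Illustrative feature extraction over short texts (toy example,
    # not the paper's method). Lexical diversity, sentence length,
    # punctuation and emotion-word counts are crude proxies for the
    # feature families named in the abstract.
    import re
    from collections import Counter

    EMOTION_WORDS = {"medo", "raiva", "ódio", "alegria"}  # hypothetical lexicon

    def features(text):
        tokens = re.findall(r"\w+", text.lower())
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        counts = Counter(tokens)
        return {
            "type_token_ratio": len(counts) / max(len(tokens), 1),  # lexical diversity
            "avg_sentence_len": len(tokens) / max(len(sentences), 1),
            "exclamations": text.count("!"),                         # punctuation cue
            "emotion_terms": sum(counts[w] for w in EMOTION_WORDS),  # emotion cue
        }

    fake = "Que raiva! Isso é um absurdo total! Medo e ódio por toda parte!"
    true = "O governo anunciou hoje uma nova medida econômica."
    print(features(fake)["exclamations"], features(true)["exclamations"])  # 3 0
    ```

    A real study would compute many more features (syntactic parses, semantic classes) and compare their distributions across the fake and true subsets of the corpus.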

    Adversarially Guided Self-Play for Adopting Social Conventions

    Robotic agents must adopt existing social conventions in order to be effective teammates. These social conventions, such as driving on the right or left side of the road, are arbitrary choices among optimal policies, but all agents on a successful team must use the same convention. Prior work has identified a method of combining self-play with paired input-output data gathered from existing agents in order to learn their social convention without interacting with them. We build upon this work by introducing a technique called Adversarial Self-Play (ASP) that uses adversarial training to shape the space of possible learned policies and substantially improves learning efficiency. ASP only requires the addition of unpaired data: a dataset of outputs produced by the social convention without associated inputs. Theoretical analysis reveals how ASP shapes the policy space and the circumstances (when behaviors are clustered or exhibit some other structure) under which it offers the greatest benefits. Empirical results across three domains confirm ASP's advantages: it produces models that more closely match the desired social convention when given as few as two paired datapoints.
    Comment: 9 pages, 8 figures
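    The core idea, that unpaired output data can adversarially prune the policy space so that a couple of paired examples pin down the convention, can be shown in a toy discrete setting. This is a minimal sketch under invented assumptions (a four-input domain and exhaustive candidate enumeration), not the paper's ASP training procedure.

    ```python
    # Toy illustration: unpaired outputs rule out most candidate policies,
    # after which two paired datapoints nearly determine the convention.
    # Domain, symbols and dataset are invented for this sketch.
    from itertools import permutations

    INPUTS = ["a", "b", "c", "d"]
    # Unpaired data: outputs the existing agents produce, inputs unknown.
    OBSERVED_OUTPUTS = {"W", "X", "Y", "Z"}

    # Candidate policies: every injective map from inputs to six symbols.
    SYMBOLS = ["W", "X", "Y", "Z", "P", "Q"]
    candidates = [dict(zip(INPUTS, perm)) for perm in permutations(SYMBOLS, 4)]

    # "Adversarial" filter: keep only policies whose output set is
    # indistinguishable from the observed unpaired outputs.
    consistent = [p for p in candidates if set(p.values()) == OBSERVED_OUTPUTS]

    # Two paired datapoints then disambiguate almost everything.
    paired = [("a", "W"), ("b", "X")]
    matching = [p for p in consistent if all(p[i] == o for i, o in paired)]

    print(len(candidates), len(consistent), len(matching))  # 360 24 2
    ```

    In the continuous, learned setting the filter is replaced by a discriminator trained against the unpaired output dataset, but the shaping effect on the policy space is analogous.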

    Weakest Preexpectation Semantics for Bayesian Inference

    We present a semantics of a probabilistic while-language with soft conditioning and continuous distributions which handles programs that diverge with positive probability. To this end, we extend the probabilistic guarded command language (pGCL) with draws from continuous distributions and a score operator. The main contribution is an extension of the standard weakest preexpectation semantics to support these constructs. As a sanity check of our semantics, we define an alternative trace-based semantics of the language and show that the two semantics are equivalent. Various examples illustrate the applicability of the semantics.
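    To give a flavor of what such an extension involves (notation simplified and hedged; the paper's actual definitions may differ), weakest-preexpectation rules for the two new constructs could take roughly this shape, where f is a postexpectation, σ a program state, and μ a continuous distribution:

    ```latex
    \[
    \begin{aligned}
    \mathrm{wp}[\![\mathtt{score}(e)]\!](f) &= e \cdot f \\
    \mathrm{wp}[\![x :\approx \mu]\!](f) &= \lambda\sigma.\ \int f(\sigma[x \mapsto v])\ \mu(\mathrm{d}v)
    \end{aligned}
    \]
    ```

    The score rule reweights the expectation by the soft-conditioning factor e, while the sampling rule averages the postexpectation over all values the draw from μ can take.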