The Surprise Deception Paradox
This article tackles an epistemic puzzle formulated by R. Smullyan that we call the 'Surprise Deception Paradox'. On the morning of April 1st, 1925, his brother announced that he would deceive him during the day, but apparently nothing happened. Since R. Smullyan waited all day to be deceived by some action, he was in fact deceived, but by the lack of an action, that is to say by omission. Smullyan was immediately puzzled: because he expected to be deceived, he was not deceived; but since he was not deceived the way he expected, he was actually deceived. We use dynamic belief revision logic to look more clearly into this puzzle. We argue that Smullyan's reasoning is not a self-referential paradox but shares common features with the more famous Surprise Examination Paradox. In Smullyan's riddle, we show that a misleading default mechanism makes R. Smullyan surprised by the deception to which he has fallen prey. We also use this solution to discuss whether such defaults, compared to other forms of truth-telling deception, may qualify as lies or not.
Measuring vagueness and subjectivity in texts: from symbolic to neural VAGO
We present a hybrid approach to the automated measurement of vagueness and subjectivity in texts. We first introduce the expert system VAGO, illustrate it on a small benchmark of fact vs. opinion sentences, and then test it on the larger French press corpus FreSaDa to confirm the higher prevalence of subjective markers in satirical vs. regular texts. We then build a neural clone of VAGO, based on a BERT-like architecture, trained on the symbolic VAGO scores obtained on FreSaDa. Using explainability tools (LIME), we show the interest of this neural version for enriching the lexicons of the symbolic version and for producing versions in other languages.
Paper to appear in the Proceedings of the 2023 IEEE International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT).
Exposing propaganda: an analysis of stylistic cues comparing human annotations and machine classification
This paper investigates the language of propaganda and its stylistic features. It presents the PPN dataset, standing for Propagandist Pseudo-News, a multisource, multilingual, multimodal dataset composed of news articles extracted from websites identified as propaganda sources by expert agencies. A limited sample from this set was randomly mixed with articles from the regular French press, with their URLs masked, to conduct a human annotation experiment using 11 distinct labels. The results show that human annotators were able to reliably discriminate between the two types of press across each of the labels. We propose different NLP techniques to identify the cues used by the annotators and to compare them with machine classification. They include the analyzer VAGO to measure discourse vagueness and subjectivity, a TF-IDF model to serve as a baseline, and four different classifiers: two RoBERTa-based models, CATS using syntax, and one XGBoost combining syntactic and semantic features.
Paper to appear in the EACL 2024 Proceedings of the Third Workshop on Understanding Implicit and Underspecified Language (UnImplicit 2024).
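As a minimal, hedged sketch of what a TF-IDF baseline of this kind involves (this is not the paper's implementation; the toy snippets and labels below are invented placeholders, not drawn from the PPN dataset), documents can be vectorized by TF-IDF and classified by cosine similarity to per-class centroids:

```python
import math
from collections import Counter

# Invented toy stand-ins for propagandist vs. regular press snippets.
train = [
    ("they always lie to you wake up", "propaganda"),
    ("the shocking truth they hide wake up", "propaganda"),
    ("the minister presented the annual budget report", "regular"),
    ("the committee published its annual report today", "regular"),
]

def tf_idf(docs):
    """Return one {term: tf-idf weight} dict per tokenized document."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    return [
        {t: (c / len(d)) * math.log((1 + n) / (1 + df[t]))
         for t, c in Counter(d).items()}
        for d in docs
    ]

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = [text.split() for text, _ in train]
vecs = tf_idf(docs)

# Class centroids: summed tf-idf weight per term within each label.
centroids = {}
for (_, label), vec in zip(train, vecs):
    c = centroids.setdefault(label, Counter())
    for t, w in vec.items():
        c[t] += w

def classify(text):
    # Re-weight the query together with the corpus so it gets IDF values
    # (a simplification; real pipelines fit IDF on the training set only).
    vec = tf_idf(docs + [text.split()])[-1]
    return max(centroids, key=lambda lab: cosine(vec, centroids[lab]))

print(classify("wake up they lie"))                # propaganda
print(classify("annual budget report published"))  # regular
```

Real baselines of this sort would add tokenization, stop-word handling, and a trained classifier on top of the TF-IDF features; the sketch only shows the weighting scheme itself.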
alpha-Catenin cytomechanics: role in cadherin-dependent adhesion and mechanotransduction
The findings presented here demonstrate the role of alpha-catenin in cadherin-based adhesion and mechanotransduction in different mechanical contexts. Bead-twisting measurements in conjunction with imaging, and the use of different cell lines and alpha-catenin mutants, reveal that the acute local mechanical manipulation of cadherin bonds triggers vinculin and actin recruitment to cadherin adhesions in an actin- and alpha-catenin-dependent manner. The modest effect of alpha-catenin on the two-dimensional binding affinities of cell surface cadherins further suggests that force-activated adhesion strengthening is due to enhanced cadherin-cytoskeletal interactions rather than to alpha-catenin-dependent affinity modulation. Complementary investigations of cadherin-based rigidity sensing also suggest that, although alpha-catenin alters traction force generation, it is not the sole regulator of cell contractility on compliant cadherin-coated substrata.
Facts versus Interpretations in Intelligence: A Descriptive Taxonomy for Information Evaluation
Traditionally, intelligence officers use an alphanumeric scale known as the Admiralty System to evaluate informational messages by rating the credibility of their contents and the reliability of their sources (e.g. NATO AJP-2.1, 2016). Amongst other duties, they are expected to clearly distinguish objective facts from subjective interpretations during this evaluation (NATO STANAG-2511, 2003). That being said, various experimental results show that officers are unable to properly fulfill this methodological duty (e.g. Baker et al., 1968; Kelly & Peterson, 1971; Johnson, 1973). Our explanation is that the extant scale, which is evaluative by nature, does not allow them to adopt a more objective, that is to say descriptive, perspective on information. In this article, we aim to help enforce the facts versus interpretations recommendation in the intelligence domain. By extracting the descriptive dimensions that underlie the scale, and by grouping them by linguistic directionality (e.g. Teigen & Brun, 1995; Mandel et al., 2022), we introduce a taxonomy to categorize intelligence messages more objectively. This taxonomy is fine-grained: it integrates messages which are informative or deceptive in the classical sense (e.g. misinformation, lying), but also more borderline messages, such as omissions and half-truths, which rely on the use of linguistic vagueness (following Égré & Icard, 2018; Icard et al., 2022). By putting a descriptive lens on information evaluation, we seek to provide new categories to help officers make more acute evaluations of information.
Mensonge, tromperie et omission stratégique : définition et évaluation (Lying, Deception and Strategic Omission: Definition and Evaluation)
This thesis aims to improve the definition and evaluation of deceptive strategies that can manipulate information. Using conceptual, formal and experimental resources, I analyze three deceptive strategies, some of which are standard cases of deception, in particular lies, and others non-standard cases of deception, in particular misleading inferences and strategic omissions. Firstly, I consider definitional aspects. I deal with the definition of lying, and present new empirical data supporting the traditional account of the notion (called the 'subjective definition'), contradicting recent claims in favour of a falsity clause (leading to an 'objective definition'). Next, I analyze non-standard cases of deception through the categories of misleading defaults and omissions of information. I use qualitative belief revision to examine a puzzle due to R. Smullyan about the possibility of triggering a default inference to deceive an addressee by omission. Secondly, I consider evaluative aspects. I take the perspective of military intelligence data processing to offer a typology of informational messages based on the descriptive dimensions of truth (for message contents) and honesty (for message sources). I also propose a numerical procedure to evaluate these messages based on the evaluative dimensions of credibility (for truth) and reliability (for honesty). Quantitative plausibility models are used to capture degrees of prior credibility of messages, and dynamic rules are defined to update these degrees depending on the reliability of the source.
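To make the idea of reliability-dependent credibility updating concrete, here is a hedged sketch using simple linear pooling. This rule is an illustrative stand-in of my own, not the thesis's actual dynamic rule, and the numeric scales are assumed, not taken from the text:

```python
# Illustrative rule (not the thesis's): move prior credibility toward the
# reported value, weighted by source reliability in [0, 1].

def update_credibility(prior, reliability, report_supports=True):
    """Return posterior credibility after a report from a source
    of the given reliability; a confirming report pulls toward 1.0,
    a disconfirming one toward 0.0."""
    target = 1.0 if report_supports else 0.0
    return reliability * target + (1.0 - reliability) * prior

# A fully unreliable source leaves the prior untouched...
print(update_credibility(0.5, 0.0))   # 0.5
# ...while a fully reliable confirming source overrides it.
print(update_credibility(0.5, 1.0))   # 1.0
# A half-reliable confirming report moves credibility halfway to 1.
print(update_credibility(0.4, 0.5))   # 0.7
```

The point of the sketch is only the qualitative behavior at the extremes: the reliability of the source scales how much a message can shift prior credibility, which mirrors the two-dimensional (credibility/reliability) evaluation described above.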
Lying and Vagueness
Vagueness is a double-edged sword in relation to lying and truthfulness. In situations in which a cooperative speaker is uncertain about the world, vagueness offers a resource for truthfulness: it avoids committing oneself to more precise utterances that would be either false or unjustifiably true, and it is arguably an optimal solution to satisfy the Gricean maxims of Quality and Quantity. In situations in which a non-cooperative speaker is well-informed about the world, on the other hand, vagueness can be a deception mechanism. We distinguish two cases of that sort: cases in which the speaker is deliberately imprecise in order to hide information from the hearer; and cases in which the speaker exploits the semantic indeterminacy of vague predicates to produce utterances that are true in one sense, but false in another. Should such utterances, which we call half-truths, be considered lies? The answer, we argue, depends on the context: the lack of unequivocal truth is not always sufficient to declare falsity.
VAGO: un outil en ligne de mesure du vague et de la subjectivité (an online tool for measuring vagueness and subjectivity)
VAGO is an online tool for measuring vagueness and subjectivity in textual documents, relying on an annotated lexical database and expert rules. VAGO was developed through a cooperation between the INSTITUT JEAN-NICOD (UMR 8129 of CNRS) and the MONDECA company. It is based on a four-fold typology of vague expressions, distinguishing generality, approximation, one-dimensional vagueness, and multidimensional vagueness; the typology is used to label expressions as markers of subjectivity or objectivity. In this demonstration, (i) we introduce the user to the motivations behind the VAGO typology, (ii) we make explicit the technological chain used for the implementation of VAGO, and (iii) we show how VAGO can help in the detection of false or unreliable information. Online demo: https://youtu.be/L6cc05SlA5E
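As a hedged, minimal sketch of how a lexicon-and-rules vagueness scorer of this kind might work (the lexicon entries and the subjectivity rule below are invented placeholders for illustration, not VAGO's actual database or rules):

```python
from collections import Counter

# Invented placeholder lexicon keyed by the four vagueness types named
# in the abstract; the real VAGO database is not reproduced here.
VAGUE_LEXICON = {
    "thing": "generality", "stuff": "generality",
    "about": "approximation", "roughly": "approximation",
    "tall": "one-dimensional", "old": "one-dimensional",
    "beautiful": "multidimensional", "good": "multidimensional",
}
# Simplifying assumption for this sketch: multidimensional vagueness
# counts as a subjectivity marker, the other types as objectivity markers.
SUBJECTIVE_TYPES = {"multidimensional"}

def vagueness_profile(text):
    """Count vague markers per type and return a subjectivity ratio
    (subjective markers / all vague markers)."""
    tokens = text.lower().split()
    counts = Counter(VAGUE_LEXICON[t] for t in tokens if t in VAGUE_LEXICON)
    vague_total = sum(counts.values())
    subj = sum(c for t, c in counts.items() if t in SUBJECTIVE_TYPES)
    ratio = subj / vague_total if vague_total else 0.0
    return counts, ratio

counts, ratio = vagueness_profile("a roughly good thing about an old man")
print(dict(counts))  # marker counts per vagueness type
print(ratio)         # share of subjective markers
```

A real system would of course lemmatize, disambiguate word senses, and apply expert rules beyond bare lexicon lookup; the sketch only shows the count-and-aggregate shape of a lexicon-based scorer.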