58 research outputs found

    Tracking Uncertainty Propagation from Model to Formalization: Illustration on Trust Assessment

    This paper investigates the use of the URREF ontology to characterize and track uncertainties arising within the modeling and formalization phases. Estimation of trust in reported information, a real-world problem of interest to practitioners in the field of security, was adopted for illustration purposes. A functional model of trust was developed to describe the analysis of reported information, and it was implemented with belief functions. When assessing trust in reported information, uncertainty arises not only from the quality of sources or information content, but also from the inability of models to capture the complex chain of interactions leading to the final outcome, and from constraints imposed by the representation formalism. A primary goal of this work is to separate known approximations, imperfections and inaccuracies from potential errors, while explicitly tracking the uncertainty from the modeling to the formalization phase. A secondary goal is to illustrate how the criteria of the URREF ontology can offer a basis for analyzing the performance of fusion systems at early stages, ahead of implementation. Ideally, since the uncertainty analysis runs dynamically, it can use the existence or absence of observed states and processes inducing uncertainty to adjust the tradeoff between precision and performance of systems on the fly.
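    As a minimal sketch of the belief-function machinery the paper builds on, the following combines two mass assignments about a source's trustworthiness with Dempster's rule. The frame {"T", "N"} (trustworthy / not trustworthy) and the mass values are illustrative assumptions, not taken from the paper.

```python
# Combine two belief-function mass assignments with Dempster's rule.
# Mass functions map frozensets of hypotheses to masses summing to 1.

def dempster_combine(m1, m2):
    """Return the combined mass function and the conflict mass."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:  # compatible evidence reinforces the intersection
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:      # contradictory evidence accumulates as conflict
                conflict += ma * mb
    norm = 1.0 - conflict
    return {s: v / norm for s, v in combined.items()}, conflict

T, N = frozenset("T"), frozenset("N")
theta = T | N  # the full frame: mass here expresses ignorance
source_quality = {T: 0.6, N: 0.1, theta: 0.3}
content_check  = {T: 0.5, N: 0.2, theta: 0.3}
fused, conflict = dempster_combine(source_quality, content_check)
```

The conflict mass returned alongside the fused assignment is one concrete quantity such an analysis can track from model to formalization.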

    Liar liar, pants on fire; or how to use subjective logic and argumentation to evaluate information from untrustworthy sources

    This paper presents a non-prioritized belief change operator, designed specifically for incorporating new information from many heterogeneous sources in an uncertain environment. We take into account that sources may be untrustworthy and provide a principled method for dealing with the reception of contradictory information.
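    To make the subjective-logic ingredient concrete, here is an illustrative sketch (not the paper's operator) of trust discounting: an opinion (belief b, disbelief d, uncertainty u, with b + d + u = 1) reported by a source is weakened according to how much we trust that source, pushing mass toward uncertainty. The trust score and opinion values are assumptions.

```python
# Discount a subjective-logic-style opinion by a scalar trust score.

def discount(opinion, trust):
    """Weaken an opinion (b, d, u) by a trust score in [0, 1]."""
    b, d, u = opinion
    # Committed mass (belief and disbelief) is scaled down by trust;
    # whatever is lost moves into uncertainty, keeping the sum at 1.
    return (trust * b, trust * d, 1.0 - trust * (b + d))

reported = (0.8, 0.1, 0.1)       # source is confident the claim holds
weak_trust = discount(reported, 0.5)
```

An untrustworthy source thus cannot force a confident conclusion: its confident report arrives mostly as uncertainty.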

    An informant-based approach to argument strength in Defeasible Logic Programming

    This work formalizes an informant-based structured argumentation approach in a multi-agent setting, where the knowledge base of an agent may include information provided by other agents, and each piece of knowledge comes attached with its informant. In that way, arguments are associated with the set of informants corresponding to the information they are built upon. Our approach proposes an informant-based notion of argument strength, where the strength of an argument is determined by the credibility of its informant agents. Moreover, we consider that the strength of an argument is not absolute, but relative to the resolution of the conflicts the argument is involved in. In other words, the strength of an argument may vary from one context to another, as it will be determined by comparison to its attacking arguments (respectively, the arguments it attacks). Finally, we equip agents with the means to express reasons for or against the consideration of any piece of information provided by a given informant agent. Consequently, we allow agents to argue about the arguments’ strength through the construction of arguments that challenge (respectively, defeat) or are in favour of their informant agents.
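    A toy sketch of such an informant-based strength comparison, assuming (as one plausible reading, not the paper's exact definition) that an argument's strength is the least credibility among its informants and that an attack succeeds when the attacker is at least as strong as its target. The agent names and credibility scores are invented for illustration.

```python
# Context-relative argument strength from informant credibility.

credibility = {"alice": 0.9, "bob": 0.4, "carol": 0.7}

def strength(informants):
    # An argument is only as strong as its least credible informant.
    return min(credibility[i] for i in informants)

attacker = strength({"bob", "carol"})   # weakest link: bob
target   = strength({"alice"})
attack_succeeds = attacker >= target    # attack fails against a stronger target
```

Note that the same attacker could succeed against a different target, which captures the paper's point that strength is relative to the conflict being resolved.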

    Automatic fake news detection on Twitter

    Nowadays, information is easily accessible online, from articles by reliable news agencies to reports from independent reporters, to extreme views published by unknown individuals. Moreover, social media platforms are becoming increasingly important in everyday life, where users can obtain the latest news and updates, share links to any information they want to spread, and post their own opinions. Such information may create difficulties for information consumers as they try to distinguish fake news from genuine news. Indeed, users may not be aware that the information they encounter is false, and may not have the time or the means to fact-check all the claims and information they encounter online. With the amount of information created and shared daily, it is also not feasible for journalists to manually fact-check every published news article, sentence or tweet. Therefore, an automatic fact-checking system that identifies check-worthy claims and tweets, and then fact-checks these identified check-worthy claims and tweets, can help inform the public of fake news circulating online. Existing fake news detection systems mostly rely on the computational power of machine learning models to automatically identify fake news. Some researchers have focused on extracting the semantic and contextual meaning from news articles, statements, and tweets. These methods aim to identify fake news by analysing the differences in writing style between fake news and factual news. On the other hand, some researchers have investigated using social network information to detect fake news accurately. These methods aim to distinguish fake news from factual news based on the news' spreading patterns and on statistical information about the users engaging with the propagated news.
In this thesis, we propose a novel end-to-end fake news detection framework that leverages both the textual features and social network features, which can be extracted from news, tweets, and their engaging users. Specifically, our proposed end-to-end framework is able to process a Twitter feed, identify check-worthy tweets and sentences using textual features and embedded entity features, and fact-check the claims using previously unexplored information, such as existing fake news collections and user network embeddings. Our ultimate aim is to rank tweets and claims based on their check-worthiness to focus the available computational power on fact-checking the tweets and claims that are important and potentially fake. In particular, we leverage existing fake news collections to identify recurring fake news, while we explore the Twitter users’ engagement with the check-worthy news to identify fake news that are spreading on Twitter. To identify fake news effectively, we first propose the fake news detection framework (FNDF), which consists of the check-worthiness identification phase and the fact-checking phase. These two phases are divided into three tasks: Phase 1 Task 1: check-worthiness identification task; Phase 2 Task 2: recurring fake news identification task; and Phase 2 Task 3: social network structure-assisted fake news detection task. We conduct experiments on two large publicly available datasets, namely the MM-COVID and the stance detection (SD) datasets. The experimental results show that our proposed framework, FNDF, can indeed identify fake news more effectively than the existing SOTA models, with 23.2% and 4.0% significant increases in F1 scores on the two tested datasets, respectively. To identify the check-worthy tweets and claims effectively, we incorporate embedded entities with language representations to form a vector representation of a given text, to identify if the text is check-worthy or not. 
We conduct experiments using three publicly available datasets, namely the CLEF 2019 and 2020 CheckThat! Lab check-worthy sentence detection datasets, and the CLEF 2021 CheckThat! Lab check-worthy tweet detection dataset. The experimental results show that combining entity representations with language model representations enhances the language model's performance in identifying check-worthy tweets and sentences. Specifically, combining embedded entities with the language model results in as much as a 177.6% increase in MAP on ranking check-worthy tweets, and a 92.9% increase in ranking check-worthy sentences. Moreover, we conduct an ablation study on the proposed end-to-end framework, FNDF, and show that including a model for identifying check-worthy tweets and claims in our end-to-end framework can significantly increase the F1 score by as much as 14.7%, compared to not including this model in our framework. To identify recurring fake news effectively, we propose an ensemble model of the BM25 scores and the BERT language model. Experiments were conducted on two datasets, namely the WSDM Cup 2019 Fake News Challenge dataset and the MM-COVID dataset. Experimental results show that enriching the BERT language model with the BM25 scores helps the BERT model identify fake news significantly more accurately, by 4.4%. Moreover, the ablation study on the end-to-end fake news detection framework, FNDF, shows that including the recurring fake news identification model in our proposed framework results in a significant increase in F1 score of as much as 15.5%, compared to not including this task in our framework. To leverage the user network structure in detecting fake news, we first obtain user embeddings from unsupervised user network embeddings based on their friendship or follower connections on Twitter. Next, we use the user embeddings of the users who engaged with the news to represent a check-worthy tweet/claim, thus predicting whether it is fake news.
Our results show that using user network embeddings to represent check-worthy tweets/sentences significantly outperforms the SOTA model, which uses language models to represent the tweets/sentences and complex networks requiring handcrafted features, by 12.0% in terms of the F1 score. Furthermore, including the user-network-assisted fake news detection model in our end-to-end framework, FNDF, significantly increases the F1 score by as much as 29.3%. Overall, this thesis shows that an end-to-end fake news detection framework, FNDF, that identifies check-worthy tweets and claims, then fact-checks them by identifying recurring fake news and leveraging the social network users' connections, can effectively identify fake news online.
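    A compact sketch of the retrieval side of recurring-fake-news identification described above: scoring a new claim against a collection of previously debunked claims with BM25. The k1/b values and the toy collection are assumptions; the thesis ensembles such scores with a BERT model rather than using them alone.

```python
# BM25 ranking of a query claim against previously debunked claims.
import math

def bm25_scores(query, docs, k1=1.5, b=0.75):
    docs_tok = [d.lower().split() for d in docs]
    avgdl = sum(len(d) for d in docs_tok) / len(docs_tok)
    n = len(docs_tok)
    scores = []
    for d in docs_tok:
        score = 0.0
        for term in set(query.lower().split()):
            df = sum(term in dt for dt in docs_tok)   # document frequency
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1.0)
            tf = d.count(term)
            # Saturating term-frequency weight with length normalization.
            score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(score)
    return scores

debunked = ["vaccines cause illness claim debunked",
            "flat earth claim debunked",
            "miracle cure claim debunked"]
scores = bm25_scores("new miracle cure video", debunked)
best = max(range(len(scores)), key=scores.__getitem__)  # closest prior claim
```

A high BM25 match against the debunked collection flags the claim as likely recurring, and the matched pair can then be passed to a language model for verification.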

    False textual information detection, a deep learning approach

    Many approaches exist for analysing fact checking for fake news identification, which is the focus of this thesis. Current approaches still perform badly at scale due to a lack of authoritative sources, insufficient evidence, or, in certain cases, reliance on a single piece of evidence. To address the lack of evidence and the inability of models to generalise across domains, we propose a style-aware model for detecting false information that improves on existing performance. We discovered that our model was effective at detecting false information when we evaluated its generalisation ability using news articles and Twitter corpora. We then propose to improve fact checking performance by incorporating warrants. We developed a highly efficient prediction model based on the results and demonstrated that incorporating warrants is beneficial for fact checking. Due to a lack of external warrant data, we develop a novel model for generating warrants that aid in determining the credibility of a claim. The results indicate that when a pre-trained language model is combined with a multi-agent model, high-quality, diverse warrants are generated that contribute to task performance improvement. To resolve biased opinions and support rational judgments, we propose a model that can generate multiple perspectives on a claim. Experiments confirm that our Perspectives Generation model allows for the generation of diverse perspectives with a higher degree of quality and diversity than any other baseline model. Additionally, we propose to improve the model's detection capability by generating an explainable alternative factual claim, assisting the reader in identifying subtle issues that result in factual errors. The examination demonstrates that this does indeed increase the veracity of the claim. Finally, since current research has treated stance detection and fact checking separately, we propose a unified model that integrates both tasks. Classification results demonstrate that our proposed model outperforms state-of-the-art methods.

    Towards a New Paradigm on Post-Truth: Discourse and Affect

    In this study, I re-frame the concept of post-truth as political discourse, dissociating it from the mainstream conceptualisation centred on misinformation and the supremacy of emotionality in influencing public opinion. This study performs four tasks. First, I steer the discussion on post-truth away from the ‘misinformation’ and ‘objective facts’ dichotomy, disassociating it from the overemphasis on misinformation and from the fixation that the ‘post’ denotes something after truth that is inherently negative. Second, I delineate a theoretical framework contextualised within the ambit of political theory, and of ideology and discourse analysis, to conceptualise post-truth discourse. Third, I develop an operational definition of post-truth discourse to be tested empirically. Fourth, I apply my theory to Pakistan as a case study, whereby I implement two empirical analyses: the first identifies post-truth discourse in newspaper reporting, and the second is an experimental design investigating the effect of post-truth discourses on political behaviour. I argue that post-truth discourse has significant effects on the polarisation of political ideologies, the manipulation of public policy, and the erosion of trust in democratic institutions. Among the main implications of my research, I describe how these dynamics have the potential to start democratic backsliding processes or undermine democratic institutions. Furthermore, I highlight the far-reaching implications of conceptualising post-truth as a political discourse for developing countries, where political polarisation can have striking impacts in the field, such as on elections, regime stability, and the regime–society relationship. This study has attempted to re-conceptualise post-truth in a manner where the novelty of post-truth is questioned, the element of truthfulness is examined, and the conceptualisation of post-truth discourse is empirically tested.
    The shift in thinking about post-truth as a political discourse advances our understanding of post-truth and expands the scope of empirical work in the field. It provides researchers with new tools with which to dissect the populist discourses of our times.

    Dissecting Discrimination

    This open-access book examines the phenomenon of discrimination using a descriptive approach. Discrimination is omnipresent, whether it is people who discriminate against other people or, more recently, also machines that discriminate against people. The first part of the analysis applies decision theory to discrimination, leading to two fundamental subtypes: taste-based discrimination and statistical discrimination. The second part links taste-based discrimination to social identity theory, demonstrates that not all taste-based discrimination is ultimately statistical discrimination, and reveals the evolutionary origins of our tastes. The third part surveys how people form the beliefs underlying statistical discrimination and thereby shows that they often deviate from Bayesianism: they have inherent prior beliefs and do not exclusively update their beliefs according to Bayes’ law. Additionally, the analysis of belief formation highlights the importance of the learning environment. The last part reassembles the previously dissected aspects of discrimination, presents a new descriptive model of discrimination, and lists five implications for a normative theory of discrimination.
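    For reference, here is a minimal worked example of Bayes' law, the benchmark against which the book measures belief formation for statistical discrimination. The scenario and numbers are illustrative only, not taken from the book.

```python
# Bayesian belief updating: P(H|E) from a prior and two likelihoods.

def bayes_update(prior, likelihood, likelihood_not):
    """Return P(H|E) given P(H), P(E|H), and P(E|~H)."""
    evidence = likelihood * prior + likelihood_not * (1.0 - prior)
    return likelihood * prior / evidence

# Prior belief 0.3 that a candidate belongs to a high-productivity group;
# a positive signal occurs 80% of the time for that group, 40% otherwise.
posterior = bayes_update(0.3, 0.8, 0.4)  # ≈ 0.462
```

The deviations the book documents amount to agents holding sticky priors or weighting signals differently from this rule, so the posterior they actually form differs from the one computed here.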

    Knowledge Resistance in High-Choice Information Environments

    This book offers a truly interdisciplinary exploration of our patterns of engagement with politics, news, and information in current high-choice information environments. Putting forth the notion that high-choice information environments may contribute to increasing misperceptions and knowledge resistance rather than greater public knowledge, the book offers insights into the processes that influence the supply of misinformation and the factors influencing how and why people expose themselves to, and process, information that may support or contradict their beliefs and attitudes. A team of authors from across a range of disciplines addresses the phenomenon of knowledge resistance and its causes and consequences at the macro- as well as the micro-level. The chapters take a philosophical look at the notion of knowledge resistance, before moving on to discuss issues such as misinformation and fake news, psychological mechanisms such as motivated reasoning in processes of selective exposure and attention, how people respond to evidence and fact-checking, the role of political partisanship, political polarization over factual beliefs, and how knowledge resistance might be counteracted. This book will have a broad appeal to scholars and students interested in knowledge resistance, primarily within philosophy, psychology, media and communication, and political science, as well as to journalists and policymakers.

    Power and its Logic: Mastering Politics

    Power is the essence of politics. Whoever seeks to understand and master it must understand its logic. Drawing on two decades of international experience in political consulting, Dominik Meier and Christian Blum give profound and honest insights into the inner workings of power. Introducing their Power Leadership Approach, the authors provide a conceptual analysis of power and present the tools to successfully exercise it in the political domain. "Power and its Logic" is a guidebook for politicians, business leaders, civil society pioneers, public affairs consultants, and every citizen who wants to understand the unwritten rules of politics.