The polysemy of ‘fallacy’—or ‘bias’, for that matter
Starting with a brief overview of current usages (Sect. 2), this paper offers some constituents of a use-based analysis of ‘fallacy’, listing 16 conditions that have been discussed in the literature, for the most part implicitly (Sect. 3). Our thesis is that at least three related conceptions of ‘fallacy’ can be identified. The 16 conditions thus serve to “carve out” a semantic core and to distinguish three core-specifications. As our discussion suggests, these specifications can be related to three normative positions in the philosophy of human reasoning: the meliorist, the apologist, and the Panglossian (Sect. 4). Seeking to make these conditions available for scholarly discussion, this analysis-sketch should not be viewed as final or exhaustive.
Developing fake news immunity: fallacies as misinformation triggers during the pandemic
Misinformation constitutes one of the main challenges in countering the infodemic: misleading news, even if not blatantly false, can cause harm, especially in crisis scenarios such as the pandemic. Due to the fast proliferation of information across digital media, human fact-checkers struggle to keep up with fake news, while automatic fact-checkers are unable to identify the grey area of misinformation. We thus propose to reverse engineer the manipulation of information, offering citizens the means to become their own fact-checkers through digital literacy and critical thinking. Through a corpus analysis of fact-checked news about COVID-19, we identify 10 fallacies – arguments which seem valid but are not – that systematically trigger misinformation, and we offer a systematic procedure to identify them. Alongside fallacies, we examine the types of sources associated with (mis-/dis-)information in our dataset, as well as the types of claims making up the headlines. These three levels of analysis reveal a misinformation ecosystem in which developing the audience’s digital literacy is necessary to guarantee fake news immunity.
Computational Models of Argument Structure and Argument Quality for Understanding Misinformation
With the continuing spread of misinformation and disinformation online, it is increasingly important to develop countermeasures at scale, in the form of automated systems that can find checkworthy information, detect fallacious argumentation in online content, retrieve relevant evidence from authoritative sources, and analyze the veracity of claims given the retrieved evidence. The robustness and applicability of these systems depend on the availability of annotated resources for training machine learning models in a supervised fashion, as well as on models that capture patterns beyond domain-specific lexical clues or genre-specific stylistic insights. In this thesis, we investigate the role of models of argument structure and argument quality in improving tasks relevant to fact-checking and in furthering our understanding of misinformation and disinformation. We contribute to argumentation mining, misinformation detection, and fact-checking by releasing multiple annotated datasets, developing unified models across datasets and task formulations, and analyzing the vulnerabilities of such models in adversarial settings.
We start by studying the role of argument structure in two downstream tasks related to fact-checking. As it is essential to differentiate factual knowledge from opinionated text, we develop a model for detecting the type of news articles (factual or opinionated) using highly transferable argumentation-based features. We also show the potential of argumentation features to predict the checkworthiness of information in news articles, and we provide the first multi-layer annotated corpus for argumentation and fact-checking.
We then study qualitative aspects of arguments through models for fallacy recognition. To understand the reasoning behind checkworthiness and the relation of argumentative fallacies to fake content, we develop an annotation scheme of fallacies in fact-checked content and investigate avenues for automating the detection of such fallacies considering single- and multi-dataset training. Using instruction-based prompting, we introduce a unified model for recognizing twenty-eight fallacies across five fallacy datasets. We also use this model to explain the checkworthiness of statements in two domains.
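As an illustration of the instruction-based prompting approach described above, the following sketch shows how a single instruction prompt for fallacy recognition might be assembled from a statement and a shared label inventory. This is a minimal, hypothetical example, not the thesis's actual implementation: the fallacy labels, template wording, and function name are all assumptions.

```python
# Illustrative sketch of instruction-based prompting for fallacy recognition.
# The label set and template below are hypothetical, chosen only to show the
# general shape of a unified prompt across datasets.

FALLACY_LABELS = [
    "ad hominem",
    "appeal to authority",
    "appeal to emotion",
    "false dilemma",
    "hasty generalization",
    "slippery slope",
]

def build_fallacy_prompt(statement: str, labels=FALLACY_LABELS) -> str:
    """Compose one instruction prompt asking a model to pick a fallacy label."""
    options = "\n".join(f"- {label}" for label in labels)
    return (
        "Instruction: Identify the fallacy, if any, in the statement below.\n"
        f"Choose exactly one label from this list:\n{options}\n\n"
        f"Statement: {statement}\n"
        "Answer:"
    )

prompt = build_fallacy_prompt("Everyone I know got better, so the cure works.")
print(prompt)
```

In a unified multi-dataset setting, the same template would be reused across corpora, with only the label inventory and statement varying per dataset.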
Next, we present our models for end-to-end fact-checking of statements, which involves finding the relevant evidence document and sentence within a collection of documents and then predicting the veracity of the given statements using the retrieved evidence. We also analyze the robustness of end-to-end fact extraction and verification by generating adversarial statements and identifying areas for improvement for models under adversarial attack. Finally, we show that evidence-based verification is essential for fine-grained claim verification by modeling the human-provided justifications together with the gold veracity labels.
1995-1999 Brock News
A compilation of the administration newspaper, Brock News, for the years 1995 through 1999. It was previously titled Brock Campus News and, before that, The Blue Badger.