Online Misinformation: Challenges and Future Directions
Misinformation has become a common part of our digital media environments, and it is compromising the ability of our societies to form informed opinions. It generates misperceptions, which have affected decision-making processes in many domains, including the economy, health, the environment, and elections, among others. Misinformation and its generation, propagation, impact, and management are being studied through a variety of lenses (computer science, social science, journalism, psychology, etc.), since it widely affects multiple aspects of society. In this paper we analyse the phenomenon of misinformation from a technological point of view. We study the current socio-technical advancements towards addressing the problem, identify some of the key limitations of current technologies, and propose some ideas to target such limitations. The goal of this position paper is to reflect on the current state of the art and to stimulate discussions on the future design and development of algorithms, methodologies, and applications.
Social media and investigative journalism in South Africa: The extent to which investigative journalists in South Africa use social media to further their investigations, the impact and its pitfalls.
This research explores the extent to which investigative journalists in South Africa use social media applications to further their investigations. As social media applications such as Facebook and Twitter are instant tools for news agencies and reporters, investigative journalists are also benefitting from their use. This paper explores how these tools are used by investigative journalists in South Africa, for what purposes, and what challenges may arise. Emphasis is placed on Facebook and Twitter, as the research found that these are the applications most commonly used by investigative journalists in South Africa. This research is located within two theories, namely Jurgen Habermas's (1989) theory of the public sphere and John Arundel Barnes's (1954) social network theory. These theories explore how social media applications create networks that are beneficial to investigative journalists for a variety of reasons. The discussions that take place on social media applications contribute to the digital public sphere – a platform where people can come together to discuss issues of relevance to them. Investigative journalists form part of the digital public sphere, and this adds value to investigations. This research further delves into the changed relationship investigative newsrooms have with their 'audience' because of social media applications. Social media applications, such as Twitter and Facebook, have led to consumers of news no longer being passive viewers or listeners, but rather having an opportunity to voice their opinions, provide feedback, and share information that influences investigations. Both quantitative and qualitative research methods were used to ascertain which investigative journalists are using social media in their investigations, followed by in-depth interviews across the country.
Disruption and Deception in Crowdsourcing: Towards a Crowdsourcing Risk Framework
While crowdsourcing has become increasingly popular among organizations, it has also become increasingly susceptible to unethical and malicious activities. This paper discusses recent examples of disruptive and deceptive efforts on crowdsourcing sites, which impacted the confidentiality, integrity, and availability of the crowdsourcing efforts' services, stakeholders, and data. From these examples, we derive an organizing framework of risk types associated with disruption and deception in crowdsourcing, based on commonalities among incidents. The framework includes prank activities, the intentional placement of false information, hacking attempts, DDoS attacks, botnet attacks, privacy violation attempts, and data breaches. Finally, we discuss example controls that can assist in identifying and mitigating disruption and deception risks in crowdsourcing.
The Web of False Information: Rumors, Fake News, Hoaxes, Clickbait, and Various Other Shenanigans
A new era of Information Warfare has arrived. Various actors, including state-sponsored ones, are weaponizing information on Online Social Networks to run false information campaigns with targeted manipulation of public opinion on specific topics. These campaigns can have dire consequences for the public, mutating their opinions and actions, especially with respect to critical world events like major elections. Evidently, the problem of false information on the Web is a crucial one that needs increased public awareness, as well as immediate attention from law enforcement agencies, public institutions, and, in particular, the research community. In this paper, we take a step in this direction by providing a typology of the Web's false information ecosystem, comprising various types of false information, actors, and their motives. We report a comprehensive overview of existing research on the false information ecosystem by identifying several lines of work: 1) how the public perceives false information; 2) understanding the propagation of false information; 3) detecting and containing false information on the Web; and 4) false information on the political stage. In this work, we pay particular attention to political false information because: 1) it can have dire consequences for the community (e.g., when election results are swayed), and 2) previous work shows that this type of false information propagates faster and further than other types. Finally, for each of these lines of work, we report several future research directions that can help us better understand and mitigate the emerging problem of false information dissemination on the Web.
Overview of the Shared Task on Fake News Detection in Urdu at FIRE 2021
Automatic detection of fake news is a highly important task in the contemporary world. This study reports on the second shared task, UrduFake@FIRE2021, on fake news detection in Urdu. The goal of the shared task is to motivate the community to come up with efficient methods for solving this vital problem, particularly for the Urdu language. The task is posed as a binary classification problem: label a given news article as real or fake. The organizers provide a dataset comprising news in five domains: (i) Health, (ii) Sports, (iii) Showbiz, (iv) Technology, and (v) Business, split into training and testing sets. The training set contains 1,300 annotated news articles (750 real, 550 fake), while the testing set contains 300 news articles (200 real, 100 fake). 34 teams from 7 different countries (China, Egypt, Israel, India, Mexico, Pakistan, and the UAE) registered to participate in the UrduFake@FIRE2021 shared task. Of those, 18 teams submitted their experimental results, and 11 of those submitted technical reports, substantially more than in the 2020 UrduFake shared task, when only 6 teams submitted technical reports. The technical reports submitted by the participants demonstrated data representation techniques ranging from count-based BoW features to word vector embeddings, as well as numerous machine learning algorithms ranging from traditional SVMs to various neural network architectures, including Transformers such as BERT and RoBERTa. In this year's competition, the best performing system obtained an F1-macro score of 0.679, which is lower than the previous year's best result of 0.907 F1-macro. Admittedly, while the training sets from the past and current years overlap to a large extent, the testing set provided this year is completely different.
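The simplest end of the modeling spectrum mentioned above (count-based BoW features with a traditional classifier) can be sketched as follows. This is a minimal illustrative implementation, using a tiny invented English toy corpus; it is not the UrduFake dataset and does not reproduce any participant's actual system:

```python
# Minimal sketch of a count-based BoW baseline: a bag-of-words
# multinomial Naive Bayes classifier written from scratch.
# The toy corpus below is purely illustrative.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

class NaiveBayesBoW:
    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.class_counts = Counter(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        for doc, label in zip(docs, labels):
            self.word_counts[label].update(tokenize(doc))
        # Shared vocabulary across classes, used for Laplace smoothing
        self.vocab = {w for c in self.classes for w in self.word_counts[c]}
        return self

    def predict(self, doc):
        n_docs = sum(self.class_counts.values())
        scores = {}
        for c in self.classes:
            score = math.log(self.class_counts[c] / n_docs)  # log prior
            total = sum(self.word_counts[c].values())
            for w in tokenize(doc):
                # Laplace-smoothed log likelihood of each token under class c
                score += math.log((self.word_counts[c][w] + 1) / (total + len(self.vocab)))
            scores[c] = score
        return max(scores, key=scores.get)

train_docs = [
    "official sources confirm the vaccine trial results",
    "shocking secret cure doctors hide from you",
    "ministry publishes verified health statistics",
    "miracle drink destroys virus overnight click now",
]
train_labels = ["real", "fake", "real", "fake"]
clf = NaiveBayesBoW().fit(train_docs, train_labels)
print(clf.predict("doctors hide this miracle cure"))  # -> fake
```

Real submissions would replace this with Urdu tokenization and stronger representations (embeddings, Transformer encoders), but the classification framing is the same.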
FACTS-ON: Fighting Against Counterfeit Truths in Online Social Networks: fake news, misinformation and disinformation
The rapid evolution of online social networks (OSN) presents a significant challenge in identifying and mitigating false information, which includes Fake News, Disinformation, and Misinformation. This complexity is amplified in digital environments where information is quickly disseminated, requiring sophisticated strategies to differentiate between genuine and false content. One of the primary challenges in automatically detecting false information is its realistic presentation, often closely resembling verifiable facts. This poses considerable challenges for artificial intelligence (AI) systems, necessitating additional data from external sources, such as third-party verifications, to effectively discern the truth. Consequently, there is a continuous technological evolution to counter the growing sophistication of false information, challenging and advancing the capabilities of AI.
In response to these challenges, my dissertation introduces the FACTS-ON framework (Fighting Against Counterfeit Truths in Online Social Networks), a comprehensive and systematic approach to combat false information in OSNs. FACTS-ON integrates a series of advanced systems, each building upon the capabilities of its predecessor to enhance the overall strategy for detecting and mitigating false information. I begin by introducing the FACTS-ON framework, which sets the foundation for my solution, and then detail each system within the framework:
EXMULF (Explainable Multimodal Content-based Fake News Detection) focuses on analyzing both text and image in online content using advanced multimodal techniques, coupled with explainable AI to provide transparent and understandable assessments of false information.
Building upon EXMULF’s foundation, MythXpose (Multimodal Content and Social Context-based System for Explainable False Information Detection with Personality Prediction) adds a layer of social context analysis by predicting the personality traits of OSN users, enhancing the detection and early intervention strategies against false information.
ExFake (Explainable False Information Detection Based on Content, Context, and External Evidence) further expands the framework, combining content analysis with insights from social context and external evidence. It leverages data from reputable fact-checking organizations and official social accounts, ensuring a more comprehensive and reliable approach to the detection of false information. ExFake's sophisticated methodology not only evaluates the content of online posts but also considers the broader context and corroborates information with external, credible sources, thereby offering a well-rounded and robust solution for combating false information in online social networks.
Completing the framework, AFCC (Automated Fact-checkers Consensus and Credibility) addresses the heterogeneity of ratings from various fact-checking organizations. It standardizes these ratings and assesses the credibility of the sources, providing a unified and trustworthy assessment of information.
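The standardization step that AFCC addresses can be illustrated with a small sketch. The label-to-score mapping and the credibility weights below are invented for illustration; they are not the dissertation's actual design:

```python
# Hypothetical sketch of mapping heterogeneous fact-checker verdicts onto a
# common [0, 1] truthfulness scale and combining them into a
# credibility-weighted consensus. All labels and weights are illustrative.
SCALE = {
    "pants-fire": 0.0, "false": 0.0, "mostly-false": 0.25,
    "half-true": 0.5, "mixture": 0.5, "mostly-true": 0.75,
    "true": 1.0,
}

def standardize(label: str) -> float:
    """Map a checker-specific verdict label onto the common scale."""
    return SCALE[label.lower()]

def consensus(ratings: dict, credibility: dict) -> float:
    """Credibility-weighted average of standardized ratings.

    ratings:     {checker_name: verdict_label}
    credibility: {checker_name: weight in (0, 1]}
    """
    total = sum(credibility[c] for c in ratings)
    return sum(standardize(v) * credibility[c] for c, v in ratings.items()) / total

score = consensus({"CheckerA": "True", "CheckerB": "Half-True"},
                  {"CheckerA": 1.0, "CheckerB": 0.5})
print(round(score, 3))  # -> 0.833
```

The point of such a scheme is that once verdicts live on one scale, disagreement between checkers becomes a weighted-averaging problem rather than a label-matching one.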
Each system within the FACTS-ON framework is rigorously evaluated to demonstrate its effectiveness in combating false information on OSNs. This dissertation details the development, implementation, and comprehensive evaluation of these systems, highlighting their collective contribution to the field of false information detection. The research not only showcases the current capabilities in addressing false information but also sets the stage for future advancements in this critical area of study.
Surmounting the Verification Barrier Between the Field of Professional Human Rights Fact-Finding and the Non-Field of Digital Civilian Witnessing
This is the author accepted manuscript. The final version is available in "Produsing Theory in a Digital World 2.0: The Intersection of Audiences and Production in Contemporary Theory. Volume 2". This work was supported by the Economic and Social Research Council (grant number ES/K009850/1) and by the Isaac Newton Trust.
Issues of Fact-based Information Analysis
With the recent growth of the Internet, mobile networks, and social networks, the spread of fake news and clickbait has increased drastically. Today, a fact retrieval system is one of the most effective tools for identifying information for decision-making. We propose an approach based on the systematization of factual information. Different interpretations of the same phenomenon, as well as inconsistency, inaccuracy, or mismatch in information coming from different sources, lead to the task of factual information extraction. In this work, we explore how natural language processing methods can help to check contradictions and mismatches in facts automatically. A reference model of the fact-based analytical system is proposed. It consists of five basic components: Document Search, Fact Retrieval, Fact Analysis, Visualization, and Control.
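The flow through the first three components of such a reference model might be wired together as in the following sketch. The data shapes, the pre-annotated facts, and the contradiction rule are assumptions made for illustration, not the paper's actual implementation:

```python
# Illustrative sketch of a fact-based pipeline: Document Search ->
# Fact Retrieval -> Fact Analysis. Interfaces and data shapes are assumed.
from dataclasses import dataclass

@dataclass
class Fact:
    subject: str
    predicate: str
    value: str
    source: str

def document_search(query, corpus):
    # Document Search component: naive keyword match (placeholder logic)
    return [doc for doc in corpus if query.lower() in doc["text"].lower()]

def fact_retrieval(docs):
    # Fact Retrieval component: here facts are pre-annotated on the documents;
    # a real system would extract them with NLP
    return [Fact(source=doc["id"], **f) for doc in docs for f in doc["facts"]]

def fact_analysis(facts):
    # Fact Analysis component: flag mismatching values for the same
    # (subject, predicate) pair coming from different sources
    seen, contradictions = {}, []
    for f in facts:
        key = (f.subject, f.predicate)
        if key in seen and seen[key].value != f.value:
            contradictions.append((seen[key], f))
        seen.setdefault(key, f)
    return contradictions

corpus = [
    {"id": "src1", "text": "Report on city population",
     "facts": [{"subject": "city", "predicate": "population", "value": "1.2M"}]},
    {"id": "src2", "text": "City population survey",
     "facts": [{"subject": "city", "predicate": "population", "value": "0.9M"}]},
]
conflicts = fact_analysis(fact_retrieval(document_search("population", corpus)))
print(len(conflicts))  # -> 1
```

The Visualization and Control components would then surface such conflicting fact pairs to an analyst rather than resolve them automatically.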
Fact-checking Literacy of Covid-19 Infodemic on Social Media in Indonesia
The massive spread of COVID-19 hoaxes since 2020 is a problem that every country, including Indonesia, must face. In anticipation of this, the West Java government conducted a fact-checking process via the multiplatform Instagram account @jabarsaberhoaks. This study aims to identify the fact-checking process carried out by the @jabarsaberhoaks account, using content analysis of 334 posts about COVID-19 hoaxes during 2020. The results show that @jabarsaberhoaks fact-checked various hoax themes by consulting official sources such as media outlets, authorized agencies, and expert sources, both national and international. Corrections and clarifications of COVID-19 hoaxes were then published on Instagram. These findings suggest that stakeholders should intensify their campaigns and educate the community in COVID-19 information literacy, so that people can understand and respond to the COVID-19 phenomenon adequately.