Sentiment Analysis for Fake News Detection
[Abstract] In recent years, we have witnessed a rise in fake news, i.e., provably false pieces of information created with the intention of deception. The dissemination of this type of news poses a serious threat to social cohesion and well-being, since it fosters political polarization and people's distrust of their leaders. The huge amount of news disseminated through social media makes manual verification infeasible, which has promoted the design and implementation of automatic systems for fake news detection. The creators of fake news use various stylistic tricks to promote the success of their creations, one of them being to excite the sentiments of the recipients. This has led to sentiment analysis, the branch of text analytics in charge of determining the polarity and strength of the sentiments expressed in a text, being used in fake news detection approaches, either as the basis of the system or as a complementary element. In this article, we study the different
uses of sentiment analysis in the detection of fake news, with a discussion of the most relevant elements and shortcomings, and the requirements that should be met in the near future, such as multilingualism, explainability, mitigation of biases, or treatment of multimedia elements.

This work has been funded by FEDER/Ministerio de Ciencia, Innovación y Universidades — Agencia Estatal de Investigación through the ANSWERASAP project (TIN2017-85160-C2-1-R); and by Xunta de Galicia through a Competitive Reference Group grant (ED431C 2020/11). CITIC, as a Research Center of the Galician University System, is funded by the Consellería de Educación, Universidade e Formación Profesional of the Xunta de Galicia through the European Regional Development Fund (ERDF/FEDER), which covers 80% under the Galicia ERDF 2014-20 Operational Programme, with the remaining 20% from the Secretaría Xeral de Universidades (ref. ED431G 2019/01). David Vilares is also supported by a 2020 Leonardo Grant for Researchers and Cultural Creators from the BBVA Foundation. Carlos Gómez-Rodríguez has also received funding from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant No. 714150).
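To illustrate the complementary use of sentiment signals discussed above, the following minimal sketch combines lexicon-derived polarity and strength scores with standard text features in a fake-news classifier. It is not the system of any surveyed work; the tiny lexicon, example texts, and labels are purely illustrative assumptions.

    # Minimal sketch: sentiment polarity/strength as complementary features
    # for a fake-news classifier. The tiny lexicon and data are illustrative only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from scipy.sparse import hstack, csr_matrix

    # Toy polarity lexicon standing in for a real sentiment resource.
    LEXICON = {"shocking": -0.8, "outrage": -0.9, "miracle": 0.7,
               "disaster": -0.9, "calm": 0.4, "confirmed": 0.3}

    def sentiment_features(text):
        """Return (polarity, strength) aggregated over lexicon hits."""
        scores = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
        if not scores:
            return [0.0, 0.0]
        polarity = sum(scores) / len(scores)
        strength = sum(abs(s) for s in scores) / len(scores)
        return [polarity, strength]

    texts = ["Shocking miracle cure confirmed by outrage",          # fake-style
             "The ministry published the calm confirmed figures"]   # real-style
    labels = [1, 0]  # 1 = fake, 0 = real (toy labels)

    tfidf = TfidfVectorizer()
    X_text = tfidf.fit_transform(texts)
    X_sent = csr_matrix([sentiment_features(t) for t in texts])
    X = hstack([X_text, X_sent])          # content features + sentiment signals

    clf = LogisticRegression().fit(X, labels)
    print(clf.predict(X))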
Detecting Influencers in Social Media Discussions
In the past decade we have been privileged to witness the creation and revolution of social media on the World Wide Web. The abundance of content available on the web allows us to analyze, on a large scale, the way people interact and the roles they play in a conversation. One such role is that of influencer in the conversation. Detecting influencers can be useful for successful advertisement strategies, for detecting terrorist leaders, and for political campaigning.
We explore influence in discussion forums, weblogs, and micro-blogs using several components that have been found to be indicators of influence. Our components are author traits, agreement, claims, argumentation, persuasion, credibility, and certain dialog patterns. In the first portion of this thesis we describe each of our system components. Each of these components is motivated by social science through Robert Cialdini’s “Weapons of Influence” [Cialdini, 2007]. The weapons of influence are Reciprocation, Commitment and Consistency, Social Proof, Liking, Authority, and Scarcity. We then show the method and experiments for classifying each component.
In the second part of this thesis we classify influencers across five online genres and analyze which features are most indicative of influencers in each genre. The online genres we explore are Wikipedia Talk Pages, LiveJournal weblogs, Political Forum discussions, Create Debate debate discussions, and Twitter microblog conversations. First, we describe a rich suite of features that were generated using each of the system components. Then, we describe our experiments and results including using domain adaptation to exploit the data from multiple online genres. Finally, we also provide a detailed analysis of a single weapon of influence, social proof, and its impact in detecting influence in Wikipedia Talk Pages. This provides a single example of the usefulness of providing comprehensive components in the detection of influence.
The contributions of this thesis include a system for predicting who the influencers are in online discussion forums. We provide an evaluation of a rich set of features inspired by social science. In our system, each feature set used to detect influence is complex and computed by a system component. This allows us to provide a detailed analysis of why a person was identified as an influencer. We also provide a comparison of differences across several online discussion datasets and exploit the differences across the different genres to provide further improvements in influence detection.
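As a rough illustration of the component-based design described above, the sketch below concatenates feature vectors from a few hypothetical component scorers (author traits, agreement, claims) and trains an influencer classifier on them. The scorers, data, and labels are placeholders, not the thesis's actual components.

    # Sketch: combining per-component feature sets (one feature set per system
    # component) into a single influencer classifier. The component scorers
    # below are hypothetical stand-ins, not the original ones.
    from sklearn.ensemble import RandomForestClassifier

    def author_trait_features(posts):      # e.g. post count, mean post length
        return [len(posts), sum(len(p.split()) for p in posts) / max(len(posts), 1)]

    def agreement_features(posts):         # crude count of agreement markers
        return [sum(p.lower().count("i agree") for p in posts)]

    def claim_features(posts):             # crude count of assertive markers
        return [sum(p.lower().count("clearly") for p in posts)]

    def featurize(participant_posts):
        return (author_trait_features(participant_posts)
                + agreement_features(participant_posts)
                + claim_features(participant_posts))

    # Toy training data: posts per participant, 1 = influencer, 0 = not.
    participants = [["I agree with the proposal.", "Clearly we should merge the pages."],
                    ["ok"],
                    ["Clearly this source is unreliable.", "Clearly we need a vote.",
                     "I agree the lead needs work."]]
    labels = [0, 0, 1]

    X = [featurize(p) for p in participants]
    clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, labels)
    print(clf.predict(X))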
Credibility analysis of textual claims with explainable evidence
Despite being a vast resource of valuable information, the Web has been polluted by the spread of false claims. Increasing hoaxes, fake news, and misleading information on the Web have given rise to many fact-checking websites that manually assess these doubtful claims. However, the rapid speed and large scale of misinformation spread have become the bottleneck for manual verification. This calls for credibility assessment tools that can automate this verification process. Prior works in this domain make strong assumptions about the structure of the claims and the communities where they are made. Most importantly, black-box techniques proposed in prior works lack the ability to explain why a certain statement is deemed credible or not. To address these limitations, this dissertation proposes a general framework for automated credibility assessment that does not make any assumption about the structure or origin of the claims. Specifically, we propose a feature-based model, which automatically retrieves relevant articles about the given claim and assesses its credibility by capturing the mutual interaction between the language style of the relevant articles, their stance towards the claim, and the trustworthiness of the underlying web sources. We further enhance our credibility assessment approach and propose a neural-network-based model. Unlike the feature-based model, this model does not rely on feature engineering and external lexicons. Both our models make their assessments interpretable by extracting explainable evidence from judiciously selected web sources.
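A minimal sketch of the feature-based idea described above: for each retrieved article, derive language-style, stance, and source-trust signals, aggregate them per claim, and feed them to a classifier. The scorers, trust priors, and example data below are hypothetical stand-ins, not the dissertation's actual model.

    # Sketch of the feature-based credibility idea: per retrieved article,
    # combine language-style, stance, and source-trust signals, then aggregate
    # per claim. All scorers and the article list are hypothetical placeholders.
    from sklearn.linear_model import LogisticRegression

    SOURCE_TRUST = {"example-news.org": 0.9, "random-blog.example": 0.2}  # toy prior

    def style_score(article_text):
        # Placeholder for a language-style model (e.g. objectivity of wording).
        hedges = ("reportedly", "allegedly", "sources say")
        return 1.0 - min(sum(article_text.lower().count(h) for h in hedges) / 3.0, 1.0)

    def stance_score(article_text, claim):
        # Placeholder stance classifier: +1 supports, -1 refutes (claim unused here).
        if "not" in article_text.lower() or "false" in article_text.lower():
            return -1.0
        return 1.0

    def claim_features(claim, articles):
        feats = []
        for source, text in articles:
            feats.append([style_score(text),
                          stance_score(text, claim),
                          SOURCE_TRUST.get(source, 0.5)])
        # Average the per-article evidence into one claim-level feature vector.
        return [sum(col) / len(col) for col in zip(*feats)]

    claims = [("The bridge collapsed on Monday",
               [("example-news.org", "Officials confirmed the bridge collapsed."),
                ("random-blog.example", "Reportedly the bridge collapsed, sources say.")]),
              ("Vaccines contain microchips",
               [("example-news.org", "Experts say the claim is false, not supported.")])]
    labels = [1, 0]  # 1 = credible, 0 = not credible (toy labels)

    X = [claim_features(c, arts) for c, arts in claims]
    clf = LogisticRegression().fit(X, labels)
    print(clf.predict_proba(X)[:, 1])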
We utilize our models and develop a Web interface, CredEye, which enables users to automatically assess the credibility of a textual claim and inspect the assessment by browsing through judiciously and automatically selected evidence snippets. In addition, we study the problem of stance classification and propose a neural-network-based model for predicting the stance of diverse user perspectives regarding controversial claims. Given a controversial claim and a user comment, our stance classification model predicts whether the user comment supports or opposes the claim.
From Language to the Real World: Entity-Driven Text Analytics
This study focuses on the modeling of the underlying structured semantic information in natural language text to predict real-world phenomena. The thesis of this work is that a general and uniform representation of linguistic information that combines multiple levels, such as semantic frames and roles, syntactic dependency structure, lexical items and their sentiment values, can support challenging classification tasks for NLP problems. The hypothesis behind this work is that it is possible to generate a document representation using more complex data structures, such as trees and graphs, to distinguish the depicted scenarios and semantic roles of the entity mentions in text, which can facilitate text mining tasks by exploiting deeper semantic information. The testbed for the document representation is entity-driven text analytics, a recent area of active research where large collections of documents are analyzed to study and make predictions about real-world outcomes of the entity mentions in text. The hypothesis is that prediction will be more successful if the representation captures not only the actual words and grammatical structures but also the underlying semantic generalizations encoded in frame semantics and the dependency relations among frames and words.
The main contributions of this study include the demonstration of the benefits of frame semantic features and how to use them in document representation. Novel tree- and graph-structured representations are proposed to model mentioned entities by incorporating different levels of linguistic information, such as lexical items, syntactic dependencies, and semantic frames and roles. For machine learning on graphs, we propose a Node Edge Weighting graph kernel that allows a recursive computation on the substructures of graphs, exploring an exponential number of subgraphs for fine-grained feature engineering. We demonstrate the effectiveness of our model in predicting the price movement of companies in different market sectors solely based on financial news. Based on a comprehensive comparison between different structures of document representation and their corresponding learning methods, e.g., vector, tree, and graph space models, we find that applying rich semantic feature learning on trees and graphs can lead to high prediction accuracy and interpretable features for problem understanding.
Two key questions motivate this study: (1) Can semantic parsing based on frame semantics, a lexical conceptual representation that captures underlying semantic similarities (scenarios) across different forms, be exploited for prediction tasks where information is derived from large scale document collections? (2) Given alternative data structures to represent the underlying meaning captured in frame semantics, which data structure will be most effective? To address (1), sentences that have dependency parses and frame semantic parses, and specialized lexicons that incorporate aspects of sentiment in words, will be used to generate representations that include individual lexical items, sentiment of lexical items, semantic frames and roles, syntactic dependency information and other structural relations among words and phrases within the sentence. To address (2), we incorporate the information derived from semantic frame parsing, dependency parsing, and specialized lexicons into vector space, tree space and graph space representations, and kernel methods for the corresponding data structures are used for SVM (support vector machine) learning to compare their predictive power.
A vector space model beyond bag-of-words is first presented. It is based on a combination of semantic frame attributes, n-gram lexical items, and part-of-speech specific words weighted by a psycholinguistic dictionary. The second model encompasses a semantic tree representation that encodes the relations among semantic frame features and, in particular, the roles of the entity mentions in text. It depends on tree kernel functions for machine learning. The third is a semantic graph model that provides a concise and convenient representation of linguistic semantic information. It subsumes the vector space model and the semantic tree model by using a graph data structure for a unified representation for semantic frames, lexical items, and syntactic dependency relations derived from frame parses and dependency parses of sentences.
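The common machinery behind the tree and graph models is kernel-based SVM learning. The sketch below shows that pattern with a precomputed Gram matrix; the toy kernel merely counts shared node labels and only stands in for the tree kernels and the Node Edge Weighting graph kernel described above.

    # Sketch of the general pattern used for the tree/graph models: compute a
    # kernel (Gram) matrix over structured document representations and feed it
    # to an SVM. The toy kernel below just counts shared node labels.
    import numpy as np
    from sklearn.svm import SVC

    # Toy "graphs": each document is a set of frame/lexical node labels.
    docs = [{"Commerce_buy", "acquire", "nsubj:CompanyA"},
            {"Commerce_sell", "divest", "nsubj:CompanyA"},
            {"Commerce_buy", "purchase", "nsubj:CompanyB"}]
    labels = [1, 0, 1]  # e.g. 1 = price up, 0 = price down (toy labels)

    def node_overlap_kernel(a, b):
        return float(len(a & b))

    def gram_matrix(items_a, items_b):
        return np.array([[node_overlap_kernel(a, b) for b in items_b] for a in items_a])

    K_train = gram_matrix(docs, docs)
    svm = SVC(kernel="precomputed").fit(K_train, labels)

    # At prediction time the kernel is computed between test and training items.
    K_test = gram_matrix(docs, docs)
    print(svm.predict(K_test))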
The general goal of this study is to ground information derived from NLP techniques applied to textual datasets in real-world observations, where natural language semantics is used as a means to learn the semantic relations that are important in the domain and to understand what is relevant for the practitioner's objectives of interest. Experiments are conducted in a financial domain to investigate whether our computational linguistic methodologies applied to large-scale analysis of financial news can improve the understanding of a company's fundamental market value, and whether linguistic information derived from news produces a consistent enough result to benefit more comprehensive financial models. Stock price data is aligned with news articles, and two kinds of labels are assigned: the existence of a price change and the direction of change. The change-in-price and polarity tasks are formulated as binary classification problems and bipartite ranking problems. Using the bag-of-words model and the proposed vector space model as benchmarks, the experiments show a significant improvement from the use of the semantic tree model. The semantic graph model, with more expressive power, outperforms both the vector space model and the tree space model. At best, there may be a weak predictive effect of news on price for a particular data instance (for example, a company on a given date), owing to fluctuating uncertainty in financial markets and the efficient market hypothesis. However, the proposed models and their outputs can provide useful information to guide financial market price prediction and to help business analysts discover potential investment opportunities. These advantages come from the rich expressive power of the semantic tree model and the semantic graph space model, since the models are able to learn the semantic relations that are important in the problem domain and effectively discover useful underlying structured semantic information from large-scale textual data.
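A small sketch of the label construction described above, under assumed toy prices: news dates are aligned with closing prices and each instance receives two binary labels, the existence of a price change and the direction of change. The threshold, prices, and company are illustrative, not those used in the study.

    # Sketch of aligning news dates with prices and assigning the two binary
    # labels described above. The price series, threshold, and company are toys.
    prices = {  # closing price per (company, date)
        ("ACME", "2015-03-02"): 100.0,
        ("ACME", "2015-03-03"): 101.5,
        ("ACME", "2015-03-04"): 101.4,
    }
    news = [("ACME", "2015-03-02", "ACME announces record quarterly earnings"),
            ("ACME", "2015-03-03", "ACME holds its annual shareholder meeting")]

    CHANGE_THRESHOLD = 0.005  # a 0.5% move counts as "a change" (arbitrary choice)

    def label(company, date, next_date):
        p0, p1 = prices[(company, date)], prices[(company, next_date)]
        rel = (p1 - p0) / p0
        changed = abs(rel) >= CHANGE_THRESHOLD      # existence of a price change
        direction = 1 if rel > 0 else 0             # direction of change
        return changed, direction

    print(label("ACME", "2015-03-02", "2015-03-03"))  # (True, 1)
    print(label("ACME", "2015-03-03", "2015-03-04"))  # (False, 0)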
EVALITA. Evaluation of NLP and Speech Tools for Italian: Proceedings of the Final Workshop
Editor of the proceedings of EVALITA 2016
Sentiment analysis of patient feedback
The application of sentiment analysis as a method for the automatic categorisation of opinions in text has grown increasingly popular across a number of domains over the past few years. In particular, health services have started to consider sentiment analysis as a solution for the task of processing the ever-growing amount of feedback received about patient care. However, the domain is relatively under-studied with regard to the application of this technology, and the effectiveness and performance of methods have not been substantially demonstrated.
Beginning with a survey of sentiment analysis and an examination of the work undertaken so far in the clinical domain, this thesis examines the application of supervised machine learning models to the classification of sentiment in patient feedback. As a starting point, this requires a suitably annotated patient feedback dataset, for both analysis and experimentation. Following the construction and detailed analysis of such a resource, a series of machine learning experiments study the impact of different models, features and review types on the problem. These experiments examine the applicability of the selected methods and demonstrate that model and feature choice may not be a significant issue in sentiment classification, whereas the type of review that the models train and test across does affect the outcome of classification. Finally, by examining the role that responses play in the patient feedback process and developing the idea of incorporating the inter-document context provided by the response into the feedback classification process, a recalibration framework for [continued…
Macro-micro approach for mining public sociopolitical opinion from social media
During the past decade, we have witnessed the emergence of social media, which has gained prominence as a means for the general public to exchange opinions on a broad range of topics. Furthermore, its social and temporal dimensions make it a rich resource for policy makers and organisations seeking to understand public opinion. In this thesis, we present our research on understanding public opinion on Twitter along three dimensions: sentiment, topics and summaries.
In the first line of our work, we study how to classify public sentiment on Twitter. We focus on the task of multi-target-specific sentiment recognition on Twitter, and propose an approach which utilises the syntactic information from the parse tree in conjunction with the left-right context of the target. We show state-of-the-art performance on two datasets, including a multi-target Twitter corpus on UK elections which we make publicly available for the research community. Additionally, we conduct two preliminary studies: cross-domain emotion classification on discourse around arts and cultural experiences, and social spam detection to improve the signal-to-noise ratio of our sentiment corpus.
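A minimal sketch of the left-right context idea, assuming a simple bag-of-words stand-in for the syntactic (parse-tree) component: the tweet is split around the target mention and each side is represented separately before classification. The tweets and labels are illustrative only.

    # Sketch of the left/right-context idea for target-dependent sentiment:
    # split the tweet around the target mention and represent each side
    # separately. A bag-of-words model stands in for the syntactic component.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from scipy.sparse import hstack

    def split_on_target(tweet, target):
        left, _, right = tweet.lower().partition(target.lower())
        return left.strip(), right.strip()

    tweets = [("The NHS is brilliant but the Tories are ruining it", "NHS"),
              ("The NHS is brilliant but the Tories are ruining it", "Tories"),
              ("Labour's plan for the NHS looks sensible", "Labour")]
    labels = ["positive", "negative", "positive"]

    lefts, rights = zip(*(split_on_target(t, tgt) for t, tgt in tweets))
    vec_left, vec_right = CountVectorizer(), CountVectorizer()
    X = hstack([vec_left.fit_transform(lefts), vec_right.fit_transform(rights)])

    clf = LogisticRegression().fit(X, labels)
    print(clf.predict(X))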
Our second line of work focuses on automatic topical clustering of tweets. Our aim is to group tweets into a number of clusters, with each cluster representing a meaningful topic, story, event or a reason behind a particular choice of sentiment. We explore various ways of tackling this challenge and propose a two-stage hierarchical topic modelling system that is efficient and effective in achieving our goal.
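A generic two-stage clustering sketch in the spirit described above, assuming TF-IDF vectors and k-means at both stages (the thesis's actual system is not reproduced here): a coarse pass over all tweets followed by a finer pass within each coarse cluster.

    # Sketch of a generic two-stage clustering scheme: a coarse first pass over
    # TF-IDF vectors, then a finer second pass within each coarse cluster.
    # Parameters and data are illustrative only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    tweets = ["election results announced tonight",
              "polls close at ten for the election",
              "great goal in the football match",
              "what a match, brilliant football tonight",
              "new phone released with better camera",
              "camera on the new phone looks great"]

    vec = TfidfVectorizer()
    X = vec.fit_transform(tweets)

    # Stage 1: coarse clusters over all tweets.
    coarse = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Stage 2: refine each coarse cluster into finer topics.
    fine = {}
    for c in set(coarse):
        idx = [i for i, lab in enumerate(coarse) if lab == c]
        sub = KMeans(n_clusters=min(2, len(idx)), n_init=10, random_state=0)
        fine[c] = dict(zip(idx, sub.fit_predict(X[idx])))

    print(coarse, fine)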
Lastly, for our third line of work, we study the task of summarising tweets on common topics, with the goal of providing informative summaries of real-world events/stories or explanations of the sentiment expressed towards an issue/entity. As most existing tweet summarisation approaches rely on extractive methods, we propose to apply a state-of-the-art neural abstractive summarisation model to tweets. We also tackle the challenge of cross-medium supervised summarisation with no target-medium training resources. To the best of our knowledge, there is no existing work studying neural abstractive summarisation on tweets. In addition, we present a system for providing interactive visualisation of topic-entity sentiments and the corresponding summaries in chronological order.
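For illustration only, the snippet below applies an off-the-shelf abstractive summarisation model to a bundle of topically related tweets via the Hugging Face pipeline API. The checkpoint name is just a common public example and is not the model or training setup used in the thesis.

    # Illustration only: applying an off-the-shelf abstractive summarisation
    # model to a bundle of topically related tweets.
    from transformers import pipeline

    tweets_on_topic = [
        "Huge queues outside polling stations this morning.",
        "Turnout reported to be the highest in twenty years.",
        "Officials say final results expected late tonight.",
    ]

    summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
    document = " ".join(tweets_on_topic)
    summary = summarizer(document, max_length=30, min_length=5, do_sample=False)
    print(summary[0]["summary_text"])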
Throughout the work presented in this thesis, we conduct experiments to evaluate and verify the effectiveness of our proposed models, comparing them to relevant baseline methods. Most of our evaluations are quantitative; however, we also perform qualitative analyses where appropriate. This thesis provides insights and findings that can be used to better understand public opinion in social media.
Fine-Grained Emotion Detection in Microblog Text
Automatic emotion detection in text is concerned with using natural language processing techniques to recognize emotions expressed in written discourse. Endowing computers with the ability to recognize emotions in a particular kind of text, microblogs, has important applications in sentiment analysis and affective computing. In order to build computational models that can recognize the emotions represented in tweets we need to identify a set of suitable emotion categories. Prior work has mainly focused on building computational models for only a small set of six basic emotions (happiness, sadness, fear, anger, disgust, and surprise). This thesis describes a taxonomy of 28 emotion categories, an expansion of these six basic emotions, developed inductively from data. This set of 28 emotion categories represents a set of fine-grained emotion categories that are representative of the range of emotions expressed in tweets, microblog posts on Twitter.
The ability of humans to recognize these fine-grained emotion categories is characterized using inter-annotator reliability measures based on annotations provided by expert and novice annotators. A set of 15,553 human-annotated tweets forms a gold standard corpus, EmoTweet-28. For each emotion category, we have extracted a set of linguistic cues (i.e., punctuation marks, emoticons, emojis, abbreviated forms, interjections, lemmas, hashtags and collocations) that can serve as salient indicators for that emotion category.
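A minimal sketch of how such cue inventories might be turned into features for emotion classification: count, per emotion category and cue type, how many cues occur in a tweet. The tiny cue lists, tweets, and labels below are toy stand-ins for EmoTweet-28 and its lexicon.

    # Sketch of cue-based features: count how many of each category's cues
    # (emoticons, hashtags, interjections, lemmas, ...) appear in a tweet.
    from sklearn.linear_model import LogisticRegression

    CUES = {
        "joy":   {"emoticons": [":)", ":D"], "hashtags": ["#blessed"],
                  "lemmas": ["happy", "love"], "interjections": ["yay", "woohoo"]},
        "anger": {"emoticons": [">:("],      "hashtags": ["#fail"],
                  "lemmas": ["hate", "angry"], "interjections": ["ugh", "argh"]},
    }

    def cue_features(tweet):
        text = tweet.lower()
        feats = []
        for emotion in sorted(CUES):
            for cue_type in sorted(CUES[emotion]):
                feats.append(sum(text.count(cue.lower()) for cue in CUES[emotion][cue_type]))
        return feats

    tweets = ["Yay, I love this so much :) #blessed",
              "Ugh I hate Mondays >:( #fail",
              "So happy right now :D"]
    labels = ["joy", "anger", "joy"]

    X = [cue_features(t) for t in tweets]
    clf = LogisticRegression().fit(X, labels)
    print(clf.predict(X))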
We evaluated the performance of automatic classification techniques on the set of 28 emotion categories through a series of experiments using several classifier and feature combinations. Our results show that it is feasible to extend machine learning classification to fine-grained emotion detection in tweets (i.e., as many as 28 emotion categories) with results that are comparable to state-of-the-art classifiers that detect six to eight basic emotions in text. Classifiers using features extracted from the linguistic cues associated with each category match or exceed the performance of conventional corpus-based and lexicon-based features for fine-grained emotion classification.
This thesis makes an important theoretical contribution in the development of a taxonomy of emotion in text. In addition, this research also makes several practical contributions, particularly in the creation of language resources (i.e., corpus and lexicon) and machine learning models for fine-grained emotion detection in text.
Empirical Evaluation Methodology for Target Dependent Sentiment Analysis
The area of sentiment analysis has been around for at least 20 years in one form or another. In that time it has found many and varied applications, ranging from predicting film successes to social media analytics, and it has gained widespread use through being sold as a tool via application programming interfaces. The focus of this thesis is not on applications but on novel evaluation methodology for the most fine-grained form of sentiment analysis, target dependent sentiment analysis (TDSA). TDSA has seen a recent upsurge, but to date most research evaluates only on very similar datasets, which limits the conclusions that can be drawn. Further, most research improves results only marginally, chasing the state of the art (SOTA), and these prior works cannot empirically show where their improvements come from beyond overall metrics and small qualitative examples.

Through an extensive literature review covering the different granularities of sentiment analysis, from coarse (document level) to fine grained, a new and extended definition of fine-grained sentiment analysis, the hextuple, is created, which removes ambiguities that can arise from context. In addition, examples from the literature are provided where studies can be neither replicated nor reproduced. This thesis includes the largest empirical analysis to date on six English datasets across multiple existing neural and non-neural methods, allowing the methods to be tested for generalisability. These experiments show that factors such as dataset size and sentiment class distribution determine whether neural or non-neural approaches are best, and further that no method is generalisable.

By formalising, analysing, and testing prior TDSA error splits, newly created error splits, and a new TDSA-specific metric, a new empirical evaluation methodology is created for TDSA. This evaluation methodology is then applied to multiple case studies to empirically justify improvements, such as position encoding, and to show how contextualised word representations improve TDSA methods. From the first reproduction study in TDSA, it is believed that random seeds significantly affect neural methods, and that this is the reason behind the difficulty in reproducing or replicating the original study results. This highlights empirically, for the first time in TDSA, the need to report results over multiple runs for neural methods, allowing for better reporting and improved evaluation. This thesis is fully reproducible through the referenced codebases and Jupyter notebooks, making it an executable thesis.
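A short sketch of the reporting practice argued for above: run a stochastic (neural) method under several random seeds, holding everything else fixed, and report the mean and standard deviation rather than a single score. The model and synthetic data are illustrative only, not the thesis's TDSA methods.

    # Sketch of multi-seed reporting: run the same stochastic model under
    # several random seeds and report mean and standard deviation.
    import statistics
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=200, n_features=20, random_state=42)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

    scores = []
    for seed in range(5):  # five runs, identical except for the seed
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=seed)
        clf.fit(X_tr, y_tr)
        scores.append(clf.score(X_te, y_te))

    print(f"accuracy: mean={statistics.mean(scores):.3f} "
          f"std={statistics.stdev(scores):.3f} over {len(scores)} seeds")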