Sarcasm recognition survey and application based on Reddit comments
Social media platforms are continuously gaining users, and enormous amounts of data are produced online every day. Machine Learning (ML) techniques, in the form of sentiment analysis, are applied to analyze the polarity of this unstructured text data. However, sarcasm is widely used on these platforms, reducing the accuracy of such systems, because the intention of the expressed message does not match the polarity that is measured.
Throughout this work, a survey considering three different algorithms is performed: Logistic Regression, Neural Networks and Support Vector Machines. This final degree project proposes a preliminary analysis of the data using a sarcasm recognition classifier implemented with a Support Vector Machine algorithm, achieving a mean accuracy of 71.21% and an F1-score of around 60%.
Finally, an analysis of the planning and the costs is performed, and future work that could complement this bachelor thesis is proposed.
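As a rough illustration of the kind of classifier surveyed above, a linear model can be trained with hinge-loss stochastic gradient descent, a simple stand-in for a Support Vector Machine. Everything below - the toy samples, labels, vocabulary and hyperparameters - is invented for the sketch and is not the thesis's data or pipeline:

```python
def tokenize(text):
    return text.lower().split()

def featurize(text, vocab):
    # Binary bag-of-words vector over a fixed vocabulary.
    words = set(tokenize(text))
    return [1.0 if w in words else 0.0 for w in vocab]

def train_svm(samples, labels, vocab, epochs=50, lr=0.1, lam=0.01):
    """Linear classifier trained with hinge loss plus L2 decay (SGD)."""
    w = [0.0] * len(vocab)
    b = 0.0
    data = [(featurize(s, vocab), y) for s, y in zip(samples, labels)]
    for _ in range(epochs):
        for x, y in data:
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:  # misclassified or inside the margin
                w = [wi + lr * (y * xi - lam * wi) for wi, xi in zip(w, x)]
                b += lr * y
            else:           # only apply weight decay
                w = [wi * (1 - lr * lam) for wi in w]
    return w, b

def predict(w, b, text, vocab):
    x = featurize(text, vocab)
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Toy data: +1 = sarcastic, -1 = literal (labels are illustrative only).
SAMPLES = ["oh great another monday", "yeah right that went well",
           "wow totally shocked", "the weather is nice today",
           "i enjoyed the film", "this food tastes good"]
LABELS = [1, 1, 1, -1, -1, -1]
VOCAB = sorted({w for s in SAMPLES for w in tokenize(s)})
```

A real pipeline would use TF-IDF features and a proper SVM solver; this sketch only shows the max-margin update rule at the core of the approach.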
Sentiment analysis on Twitter for the Portuguese language
Dissertation submitted for the degree of Master in Engenharia Informática.

With the growth and popularity of the internet, and more specifically of social networks, users can more easily share their thoughts, insights and experiences with others.
Messages shared via social networks provide useful information for several applications, such as monitoring specific targets for sentiment or comparing the public sentiment on several targets, avoiding the traditional marketing research method of using surveys to explicitly gather public opinion. To extract information from the large volume of messages that are shared, it is best to use an automated program to process them.
Sentiment analysis is an automated process for determining the sentiment expressed in natural language text. Sentiment is a broad term, but here we focus on the opinions and emotions expressed in text. Nowadays, among the existing social network websites, Twitter is considered the best one for this kind of analysis. Twitter allows users to share their opinions on several topics and entities by means of short messages. The messages may be malformed and contain spelling errors, so some treatment of the text, such as spell checking, may be necessary before the analysis.
To know what a message is focusing on, it is necessary to find the entities in the text, such as people, locations, organizations and products, and then analyse the rest of the text to obtain what is said about that specific entity. By analysing several messages, we can get a general idea of what the public thinks regarding many different entities.
Our goal is to extract as much information as possible concerning different entities from tweets in the Portuguese language. We present different techniques that may be used, as well as examples and results from state-of-the-art related work. Using a semantic approach, we were able to find and extract named entities from these messages and assign sentiment values to each entity found, producing a complete tool that is competitive with existing solutions. The sentiment classification and its assignment to entities are based on the grammatical construction of the message. The results can then be viewed by the user in real time, or stored to be viewed later. This analysis provides ways to view and compare the public sentiment regarding these entities, showing the favourite brands, companies and people, as well as the evolution of sentiment over time.
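A crude approximation of entity-level sentiment assignment is to score the words in a small window around each recognised entity mention. The entity list, lexicon and window size below are invented for illustration; a real system would rely on the grammatical analysis the dissertation describes:

```python
# Hypothetical entity list and sentiment lexicon (toy assumptions).
ENTITIES = {"twitter", "acme"}
LEXICON = {"love": 1, "great": 1, "hate": -1, "awful": -1}

def entity_sentiment(tokens, window=3):
    """Assign each entity mention the summed lexicon score of the
    words within `window` tokens of it."""
    scores = {}
    for i, tok in enumerate(tokens):
        if tok in ENTITIES:
            ctx = tokens[max(0, i - window):i + window + 1]
            scores[tok] = scores.get(tok, 0) + sum(LEXICON.get(w, 0) for w in ctx)
    return scores
```

For example, `entity_sentiment("i love the new acme phone".split())` attributes the positive word "love" to the entity "acme".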
Sentiment classification with case-base approach
The increasing growth of social networks, blogs, and user review sites makes the Internet a huge source of data, especially about how people think, feel, and act toward different issues. These days, people's opinions play an important role in politics, industry, education, etc. Thus governments, large and small industries, academic institutes, companies, and individuals are looking to investigate automatic techniques to extract the information they need from large amounts of data. Sentiment analysis is one true answer to this need. It is an application of natural language processing and computational linguistics that consists of advanced techniques, such as machine learning and language model approaches, to capture evaluative factors such as positive, negative, or neutral, with or without their strength, from plain text. In this thesis we study a case-based, cross-domain approach for sentiment analysis at the document level. Our case-based algorithm generates a binary classifier that uses a set of processed cases and five different sentiment lexicons to extract the polarity, along with the corresponding scores, from reviews. Since sentiment analysis is inherently a domain-dependent task, which makes it problematic and expensive, we use a cross-domain approach, training our classifier on six different domains instead of limiting it to one. To improve the accuracy of the classifier, we add negation detection as a part of our algorithm. Moreover, to improve the performance of our approach, some innovative modifications are applied. It is worth mentioning that our approach allows for further developments by adding more sentiment lexicons and data sets in the future.
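A minimal sketch of lexicon-based polarity scoring with negation detection, in the spirit of (but far simpler than) the case-based classifier described above. The lexicon, negation word list and window size are toy assumptions:

```python
# Toy negation cues and sentiment lexicon (illustrative only).
NEG_WORDS = {"not", "no", "never", "n't"}
LEXICON = {"good": 1, "great": 2, "bad": -1, "terrible": -2, "boring": -1}

def polarity(tokens, window=3):
    """Sum lexicon scores over the tokens, flipping the sign of a
    sentiment word when a negation cue appears shortly before it."""
    score = 0
    for i, tok in enumerate(tokens):
        s = LEXICON.get(tok, 0)
        if s and any(t in NEG_WORDS for t in tokens[max(0, i - window):i]):
            s = -s  # negation flips polarity
        score += s
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

A real system would combine several lexicons and per-domain cases, as the thesis describes; this only shows why negation handling changes the label.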
Content-aware Location Inference and Misinformation in Online Social Networks
Location inference is of potential use in the area of cybercrime prevention and misinformation
detection. Inferring locations from user texts in Online Social Networks
(OSN) is a non-trivial and challenging problem with regards to public safety. This work
proposes LOCINFER - a novel non-uniform grid-based approach for location inference
from Twitter messages using Quadtree spatial partitions. The proposed algorithm
uses natural language processing (NLP) for semantic understanding and incorporates
hybrid similarity measures for feature vector extraction and dimensionality reduction.
LOCINFER addresses the sparsity problem that may be associated with training data by following a biased clustering approach, in which densely populated regions within the data are partitioned into larger grids. The clustered grids are then classified using a logistic
regression model. The proposed method outperformed the state of the art in grid-based, content-only location inference by more than 150km in Average Error Distance (AED) and almost 300km in Median Error Distance (MED). It also performed 24% better in terms of accuracy at 161km, and its predictions were 400km better in MED and 250km better in AED.
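As one ingredient of such grid-based inference, a Quadtree recursively subdivides the bounding box of the training coordinates. The sketch below is a generic quadtree partitioner, where dense areas receive finer cells; the capacity threshold, depth limit and point set are illustrative assumptions, and the thesis's biased clustering variant deliberately sizes its grids differently:

```python
def quadtree(points, bounds, capacity=4, depth=0, max_depth=8):
    """Recursively split a bounding box (x0, y0, x1, y1) until each
    cell holds at most `capacity` points; returns the non-empty cells."""
    x0, y0, x1, y1 = bounds
    inside = [(x, y) for x, y in points if x0 <= x < x1 and y0 <= y < y1]
    if len(inside) <= capacity or depth >= max_depth:
        return [bounds] if inside else []
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    cells = []
    for b in [(x0, y0, mx, my), (mx, y0, x1, my),
              (x0, my, mx, y1), (mx, my, x1, y1)]:
        cells.extend(quadtree(inside, b, capacity, depth + 1, max_depth))
    return cells
```

Each resulting cell can then serve as one class label for a classifier such as the logistic regression model mentioned above.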
Also proposed is SENTDETECT - a technique that detects and classifies fake news
messages from Twitter posts using extensive experiments with machine learning and
deep learning models including those without prior knowledge of the domain. Following
a text-only approach, SENTDETECT utilises an additional feature of the word
sentiments alongside the original text of the messages. Incorporating these engineered features into the feature vector enriches the vector space prior to the deep learning classification task, which utilised a Hierarchical Attention Network (HAN) with pre-trained word embeddings.
An emotional word ratio (EMORATIO) was deduced following the discovery of a positive
relationship between negative emotional words and fake news posts. Finally, the
work aimed to perform automatic detection of misinformation posts and rumors. A
lot of work has been done in the area of detecting the truthfulness or veracity of posts
from OSN messages. This work presents a novel feature-augmented approach using
both text and sentiments to enrich the features used during prediction. The end result performed up to 40% better in Recall and F-Measure than the state of the art on the benchmark PHEME misinformation dataset, which relied on textual features only. The
blend of location inference with misinformation detection provides an effective tool in the fight against vices on social media, helping to curtail hate speech propagation, cyberbullying and fake news posts.
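The EMORATIO idea above - relating negative emotional words to fake news posts - can be caricatured as a simple ratio. The word list and formula below are illustrative assumptions, since the abstract does not give the exact definition:

```python
# Hypothetical negative-emotion word list (illustrative only).
NEGATIVE_EMOTION = {"fear", "angry", "hate", "shock", "terror", "panic"}

def emoratio(tokens):
    """Share of negative-emotion words among all tokens; a scalar
    feature that can be appended to a text feature vector."""
    if not tokens:
        return 0.0
    return sum(t in NEGATIVE_EMOTION for t in tokens) / len(tokens)
```

A feature-augmented classifier of the kind described would concatenate such a score with the usual textual features before training.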
Event identification in social media using classification-clustering framework
In recent years, there has been increased interest in real-world event detection using publicly accessible data made available through Internet technology such as Twitter, Facebook and YouTube. In these highly interactive systems the general public are able to post real-time reactions to “real world” events - thereby acting as social sensors of terrestrial activity. Automatically detecting and categorizing events, particularly small-scale incidents, using streamed data is a non-trivial task, due to the heterogeneity, the scalability and the varied quality of the data as well as the presence of noise and irrelevant information. However, it would be of high value to public safety organisations such as local police, who need to respond accordingly. To address these challenges we present an end-to-end integrated event detection framework which comprises five main components: data collection, pre-processing, classification, online clustering and summarization. The integration between classification and clustering enables events to be detected, especially “disruptive events” - incidents that threaten social safety and security, or that could disrupt social order. We present an evaluation of the effectiveness of detecting events using a variety of features derived from Twitter posts, namely: temporal, spatial and textual content. We evaluate our framework on large-scale, real-world datasets from Twitter and Flickr. Furthermore, we apply our event detection system to a large corpus of tweets posted during the August 2011 riots in England. We show that our system can perform as well as terrestrial sources, such as police reports, traditional surveillance, and emergency calls, and even better than local police intelligence in most cases. The framework developed in this thesis provides a scalable, online solution to handle the high volume of social media documents in different languages, including English, Arabic, Eastern languages such as Chinese, and many Latin languages.
Moreover, event detection is a concept that is crucial to the assurance of public safety surrounding real-world events. Decision makers use information from a range of terrestrial and online sources to help inform decisions that enable them to develop policies and react appropriately to events as they unfold. Due to the heterogeneity and scale of the data, and the fact that some messages are more salient than others for the purposes of understanding any risk to human safety and managing any disruption caused by events, automatic summarization of event-related microblogs is a non-trivial and important problem. In this thesis we tackle the task of automatic summarization of Twitter posts, and present three methods that produce summaries by selecting the most representative posts from real-world tweet-event clusters. To evaluate our approaches, we compare them to state-of-the-art summarization systems and human-generated summaries. Our results show that our proposed methods outperform all the other summarization systems for English and non-English corpora.
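Selecting "the most representative posts" from a tweet-event cluster can be sketched as picking the posts closest to the cluster centroid under bag-of-words cosine similarity. The cluster below is a toy example, not data from the thesis:

```python
from collections import Counter
import math

def bow(text):
    # Bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def representative(cluster, k=1):
    """Return the k posts most similar to the cluster centroid."""
    centroid = Counter()
    for post in cluster:
        centroid.update(bow(post))
    return sorted(cluster, key=lambda p: cosine(bow(p), centroid),
                  reverse=True)[:k]
```

Real summarizers weigh terms (e.g. TF-IDF) and penalise redundancy, but the centroid intuition is the same.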
Expressions of psychological stress on Twitter: detection and characterisation
A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy.

Long-term psychological stress is a significant predictive factor for individual mental health, and short-term stress is a useful indicator of an immediate problem. Traditional psychology studies have relied on surveys to understand reasons for stress in general and in specific contexts. The popularity and ubiquity of social media make it a potential data source for identifying and characterising aspects of stress. Previous studies of stress in social media have focused on users responding to stressful personal life events. However, prior social media research has not explored expressions of stress in other important domains, including travel and politics.
This thesis detects and analyses expressions of psychological stress in social media. So far, TensiStrength is the only existing lexicon for stress and relaxation scores in social media. Using a word-vector based word sense disambiguation method, the TensiStrength lexicon was modified to include the stress scores of the different senses of the same word. On a dataset of 1000 tweets containing ambiguous stress-related words, the accuracy of the modified TensiStrength increased by 4.3%.
This thesis also finds and reports characteristics of a multiple-domain stress dataset of 12000 tweets, 3000 each for airlines, personal events, UK politics, and London traffic. A two-step method for identifying stressors in tweets was implemented. The first step used LDA topic modelling and k-means clustering to find a set of types of stressors (e.g., delay, accident). Second, three word-vector based methods - maximum-word similarity, context-vector similarity, and cluster-vector similarity - were used to detect the stressors in each tweet. The cluster vector similarity method was found to identify the stressors in tweets in all four domains better than machine learning classifiers, based on the performance metrics of accuracy, precision, recall, and f-measure.
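The cluster-vector similarity step above can be sketched as follows: average the word vectors of each stressor cluster, average the word vectors of the tweet, and pick the stressor whose cluster vector is closest by cosine similarity. The two-dimensional embeddings and cluster memberships below are toy assumptions standing in for real pretrained vectors:

```python
import math

# Toy 2-d word embeddings (a real system would use pretrained vectors).
VEC = {
    "delay":    [1.0, 0.1], "late":  [0.9, 0.2], "waiting": [0.8, 0.1],
    "accident": [0.1, 1.0], "crash": [0.2, 0.9], "injured": [0.1, 0.8],
}

# Stressor types and their member words (e.g. from LDA + k-means).
STRESSORS = {"delay": ["delay", "late", "waiting"],
             "accident": ["accident", "crash", "injured"]}

def mean_vec(words):
    vecs = [VEC[w] for w in words if w in VEC]
    if not vecs:
        return None
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = (math.sqrt(sum(x * x for x in a))
           * math.sqrt(sum(y * y for y in b)))
    return num / den if den else 0.0

def detect_stressor(tweet_tokens):
    """Assign the tweet to the stressor with the closest cluster vector."""
    tvec = mean_vec(tweet_tokens)
    if tvec is None:
        return None
    clusters = {name: mean_vec(words) for name, words in STRESSORS.items()}
    return max(clusters, key=lambda n: cosine(tvec, clusters[n]))
```

The thesis compares this against maximum-word and context-vector similarity; only the cluster-vector variant is sketched here.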
Swearing and sarcasm were also analysed in high-stress and no-stress datasets from the four domains using a Convolutional Neural Network and Multilayer Perceptron, respectively. The presence of swearing and sarcasm was higher in the high-stress tweets compared to no-stress tweets in all the domains. The stressors in each domain with higher percentages of swearing or sarcasm were identified. Furthermore, the distribution of the temporal classes (past, present, future, and atemporal) in high-stress tweets was found using an ensemble classifier. The distribution depended on the domain and the stressors.
This study contributes a modified and improved lexicon for the identification of stress scores in social media texts. The two-step method to identify stressors follows a general framework that can be used for domains other than those which were studied. The presence of swearing, sarcasm, and the temporal classes of high-stress tweets belonging to different domains are found and compared to the findings from traditional psychology, for the first time. The algorithms and knowledge may be useful for travel, political, and personal life systems that need to identify stressful events in order to take appropriate action.

European Union's Horizon 2020 research and innovation programme under grant agreement No 636160-2, the Optimum project (www.optimumproject.eu)
Improving and Understanding Deep Models for Natural Language Comprehension
Natural Language Comprehension is a challenging domain of Natural Language Processing. To improve a model’s language comprehension, one approach is to enrich the structure of the model so as to enhance its capability to learn the latent rules of the language.
In this dissertation, we will first introduce several deep models for a variety of natural language comprehension tasks, including natural language inference and question answering. Previous approaches employ reading mechanisms that do not fully exploit the interdependencies between the input sources, such as “premise and hypothesis” or “document and query”. In contrast, we explore more sophisticated reading mechanisms to efficiently model the relationships between input sources. These mechanisms and models yield better empirical performance; however, due to the black-box nature of deep learning, it is difficult to assess whether the improved models indeed acquire a better understanding of language. Meanwhile, data is often plagued by meaningless or even harmful statistical biases, and deep models might achieve high performance by focusing on these biases. This motivates us to study methods for “peeking inside” black-box deep models to provide explanations and understanding of the models’ behavior. The proposed method (a.k.a. saliency) takes a step toward explaining deep learning-based models based on the gradient of the model output with respect to different components, such as the input layer and intermediate layers. Saliency reveals interesting insights and identifies critical information contributing to the model decisions. Besides proposing a model-agnostic interpretation method (saliency), we study model-dependent interpretation solutions and propose two interpretable designs and structures. Finally, we introduce a novel mechanism (saliency learning), which learns from ground-truth explanation signals so that the learned model will not only make the right prediction but also make it for the right reason.
Our experimental results on multiple tasks and datasets demonstrate the effectiveness of the proposed methods, which deliver better results than traditionally trained models while producing predictions that are more faithful to the right reasons and evidence.
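Gradient-based saliency, as described above, scores each input feature by the magnitude of the model output's derivative with respect to it. For a one-layer logistic model the gradient has a closed form, which the sketch below uses; the weights and inputs are toy values, not a trained deep model:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def saliency(w, x):
    """Gradient of y = sigmoid(w . x) w.r.t. each input feature:
    dy/dx_i = sigmoid'(w . x) * w_i; the magnitudes rank importance."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    grad_out = sigmoid(z) * (1.0 - sigmoid(z))  # sigmoid'(z)
    return [abs(grad_out * wi) for wi in w]
```

In a deep network the same quantity is obtained by backpropagating from the output to the input (or an intermediate) layer; the closed form here just makes the idea inspectable.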