On Semantics and Deep Learning for Event Detection in Crisis Situations
In this paper, we introduce Dual-CNN, a semantically-enhanced deep learning model that targets the problem of event detection in crisis situations from social media data. A layer of semantics is added to a traditional Convolutional Neural Network (CNN) model to capture the contextual information that is generally scarce in short, ill-formed social media messages. Our results show that our method successfully identifies the existence of events and event types (hurricane, floods, etc.) with high accuracy (> 79% F-measure), but its performance drops significantly (61% F-measure) when identifying fine-grained event-related information (affected individuals, damaged infrastructure, etc.). These results are competitive with those of more traditional machine learning models, such as SVMs.
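A minimal sketch of the dual-channel idea follows, assuming the added "layer of semantics" is realized as a second embedding channel of per-token semantic tags convolved in parallel with the word channel (the paper's exact fusion may differ):

import torch
import torch.nn as nn
import torch.nn.functional as F

class DualCNN(nn.Module):
    def __init__(self, vocab_size, concept_size, emb_dim=100,
                 n_filters=64, kernel_sizes=(3, 4, 5), n_classes=2):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)   # word channel
        self.sem_emb = nn.Embedding(concept_size, emb_dim)  # semantic channel
        self.word_convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, k) for k in kernel_sizes])
        self.sem_convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, k) for k in kernel_sizes])
        self.fc = nn.Linear(2 * n_filters * len(kernel_sizes), n_classes)

    def _encode(self, x, convs):
        # (batch, seq, emb) -> convolve over time -> global max-pool per filter
        x = x.transpose(1, 2)
        return torch.cat([F.relu(c(x)).max(dim=2).values for c in convs], dim=1)

    def forward(self, word_ids, concept_ids):
        w = self._encode(self.word_emb(word_ids), self.word_convs)
        s = self._encode(self.sem_emb(concept_ids), self.sem_convs)
        return self.fc(torch.cat([w, s], dim=1))  # event / no-event logits

# Toy usage: a batch of 2 tweets, 20 tokens each, with per-token concept ids.
model = DualCNN(vocab_size=5000, concept_size=300)
logits = model(torch.randint(0, 5000, (2, 20)), torch.randint(0, 300, (2, 20)))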
EdinburghNLP at WNUT-2020 Task 2: Leveraging Transformers with Generalized Augmentation for Identifying Informativeness in COVID-19 Tweets
Twitter has become an important communication channel in times of emergency. The ubiquity of smartphones enables people to announce an emergency they are observing in real time. Because of this, more agencies, such as disaster relief organizations and news agencies, are interested in programmatically monitoring Twitter, and recognizing the informativeness of a tweet can therefore help filter noise from large volumes of data. In this paper, we present our submission for WNUT-2020 Task 2: Identification of Informative COVID-19 English Tweets. Our most successful model is an ensemble of transformers, including RoBERTa, XLNet, and BERTweet, trained in a semi-supervised experimental setting. The proposed system achieves an F1 score of 0.9011 on the test set (ranking 7th on the leaderboard) and shows significant gains in performance compared to a baseline system using fastText embeddings.
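A hedged sketch of the kind of soft-voting transformer ensemble described above, using Hugging Face transformers; the checkpoints below are public base models, not the authors' fine-tuned ones, and equal-weight averaging is an assumption:

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

CHECKPOINTS = ["roberta-base", "xlnet-base-cased", "vinai/bertweet-base"]

def ensemble_predict(text):
    probs = []
    for name in CHECKPOINTS:
        tok = AutoTokenizer.from_pretrained(name)
        model = AutoModelForSequenceClassification.from_pretrained(
            name, num_labels=2)  # INFORMATIVE vs. UNINFORMATIVE head
        # (In practice each head would be fine-tuned on the task data first.)
        model.eval()
        inputs = tok(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            probs.append(torch.softmax(model(**inputs).logits, dim=-1))
    # Soft voting: average class probabilities across the three models.
    return torch.stack(probs).mean(dim=0).argmax(dim=-1).item()

label = ensemble_predict("3 new COVID-19 cases confirmed in the county today.")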
Identifying and Processing Crisis Information from Social Media
Social media platforms play a crucial role in how people communicate, particularly during crisis situations such as natural disasters. People share and disseminate information on social media platforms relating to updates, alerts, and rescue and relief requests, among other crisis-relevant information. During Hurricane Harvey and Hurricane Sandy, tens of millions of posts were generated on Twitter within a short span of time. Such posts span a wide range, including personal and official communications and citizen sensing, to mention a few. This makes social media platforms a source of vital information for different stakeholders in crisis situations, such as impacted communities, relief agencies, and civic authorities. However, the overwhelming volume of data generated during such times makes it impossible to manually identify information relevant to a crisis. Additionally, a large portion of posts in these voluminous streams is not relevant, or bears minimal relevance, to crisis situations.
This has steered much research towards exploring methods that can automatically identify crisis-relevant information in voluminous streams of data during such scenarios. However, the problem of identifying crisis-relevant information on social media platforms such as Twitter is not trivial, given the nature of unstructured text, with challenges including short text length and syntactic variation. A key objective when creating automatic crisis relevancy classification systems is to make them adaptable to a wide range of crisis types and languages. Many related approaches rely on statistical features, i.e., quantifiable and linguistic properties of the text. A general approach is to train the classification model on labelled data acquired from crisis events and evaluate it on other crisis events. A key aspect missing from the explored literature is the validity of crisis relevancy classification models when applied to data from unseen types of crisis events and languages. For instance, how would the accuracy of a crisis relevancy classification model trained on earthquake events change when applied to flood events? Or how would a model perform when trained on crisis data in English but applied to data in Italian?
This thesis investigates these problems from a semantics perspective, where the challenges posed by diverse crisis types and language variations are treated as problems that can be tackled by enriching the data semantically. The use of knowledge bases such as DBpedia, BabelNet, and Wikipedia for semantic enrichment of data in text classification problems has often been studied. Semantic enrichment of data through entity linking and expansion of context via knowledge bases can take advantage of connections between different concepts and thus enhance contextual coherency across crisis types and languages. Several previous works have focused on similar problems and proposed approaches using statistical and/or non-semantic features, but the use of semantics extracted through knowledge graphs has remained unexplored in building crisis relevancy classifiers that are adaptive to varying crisis types and multilingual data. Experiments conducted in this thesis consider data from Twitter, a micro-blogging social media platform, and analyse multiple aspects of crisis data classification. The results obtained through the various analyses in this thesis demonstrate the value of semantic enrichment of text through knowledge graphs in improving the adaptability of crisis relevancy classifiers across crisis types and languages, in comparison to the statistical features used in much of the related work.
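As a concrete illustration of the enrichment step, a minimal sketch follows; the dictionary below stands in for a real entity linker such as DBpedia Spotlight, and the concept labels and types appended to the text are assumptions:

# Toy knowledge base mapping surface forms to linked concepts and types.
KB = {
    "harvey": "dbpedia:Hurricane_Harvey (Hurricane, NaturalDisaster)",
    "houston": "dbpedia:Houston (City, PopulatedPlace)",
    "flooding": "dbpedia:Flood (NaturalDisaster)",
}

def enrich(tweet: str) -> str:
    # Append linked concepts as extra context; concepts are language- and
    # event-independent, so an Italian flood tweet linking to dbpedia:Flood
    # shares features with an English one.
    concepts = [KB[t] for t in tweet.lower().split() if t in KB]
    return (tweet + " || " + " ; ".join(concepts)) if concepts else tweet

print(enrich("Harvey caused severe flooding across Houston"))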
A Study on the Improvement of Data Collection in Data Centers and Its Analysis on Deep Learning-based Applications
Big data are usually stored in data center networks for processing and analysis through various cloud applications. Such applications are collections of data-intensive jobs which often involve many parallel flows and are network-bound in the distributed environment. The recent networking abstraction for the data-parallel programming paradigm, the coflow, which expresses an application's communication requirements, has opened new opportunities for network scheduling in such applications. I therefore propose a coflow-based network scheduling algorithm, Coflourish, to improve job completion time for data-parallel applications in the presence of increased background traffic that mimics a cloud infrastructure environment. It outperforms Varys, the state-of-the-art coflow scheduling technique, by 75.5% under various workload conditions. However, such techniques often require customized operating systems, customized computing frameworks, or external proprietary software-defined networking (SDN) switches. Consequently, to achieve minimal application completion time through coflow scheduling, coflow routing, and a per-rate, per-flow scheduling paradigm with minimal customization of hosts and switches, I propose another scheduling technique, MinCOF, which exploits OpenFlow SDN. MinCOF offers faster deployability and has no proprietary system requirements. It also decreases average coflow completion time by 12.94% compared to the latest OpenFlow-based coflow scheduling and routing framework. Although the challenges related to analyzing and processing big data can be handled effectively by addressing these network issues, there are also challenges in analyzing data effectively due to limited data size. To further analyze the collected data, I use various deep learning approaches. Specifically, I design a framework to collect Twitter data during natural disaster events and then deploy a deep learning model to detect fake news spreading during such crisis situations. The wide spread of fake news during disaster events disrupts rescue missions and recovery activities, costing human lives and delaying response. My deep learning model classifies such fake events with 91.47% accuracy and an F1 score of 90.89, helping emergency managers during a crisis. This study therefore focuses on providing network solutions that decrease application completion time in the cloud environment, in addition to analyzing the data collected using the deployed network framework to solve real-world problems with various deep learning approaches.
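For context on the coflow abstraction, here is a minimal sketch of bottleneck-based coflow ordering in the spirit of Varys-style schedulers; Coflourish and MinCOF differ in their mechanisms, so this captures only the shared idea of prioritising coflows by their slowest port:

from collections import defaultdict

def effective_bottleneck(coflow, port_rate_bps):
    # A coflow is a set of parallel flows; its completion time is governed
    # by its most-loaded ingress or egress port (the "effective bottleneck").
    load = defaultdict(int)
    for src, dst, size_bits in coflow:
        load[("out", src)] += size_bits
        load[("in", dst)] += size_bits
    return max(load.values()) / port_rate_bps

# Smallest-effective-bottleneck-first: finishing short coflows early cuts
# the average coflow completion time.
jobs = {"shuffle-A": [("h1", "h2", 8e9), ("h1", "h3", 2e9)],
        "shuffle-B": [("h2", "h3", 1e9)]}
order = sorted(jobs, key=lambda j: effective_bottleneck(jobs[j], 10e9))
print(order)  # shuffle-B first: its bottleneck clears in 0.1 s vs 1.0 s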
Keyphrase Extraction from Disaster-related Tweets
While keyphrase extraction has received considerable attention in recent years, relatively few studies exist on extracting keyphrases from social media platforms such as Twitter, and even fewer on extracting disaster-related keyphrases from such sources. During a disaster, keyphrases can be extremely useful for filtering relevant tweets that can enhance situational awareness. Previously, joint training of two different layers of a stacked Recurrent Neural Network for keyword discovery and keyphrase extraction had been shown to be effective in extracting keyphrases from general Twitter data. We improve the model's performance on both general and disaster-related Twitter data by incorporating contextual word embeddings, POS tags, phonetics, and phonological features. Moreover, we discuss the shortcomings of the often-used F1-measure for evaluating the quality of predicted keyphrases with respect to the ground-truth annotations. Instead of the F1-measure, we propose the use of embedding-based metrics to better capture the correctness of the predicted keyphrases. In addition, we present a novel extension of an embedding-based metric that allows one to better control the penalty for the difference in the number of ground-truth and predicted keyphrases.
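To make the proposed evaluation concrete, below is a minimal sketch of an embedding-based keyphrase metric, assuming phrase vectors are averaged word embeddings and each predicted phrase is scored by its best cosine match against the gold set; the random vectors stand in for real embeddings, and the paper's count-penalty extension is not shown:

import numpy as np

rng = np.random.default_rng(0)
EMB = {w: rng.normal(size=50) for w in
       ["hurricane", "harvey", "storm", "flooding", "relief"]}

def phrase_vec(phrase):
    # Represent a phrase as the average of its (known) word vectors.
    return np.mean([EMB[w] for w in phrase.split() if w in EMB], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def embedding_score(predicted, gold):
    # Average over predictions of the best cosine match in the gold set, so
    # "hurricane harvey" and "harvey storm" score well despite no exact match.
    return float(np.mean([max(cosine(phrase_vec(p), phrase_vec(g))
                              for g in gold) for p in predicted]))

print(embedding_score(["hurricane harvey"], ["harvey storm", "flooding relief"]))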
Macro-micro approach for mining public sociopolitical opinion from social media
During the past decade, we have witnessed the emergence of social media, which has gained prominence as a means for the general public to exchange opinions on a broad range of topics. Furthermore, its social and temporal dimensions make it a rich resource for policy makers and organisations seeking to understand public opinion. In this thesis, we present our research on understanding public opinion on Twitter along three dimensions: sentiment, topics and summary.
In the first line of our work, we study how to classify public sentiment on Twitter. We focus on the task of multi-target-specific sentiment recognition on Twitter and propose an approach which utilises syntactic information from the parse tree in conjunction with the left-right context of the target. We show state-of-the-art performance on two datasets, including a multi-target Twitter corpus on UK elections which we make publicly available for the research community. Additionally, we conduct two preliminary studies: cross-domain emotion classification on discourse around arts and cultural experiences, and social spam detection to improve the signal-to-noise ratio of our sentiment corpus.
Our second line of work focuses on automatic topical clustering of tweets. Our aim is to group tweets into a number of clusters, with each cluster representing a meaningful topic, story, event, or reason behind a particular choice of sentiment. We explore various ways of tackling this challenge and propose a two-stage hierarchical topic modelling system that is efficient and effective in achieving our goal.
Lastly, for our third line of work, we study the task of summarising tweets on common topics, with the goal of providing informative summaries of real-world events/stories or explanations of the sentiment expressed towards an issue/entity. As most existing tweet summarisation approaches rely on extractive methods, we propose to apply a state-of-the-art neural abstractive summarisation model to tweets. We also tackle the challenge of cross-medium supervised summarisation with no target-medium training resources. To the best of our knowledge, there is no existing work studying neural abstractive summarisation on tweets. In addition, we present a system for providing interactive visualisation of topic-entity sentiments and the corresponding summaries in chronological order.
Throughout the work presented in this thesis, we conduct experiments to evaluate and verify the effectiveness of our proposed models, comparing them to relevant baseline methods. Most of our evaluations are quantitative; however, we also perform qualitative analyses where appropriate. This thesis provides insights and findings that can be used to better understand public opinion in social media.
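A minimal sketch of the left-right target-context idea from the first line of work, assuming two LSTM encoders that each end at the target mention; the thesis's parse-tree features are omitted, so this is an illustration rather than the thesis model:

import torch
import torch.nn as nn

class TargetContextClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=64, n_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # One LSTM reads the tweet left-to-right up to the target, the other
        # right-to-left down to the target, so both final states sit "at" it.
        self.left_rnn = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.right_rnn = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_classes)  # positive/negative/neutral

    def forward(self, left_ids, right_ids_reversed):
        _, (h_left, _) = self.left_rnn(self.emb(left_ids))
        _, (h_right, _) = self.right_rnn(self.emb(right_ids_reversed))
        return self.fc(torch.cat([h_left[-1], h_right[-1]], dim=1))

# Toy usage: token ids for the spans left and right of the target mention.
model = TargetContextClassifier(vocab_size=5000)
logits = model(torch.randint(0, 5000, (1, 4)), torch.randint(0, 5000, (1, 3)))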
Stand for Something or Fall for Everything: Predict Misinformation Spread with Stance-Aware Graph Neural Networks
Although the pervasive spread of misinformation on social media platforms has become a pressing challenge, existing platform interventions have shown limited success in curbing its dissemination. In this study, we propose a stance-aware graph neural network (stance-aware GNN) that leverages users' stances to proactively predict misinformation spread. As different user stances can form unique echo chambers, we customize four information passing paths in the stance-aware GNN, while trainable attention weights provide explainability by highlighting each structure's importance. Evaluated on a real-world dataset, the stance-aware GNN outperforms benchmarks by 32.65% and exceeds advanced GNNs without user stance by over 4.69%. Furthermore, the attention weights indicate that users' opposition stances have a higher impact on their neighbors' behaviors than supportive ones, functioning as a social correction that halts misinformation propagation. Overall, our study provides an effective predictive model for platforms to combat misinformation and highlights the impact of user stances on misinformation propagation.
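A minimal sketch of stance-conditioned message passing with trainable per-path attention follows; the four paths (support/oppose by sender and receiver) and the aggregation scheme are assumptions made for illustration, not the authors' exact architecture:

import torch
import torch.nn as nn

class StanceAwareLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # One trainable attention weight per stance path: support->support,
        # support->oppose, oppose->support, oppose->oppose.
        self.path_attn = nn.Parameter(torch.ones(4))
        self.lin = nn.Linear(dim, dim)

    def forward(self, h, adj, stance):
        # h: (N, dim) user features; adj: (N, N) follower graph (0/1);
        # stance: (N,) with 0 = support, 1 = oppose.
        path = stance.unsqueeze(1) * 2 + stance.unsqueeze(0)  # edge path id 0..3
        weights = torch.softmax(self.path_attn, dim=0)[path] * adj
        return torch.relu(self.lin(weights @ h))  # stance-weighted aggregation

layer = StanceAwareLayer(dim=16)
h = torch.randn(5, 16)
adj = (torch.rand(5, 5) < 0.4).float()
stance = torch.tensor([0, 1, 0, 1, 1])
out = layer(h, adj, stance)  # softmax(path_attn) exposes each path's importance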
Six papers on computational methods for the analysis of structured and unstructured data in the economic domain
This work investigates the application of computational methods to structured and unstructured data. The domains of application are two closely connected fields with the common goal of promoting the stability of the financial system: systemic risk and bank supervision. The work explores different families of models and applies them to different tasks: graphical Gaussian network models to address bank interconnectivity, topic models to monitor bank news, and deep learning for text classification. New applications and variants of these models are investigated, with particular attention to the combined use of textual and structured data. The penultimate chapter introduces a sentiment polarity classification tool for Italian, based on deep learning, to simplify future research relying on sentiment analysis. The different models have proven useful for leveraging numerical (structured) and textual (unstructured) data. Graphical Gaussian models and topic models have been adopted for inspection and descriptive tasks, while deep learning has been applied more to predictive (classification) problems. Overall, the integration of textual (unstructured) and numerical (structured) information has proven useful for analyses related to systemic risk and bank supervision. Indeed, integrating textual data with numerical data has led either to higher predictive performance or to an enhanced capability to explain phenomena and correlate them with other events.
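As an illustration of the graphical-Gaussian component, a minimal sketch follows, assuming a sparse precision matrix estimated with the graphical lasso encodes the conditional-dependence network between institutions; the data and regularization strength are synthetic assumptions:

import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
returns = rng.normal(size=(250, 6))    # 250 days of returns for 6 banks
returns[:, 1] += 0.8 * returns[:, 0]   # make banks 0 and 1 co-move

model = GraphicalLasso(alpha=0.05).fit(returns)
precision = model.precision_
# Nonzero off-diagonal precision entries = conditional-dependence edges,
# i.e. the interconnectivity network between institutions.
edges = [(i, j) for i in range(6) for j in range(i + 1, 6)
         if abs(precision[i, j]) > 1e-6]
print(edges)  # expected to include (0, 1)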
When Silver Is As Good As Gold: Using Weak Supervision to Train Machine Learning Models on Social Media Data
Over the last decade, advances in machine learning have led to an exponential growth in artificial intelligence, i.e., machine learning models capable of learning from vast amounts of data to perform tasks such as text classification, regression, machine translation, speech recognition, and many others. While massive volumes of data are available, only a fraction of the data is used to train machine learning models, owing to the manual curation involved in generating training datasets. The process of labeling data with a ground-truth value is extremely tedious and expensive, and is the major bottleneck of supervised learning. To curtail this, the theory of noisy learning can be employed, whereby data labeled through heuristics, knowledge bases, and weak classifiers is used for training instead of data obtained through manual annotation. The assumption here is that a large volume of training data that contains noise but is acquired through an automated process can compensate for the lack of manual labels. In this study, we utilize heuristic-based approaches to create noisy silver standard datasets. We extensively test the theory of noisy learning on four different applications by training several machine learning models on the silver standard datasets, with several sample sizes and class imbalances, and evaluate their performance on a gold standard dataset. Our evaluations on the four applications indicate the success of silver standard datasets when measured against a gold standard dataset. We conclude the study with evidence that noisy social media data can be utilized for weak supervision.
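A minimal sketch of the heuristic ("silver standard") labeling idea follows; the labeling rules and the majority vote are illustrative assumptions, not the study's actual heuristics:

RELEVANT, NOT_RELEVANT, ABSTAIN = 1, 0, -1

def lf_keywords(text):
    # Heuristic 1: crisis keywords suggest a relevant post.
    crisis_terms = ("evacuate", "flood", "rescue")
    return RELEVANT if any(k in text.lower() for k in crisis_terms) else ABSTAIN

def lf_promo(text):
    # Heuristic 2: promotional language suggests noise.
    return NOT_RELEVANT if "giveaway" in text.lower() else ABSTAIN

def silver_label(text, lfs=(lf_keywords, lf_promo)):
    votes = [v for v in (lf(text) for lf in lfs) if v != ABSTAIN]
    if not votes:
        return ABSTAIN  # no heuristic fired; leave the example unlabeled
    return max(set(votes), key=votes.count)  # majority vote across heuristics

print(silver_label("Please RT: rescue teams needed, streets flooded"))  # -> 1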