Location Reference Recognition from Texts: A Survey and Comparison
A vast amount of location information exists in unstructured texts, such as social media posts, news stories, scientific articles, web pages, travel blogs, and historical archives. Geoparsing refers to recognizing location references from texts and identifying their geospatial representations. While geoparsing can benefit many domains, a summary of its specific applications is still missing. Further, there is a lack of a comprehensive review and comparison of existing approaches for location reference recognition, which is the first and core step of geoparsing. To fill these research gaps, this review first summarizes seven typical application domains of geoparsing: geographic information retrieval, disaster management, disease surveillance, traffic management, spatial humanities, tourism management, and crime management. We then review existing approaches for location reference recognition, categorizing them into four groups based on their underlying functional principle: rule-based, gazetteer matching-based, statistical learning-based, and hybrid approaches. Next, we thoroughly evaluate the correctness and computational efficiency of the 27 most widely used approaches for location reference recognition on 26 public datasets with different types of texts (e.g., social media posts and news stories) containing 39,736 location references worldwide. Results from this thorough evaluation can inform future methodological developments and guide the selection of suitable approaches based on application needs.
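As a concrete illustration of the gazetteer matching-based category named above, the following minimal sketch performs longest-match lookup of token n-grams against a toy gazetteer. The place names, coordinates, and function names are hypothetical assumptions for illustration; real systems match against large gazetteers such as GeoNames and handle ambiguity and inflection far more carefully.

```python
# Minimal sketch of gazetteer matching-based location reference recognition.
# The gazetteer entries and example text are illustrative, not taken from
# the surveyed approaches or datasets.

def build_gazetteer():
    # A real gazetteer (e.g., GeoNames) maps place names to coordinates;
    # this toy version is hard-coded for illustration.
    return {
        "paris": (48.8566, 2.3522),
        "new york": (40.7128, -74.0060),
    }

def recognize_locations(text, gazetteer, max_ngram=3):
    """Return (matched span, coordinates) pairs found in the text."""
    tokens = text.lower().split()
    matches = []
    for n in range(max_ngram, 0, -1):           # prefer longer matches first
        for i in range(len(tokens) - n + 1):
            candidate = " ".join(tokens[i:i + n])
            if candidate in gazetteer:
                matches.append((candidate, gazetteer[candidate]))
    return matches

print(recognize_locations("Flooding reported in New York today", build_gazetteer()))
# [('new york', (40.7128, -74.006))]
```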
REDAffectiveLM: Leveraging Affect Enriched Embedding and Transformer-based Neural Language Model for Readers' Emotion Detection
Technological advancements in web platforms allow people to express and share emotions about textual write-ups written and shared by others. This gives rise to two interesting domains for analysis: the emotion expressed by the writer and the emotion elicited in the readers. In this paper, we propose a novel approach for readers' emotion detection from short-text documents using a deep learning model called REDAffectiveLM. Within state-of-the-art NLP tasks, it is well understood that utilizing context-specific representations from transformer-based pre-trained language models helps achieve improved performance. Within this affective computing task, we explore how incorporating affective information can further enhance performance. Towards this, we leverage context-specific and affect-enriched representations by using a transformer-based pre-trained language model in tandem with an affect-enriched Bi-LSTM+Attention network. For empirical evaluation, we procure a new dataset, REN-20k, in addition to using RENh-4k and SemEval-2007. We rigorously evaluate the performance of REDAffectiveLM across these datasets against a vast set of state-of-the-art baselines, where our model consistently outperforms the baselines with statistically significant results. Our results establish that utilizing an affect-enriched representation along with a context-specific representation within a neural architecture can considerably enhance readers' emotion detection. Since the impact of affect enrichment specifically on readers' emotion detection is not well explored, we conduct a detailed analysis of the affect-enriched Bi-LSTM+Attention network using qualitative and quantitative model-behavior evaluation techniques. We observe that, compared to conventional semantic embeddings, affect-enriched embeddings increase the ability of the network to effectively identify and weight the key terms responsible for readers' emotion detection.
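To make the described architecture concrete, here is a minimal PyTorch sketch of an affect-enriched Bi-LSTM+Attention branch. The dimensions, the choice to concatenate affect features with semantic embeddings, and all class and parameter names are illustrative assumptions, not the authors' REDAffectiveLM implementation.

```python
# A minimal sketch of an affect-enriched Bi-LSTM+Attention branch.
# All dimensions and names are assumptions for illustration only.
import torch
import torch.nn as nn

class AffectBiLSTMAttention(nn.Module):
    def __init__(self, vocab_size, sem_dim=300, affect_dim=100,
                 hidden=128, n_emotions=6):
        super().__init__()
        self.sem_emb = nn.Embedding(vocab_size, sem_dim)        # conventional semantic embedding
        self.affect_emb = nn.Embedding(vocab_size, affect_dim)  # affect lexicon embedding (assumed)
        self.bilstm = nn.LSTM(sem_dim + affect_dim, hidden,
                              batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)                    # additive attention over time steps
        self.classifier = nn.Linear(2 * hidden, n_emotions)

    def forward(self, token_ids):
        # Concatenate semantic and affect views of each token.
        x = torch.cat([self.sem_emb(token_ids), self.affect_emb(token_ids)], dim=-1)
        h, _ = self.bilstm(x)                                   # (batch, seq, 2*hidden)
        weights = torch.softmax(self.attn(h).squeeze(-1), dim=-1)
        context = (weights.unsqueeze(-1) * h).sum(dim=1)        # attention-weighted sentence vector
        return self.classifier(context), weights

model = AffectBiLSTMAttention(vocab_size=10000)
logits, attn_weights = model(torch.randint(0, 10000, (2, 20)))
```

The attention weights returned alongside the logits are the kind of signal a qualitative analysis like the one described above would inspect to see which terms the network emphasizes.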
Entity-sensitive attention and fusion network for entity-level multimodal sentiment classification
National Research Foundation (NRF), Singapore
Target-oriented Sentiment Classification with Sequential Cross-modal Semantic Graph
Multi-modal aspect-based sentiment classification (MABSC) is the task of classifying the sentiment of a target entity mentioned in a sentence and an image. However, previous methods have failed to account for the fine-grained semantic association between the image and the text, which results in limited identification of fine-grained image aspects and opinions. To address these limitations, in this paper we propose a new approach called SeqCSG, which enhances the encoder-decoder sentiment classification framework using sequential cross-modal semantic graphs. SeqCSG utilizes image captions and scene graphs to extract both global and local fine-grained image information and considers them as elements of the cross-modal semantic graph along with tokens from tweets. The sequential cross-modal semantic graph is represented as a sequence with a multi-modal adjacency matrix indicating relationships between elements. Experimental results show that the approach outperforms existing methods and achieves state-of-the-art performance on two standard datasets. Further analysis demonstrates that the model can implicitly learn the correlation between fine-grained information of the image and the text for a given target. Our code is available at https://github.com/zjukg/SeqCSG.
Comment: ICANN 2023
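To illustrate the core idea of a sequential cross-modal semantic graph, here is a minimal sketch in which text tokens and image-derived elements form one sequence, and a multi-modal adjacency matrix records which elements are related. The elements, edges, and the connect helper are illustrative assumptions, not SeqCSG's actual graph construction.

```python
# Minimal sketch: a cross-modal semantic graph flattened into a sequence,
# with an adjacency matrix marking related elements. Elements and edges
# are invented for illustration.
import numpy as np

text_tokens = ["great", "concert", "by", "@band"]
image_elems = ["stage", "crowd", "stage-holds-band"]  # caption words + a scene-graph triple
sequence = text_tokens + image_elems

n = len(sequence)
adj = np.eye(n, dtype=int)                 # every element is related to itself

def connect(a, b):
    """Add a symmetric edge between two sequence elements."""
    i, j = sequence.index(a), sequence.index(b)
    adj[i, j] = adj[j, i] = 1

connect("concert", "stage")                # cross-modal edge: text <-> image element
connect("@band", "stage-holds-band")       # target entity linked to a scene-graph relation
connect("stage", "stage-holds-band")       # intra-image edge

# Such a matrix can mask attention in an encoder-decoder model so that
# each element attends only to its graph neighbours.
print(adj)
```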