A Multi-modal Approach to Fine-grained Opinion Mining on Video Reviews
Despite the recent advances in opinion mining for written reviews, few works
have tackled the problem on other sources of reviews. In light of this issue,
we propose a multi-modal approach for mining fine-grained opinions from video
reviews that is able to determine the aspects of the item under review that are
being discussed and the sentiment orientation towards them. Our approach works
at the sentence level without the need for time annotations and uses features
derived from the audio, video and language transcriptions of its contents. We
evaluate our approach on two datasets and show that leveraging the video and
audio modalities consistently provides increased performance over text-only
baselines, providing evidence these extra modalities are key in better
understanding video reviews.
Comment: Second Grand Challenge and Workshop on Multimodal Language, ACL 2020
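The fusion step described above can be sketched as simple late fusion: per-sentence feature vectors from the text, audio, and video modalities are concatenated before classification. This is our own minimal illustration with hypothetical feature dimensions and a toy linear scorer, not the paper's actual architecture.

```python
# Illustrative late-fusion sketch: concatenate per-modality sentence
# features, then score them. Feature values, dimensions, and the
# scoring rule are hypothetical placeholders.

def fuse_modalities(text_feats, audio_feats, video_feats):
    """Concatenate per-sentence feature vectors from the three modalities."""
    return text_feats + audio_feats + video_feats

def classify_sentiment(fused, weights, bias=0.0):
    """Toy linear scorer: positive if the weighted sum exceeds the bias."""
    score = sum(f * w for f, w in zip(fused, weights))
    return "positive" if score > bias else "negative"

fused = fuse_modalities([0.2, 0.7], [0.1], [0.4])
label = classify_sentiment(fused, weights=[1.0, 1.0, 1.0, 1.0])
```

In practice the classifier would be learned, but the key point the abstract makes is that the extra modalities enter as additional features alongside the text representation.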
Exploiting BERT for End-to-End Aspect-based Sentiment Analysis
In this paper, we investigate the modeling power of contextualized embeddings
from pre-trained language models, e.g. BERT, on the E2E-ABSA task.
Specifically, we build a series of simple yet insightful neural baselines to
deal with E2E-ABSA. The experimental results show that even with a simple
linear classification layer, our BERT-based architecture can outperform
state-of-the-art works. Besides, we also standardize the comparative study by
consistently utilizing a hold-out validation dataset for model selection, which
is largely ignored by previous works. Therefore, our work can serve as a
BERT-based benchmark for E2E-ABSA.
Comment: NUT workshop @ EMNLP-IJCNLP 2019
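E2E-ABSA is commonly cast as unified sequence tagging: each token receives a tag such as B-POS / I-POS / O, so aspect extraction and sentiment classification are decoded jointly from one tag sequence. The decoder below is a hedged sketch of that idea; the tag scheme shown is one common choice, not necessarily the paper's exact label set.

```python
# Sketch of decoding unified E2E-ABSA tags into (aspect, sentiment)
# pairs. Tags follow a BIO scheme with the sentiment fused into the
# label, e.g. B-POS ("begin, positive"), I-POS, O ("outside").

def decode_unified_tags(tokens, tags):
    """Turn parallel (token, tag) sequences into (aspect phrase, sentiment) tuples."""
    results, span, sentiment = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if span:  # close any open span before starting a new one
                results.append((" ".join(span), sentiment))
            span, sentiment = [token], tag[2:]
        elif tag.startswith("I-") and span:
            span.append(token)  # continue the current aspect span
        else:
            if span:  # an O tag closes the current span
                results.append((" ".join(span), sentiment))
            span, sentiment = [], None
    if span:
        results.append((" ".join(span), sentiment))
    return results

tokens = ["the", "battery", "life", "is", "great"]
tags = ["O", "B-POS", "I-POS", "O", "O"]
# decode_unified_tags(tokens, tags) -> [("battery life", "POS")]
```

In the paper's setting, a BERT encoder followed by even a simple linear layer produces these per-token tags; the decoding step above is the same regardless of the classifier's complexity.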
MEMD-ABSA: A Multi-Element Multi-Domain Dataset for Aspect-Based Sentiment Analysis
Aspect-based sentiment analysis is a long-standing research interest in the
field of opinion mining, and in recent years, researchers have gradually
shifted their focus from simple ABSA subtasks to end-to-end multi-element ABSA
tasks. However, the datasets currently used in the research are limited to
individual elements of specific tasks, usually focusing on in-domain settings,
ignoring implicit aspects and opinions, and with a small data scale. To address
these issues, we propose a large-scale Multi-Element Multi-Domain dataset
(MEMD) that covers the four ABSA elements (aspect, category, opinion, and
sentiment) across five domains, including nearly
20,000 review sentences and 30,000 quadruples annotated with explicit and
implicit aspects and opinions for ABSA research. Meanwhile, we evaluate
generative and non-generative baselines on multiple ABSA subtasks under the
open domain setting, and the results show that open domain ABSA as well as
mining implicit aspects and opinions remain ongoing challenges to be addressed.
The datasets are publicly released at \url{https://github.com/NUSTM/MEMD-ABSA}.
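A quadruple annotation of the kind described above can be represented as a small record type in which an implicit aspect or opinion is simply absent. The field names below are our own illustration, not the MEMD-ABSA release format.

```python
# Illustrative container for an ABSA sentiment quadruple
# (aspect, category, opinion, sentiment). None marks an implicit
# aspect or opinion, mirroring the dataset's explicit/implicit
# distinction. Field names are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Quadruple:
    aspect: Optional[str]    # None => implicit aspect
    category: str            # domain-specific aspect category
    opinion: Optional[str]   # None => implicit opinion
    sentiment: str           # e.g. "positive" / "negative" / "neutral"

    def is_implicit(self) -> bool:
        """True if either the aspect or the opinion is not stated explicitly."""
        return self.aspect is None or self.opinion is None

# "Absolutely delicious!" -- the aspect (the food) is implicit:
q = Quadruple(aspect=None, category="food quality",
              opinion="delicious", sentiment="positive")
```

Modeling implicit elements as `None` makes the challenge the abstract highlights concrete: a system must predict the category and sentiment even when no aspect or opinion term appears in the sentence to extract.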
GIELLM: Japanese General Information Extraction Large Language Model Utilizing Mutual Reinforcement Effect
Information Extraction (IE) stands as a cornerstone in natural language
processing, traditionally segmented into distinct sub-tasks. The advent of
Large Language Models (LLMs) heralds a paradigm shift, suggesting the
feasibility of a singular model addressing multiple IE subtasks. In this vein,
we introduce the General Information Extraction Large Language Model (GIELLM),
which integrates Text Classification, Sentiment Analysis, Named Entity
Recognition, Relation Extraction, and Event Extraction using a uniform
input-output schema. This innovation marks the first instance of a model
simultaneously handling such a diverse array of IE subtasks. Notably, the
GIELLM leverages the Mutual Reinforcement Effect (MRE), enhancing performance
in integrated tasks compared to their isolated counterparts. Our experiments
demonstrate State-of-the-Art (SOTA) results in five out of six Japanese mixed
datasets, significantly surpassing GPT-3.5-Turbo. Further, an independent
evaluation using the novel Text Classification Relation and Event
Extraction(TCREE) dataset corroborates the synergistic advantages of MRE in
text and word classification. This breakthrough paves the way for most IE
subtasks to be subsumed under a singular LLM framework. Specialized, fine-tuned
task-specific models are no longer needed.
Comment: 10 pages, 6 figures
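A uniform input-output schema of the kind described can be sketched as casting every IE subtask into the same (instruction, text) -> string interface, so one model serves them all. The instruction wording and task names below are hypothetical illustrations, not GIELLM's actual prompt format.

```python
# Hedged sketch of a uniform input-output schema for multiple IE
# subtasks. Each task differs only in its instruction; the surrounding
# template is shared, so a single model can handle all of them.
# Instruction strings and task keys here are our own assumptions.

TASK_INSTRUCTIONS = {
    "text_classification": "Classify the topic of the text.",
    "sentiment": "Classify the sentiment of the text.",
    "ner": "Extract all named entities with their types.",
    "relation": "Extract all entity-relation-entity triples.",
    "event": "Extract all events with their triggers and arguments.",
}

def build_prompt(task: str, text: str) -> str:
    """Render any supported IE subtask in the same input schema."""
    instruction = TASK_INSTRUCTIONS[task]
    return f"Task: {instruction}\nText: {text}\nOutput:"

prompt = build_prompt("sentiment", "The new phone is fantastic.")
```

The Mutual Reinforcement Effect the abstract describes arises when such tasks are trained jointly under this shared schema, so that, for example, entity cues improve classification and vice versa.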