Knowing What, How and Why: A Near Complete Solution for Aspect-based Sentiment Analysis
Target-based sentiment analysis or aspect-based sentiment analysis (ABSA)
refers to addressing various sentiment analysis tasks at a fine-grained level,
which includes but is not limited to aspect extraction, aspect sentiment
classification, and opinion extraction. There exist many solvers of the above
individual subtasks or a combination of two subtasks, and they can work
together to tell a complete story, i.e. the discussed aspect, the sentiment on
it, and the cause of the sentiment. However, no previous ABSA research tried to
provide a complete solution in one shot. In this paper, we introduce a new
subtask under ABSA, named aspect sentiment triplet extraction (ASTE).
In particular, a solver of this task needs to extract triplets (What, How, Why)
from the inputs, which tell WHAT the targeted aspects are, HOW their sentiment
polarities lean, and WHY they have such polarities (i.e., the opinion reasons). For
instance, one triplet from "Waiters are very friendly and the pasta is simply
average" could be ('Waiters', positive, 'friendly'). We propose a two-stage
framework to address this task. The first stage predicts what, how and why in a
unified model, and then the second stage pairs up the predicted what (how) and
why from the first stage to output triplets. In the experiments, our framework
sets a benchmark performance for this novel triplet extraction task.
Meanwhile, it outperforms several strong baselines adapted from state-of-the-art
related methods.
Comment: This paper is accepted at AAAI 202
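As an illustration of the (What, How, Why) triplet format and the second-stage pairing idea, here is a toy Python sketch; the `pair_by_distance` heuristic, its tie-breaking rule, and the token positions are illustrative assumptions, not the paper's actual model:

```python
from collections import namedtuple

# A (What, How, Why) triplet: the aspect term, its sentiment
# polarity, and the opinion span explaining that polarity.
Triplet = namedtuple("Triplet", ["aspect", "sentiment", "opinion"])

def pair_by_distance(aspects, opinions):
    """Toy stand-in for the second stage: pair each predicted
    (aspect, sentiment) with the nearest predicted opinion span,
    preferring an opinion that follows the aspect on ties.
    `aspects` holds (term, sentiment, token_position) tuples;
    `opinions` holds (term, token_position) tuples."""
    triplets = []
    for term, sentiment, pos in aspects:
        opinion = min(
            opinions,
            key=lambda o: (abs(o[1] - pos), 0 if o[1] > pos else 1),
        )
        triplets.append(Triplet(term, sentiment, opinion[0]))
    return triplets

# Token positions from "Waiters are very friendly and the pasta
# is simply average".
aspects = [("Waiters", "positive", 0), ("pasta", "neutral", 6)]
opinions = [("friendly", 3), ("average", 9)]
print(pair_by_distance(aspects, opinions))
```

A real second stage would learn the pairing from data; the distance heuristic here only makes the input/output contract concrete.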
Multimodal Sentiment Analysis Based on Deep Learning: Recent Progress
Multimodal sentiment analysis is an important research topic in the field of NLP, aiming to analyze speakers' sentiment tendencies through features extracted from textual, visual, and acoustic modalities. Its main methods are based on machine learning and deep learning. Machine learning-based methods rely heavily on labeled data, whereas deep learning-based methods can overcome this shortcoming and capture the in-depth semantic information and modal characteristics of the data, as well as the interactive information between multimodal data. In this paper, we survey the deep learning-based methods, including the fusion of text and image and the fusion of text, image, audio, and video. Specifically, we discuss the main problems of these methods and the future directions. Finally, we review the work on multimodal sentiment analysis in conversation.
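The two broad fusion strategies behind such methods can be sketched minimally; the functions below are generic illustrations under assumed vector and score conventions, not any surveyed model:

```python
def early_fusion(modalities):
    """Feature-level fusion: concatenate per-modality feature
    vectors (lists of floats) into one joint representation."""
    fused = []
    for vec in modalities:
        fused.extend(vec)
    return fused

def late_fusion(scores, weights=None):
    """Decision-level fusion: weighted average of per-modality
    sentiment scores, each assumed to lie in [-1, 1]."""
    weights = weights or [1.0] * len(scores)
    total = sum(w * s for w, s in zip(weights, scores))
    return total / sum(weights)

# Toy feature vectors for text, image, and audio modalities.
text, image, audio = [0.2, 0.8], [0.1], [0.5, 0.4]
print(early_fusion([text, image, audio]))  # [0.2, 0.8, 0.1, 0.5, 0.4]
print(late_fusion([0.6, -0.2, 0.4]))
```

Deep models replace the concatenation with learned interaction layers, but the early/late distinction carries over.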
Impact Of Content Features For Automatic Online Abuse Detection
Online communities have gained considerable importance in recent years due to
the increasing number of people connected to the Internet. Moderating user
content in online communities is mainly performed manually, and reducing the
workload through automatic methods is of great financial interest for community
maintainers. Often, the industry uses basic approaches such as bad words
filtering and regular expression matching to assist the moderators. In this
article, we consider the task of automatically determining whether a message is
abusive. This task is complex since messages are written in a non-standardized
way, with spelling errors, abbreviations, and community-specific codes.
First, we evaluate the system that we propose using standard features of online
messages. Then, we evaluate the impact of the addition of pre-processing
strategies, as well as original specific features developed for the community
of an online in-browser strategy game. Finally, we use feature selection to
analyze the usefulness of this wide range of features. This work
can lead to two possible applications: 1) automatically flagging potentially
abusive messages to draw the moderators' attention to a narrow subset of
messages; and 2) fully automating the moderation process by deciding whether a
message is abusive without any human intervention.
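The basic industry baseline mentioned above (bad-word filtering with light normalization) can be sketched as follows; the word list, leet-speak map, and function name are hypothetical illustrations, not the article's system:

```python
import re

# Illustrative bad-word filter of the kind used as an industry
# baseline. The word list and substitution map are made up.
BAD_WORDS = {"idiot", "moron"}
LEET = str.maketrans({"1": "i", "0": "o", "3": "e", "@": "a"})

def flag_message(message):
    """Return True if the message contains a listed word after
    light normalization (lowercasing, leet-speak substitution)."""
    normalized = message.lower().translate(LEET)
    tokens = re.findall(r"[a-z]+", normalized)
    return any(tok in BAD_WORDS for tok in tokens)

print(flag_message("You M0RON!"))  # True
print(flag_message("Good game."))  # False
```

Such filters illustrate why richer content features and learned classifiers are needed: simple lists miss spelling variants and community-specific codes that normalization cannot anticipate.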