7,838 research outputs found
A Multi-modal Approach to Fine-grained Opinion Mining on Video Reviews
Despite the recent advances in opinion mining for written reviews, few works
have tackled the problem on other sources of reviews. In light of this issue,
we propose a multi-modal approach for mining fine-grained opinions from video
reviews that is able to determine the aspects of the item under review that are
being discussed and the sentiment orientation towards them. Our approach works
at the sentence level without the need for time annotations and uses features
derived from the audio, video and language transcriptions of its contents. We
evaluate our approach on two datasets and show that leveraging the video and
audio modalities consistently provides increased performance over text-only
baselines, providing evidence these extra modalities are key in better
understanding video reviews.
Comment: Second Grand Challenge and Workshop on Multimodal Language, ACL 202
Recognizing cited facts and principles in legal judgements
In common law jurisdictions, legal professionals cite facts and legal principles from precedent cases to support their arguments before the court for their intended outcome in a current case. This practice stems from the doctrine of stare decisis, where cases that have similar facts should receive similar decisions with respect to the principles. It is essential for legal professionals to identify such facts and principles in precedent cases, though this is a highly time-intensive task. In this paper, we present studies that demonstrate that human annotators can achieve reasonable agreement on which sentences in legal judgements contain cited facts and principles (respectively, κ=0.65 and κ=0.95 for inter- and intra-annotator agreement). We further demonstrate that it is feasible to automatically annotate sentences containing such legal facts and principles in a supervised machine learning framework based on linguistic features, reporting per-category precision and recall figures of between 0.79 and 0.89 for classifying sentences in legal judgements as cited facts, principles or neither using a Bayesian classifier, with an overall κ of 0.72 with the human-annotated gold standard.
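The classification setup this abstract describes can be sketched with an off-the-shelf Bayesian text classifier. This is a minimal illustrative sketch only: the feature set, example sentences, and label names are our assumptions, not the paper's actual linguistic features or data.

```python
# Hedged sketch: labelling judgement sentences as cited fact / principle /
# neither with a multinomial Naive Bayes classifier over word n-grams.
# The tiny training set below is invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sentences = [
    "The claimant suffered injury when the scaffolding collapsed.",
    "A duty of care arises where harm is reasonably foreseeable.",
    "The hearing was adjourned until the following Monday.",
]
labels = ["fact", "principle", "neither"]

# Bag-of-n-grams features feeding a Bayesian classifier, as in the paper's
# general framework (the real system uses richer linguistic features).
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(sentences, labels)

prediction = model.predict(["Liability requires a breach of the duty of care."])[0]
```

A real reproduction would train on annotated judgement sentences and report per-category precision/recall against the gold standard, as the paper does.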
Transfer Learning for Speech and Language Processing
Transfer learning is a vital technique that generalizes models trained for
one setting or task to other settings or tasks. For example, in speech
recognition, an acoustic model trained for one language can be used to
recognize speech in another language, with little or no re-training data.
Transfer learning is closely related to multi-task learning (cross-lingual vs.
multilingual), and is traditionally studied in the name of `model adaptation'.
Recent advances in deep learning show that transfer learning becomes much
easier and more effective with high-level abstract features learned by deep
models, and the `transfer' can be conducted not only between data distributions
and data types, but also between model structures (e.g., shallow nets and deep
nets) or even model types (e.g., Bayesian models and neural models). This
review paper summarizes some recent prominent research in this direction,
particularly for speech and language processing. We also report some results
from our group and highlight the potential of this very interesting research
field.
Comment: 13 pages, APSIPA 201
Neuro-Inspired Hierarchical Multimodal Learning
Integrating and processing information from various sources or modalities are
critical for obtaining a comprehensive and accurate perception of the real
world. Drawing inspiration from neuroscience, we develop the
Information-Theoretic Hierarchical Perception (ITHP) model, which utilizes the
concept of information bottleneck. Distinct from most traditional fusion models
that aim to incorporate all modalities as input, our model designates the prime
modality as input, while the remaining modalities act as detectors in the
information pathway. Our proposed perception model focuses on constructing an
effective and compact information flow by achieving a balance between the
minimization of mutual information between the latent state and the input modal
state, and the maximization of mutual information between the latent states and
the remaining modal states. This approach leads to compact latent state
representations that retain relevant information while minimizing redundancy,
thereby substantially enhancing the performance of downstream tasks.
Experimental evaluations on both the MUStARD and CMU-MOSI datasets demonstrate
that our model consistently distills crucial information in multimodal learning
scenarios, outperforming state-of-the-art benchmarks.
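The balance this abstract describes can be written as an information-bottleneck-style objective. The notation below is ours, not necessarily the paper's exact formulation: latent state $h$, prime-modality input $x_0$, remaining modal states $x_1,\dots,x_K$, and a trade-off weight $\gamma$.

```latex
% Information-bottleneck-style objective (notation is ours):
% compress the prime-modality input while keeping the latent state
% predictive of the remaining modal states.
\min_{p(h \mid x_0)} \; I(h; x_0) \;-\; \gamma \sum_{k=1}^{K} I(h; x_k)
```

Minimizing the first term yields a compact latent state; maximizing the second term (via the negative sign) retains the information relevant to the detector modalities, matching the trade-off described in the abstract.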
SentiCap: Generating Image Descriptions with Sentiments
The recent progress on image recognition and language modeling is making
automatic description of image content a reality. However, stylized,
non-factual aspects of the written description are missing from the current
systems. One such style is descriptions with emotions, which is commonplace in
everyday communication, and influences decision-making and interpersonal
relationships. We design a system to describe an image with emotions, and
present a model that automatically generates captions with positive or negative
sentiments. We propose a novel switching recurrent neural network with
word-level regularization, which is able to produce emotional image captions
using only 2000+ training sentences containing sentiments. We evaluate the
captions with different automatic and crowd-sourcing metrics. Our model
compares favourably in common quality metrics for image captioning. In 84.6% of
cases the generated positive captions were judged as being at least as
descriptive as the factual captions. Of these positive captions, 88% were
confirmed by the crowd-sourced workers as having the appropriate sentiment.
The Multimodal Sentiment Analysis in Car Reviews (MuSe-CaR) Dataset: Collection, Insights and Improvements
Truly real-life data presents a strong but exciting challenge for sentiment
and emotion research. The high variety of possible `in-the-wild' properties
makes large datasets such as these indispensable for building robust machine
learning models. A dataset covering the challenges of each modality in
sufficient quantity and depth to force exploratory analysis of the interplay
of all modalities has not yet been made available in this context. In this
contribution, we present MuSe-CaR, a first-of-its-kind multimodal dataset. The
data is publicly available, as it recently served as the testing bed for the
1st Multimodal Sentiment Analysis Challenge, which focused on the tasks of
emotion, emotion-target engagement, and trustworthiness recognition by means
of comprehensively integrating the audio-visual and language modalities.
Furthermore, we give a thorough overview of the dataset in terms of collection
and annotation, including annotation tiers not used in this year's MuSe 2020.
In addition, for one of the sub-challenges - predicting the level of
trustworthiness - no participant outperformed the baseline model, so we
propose a simple but highly efficient Multi-Head-Attention network that,
using multimodal fusion, exceeds the baseline by around 0.2 CCC (almost 50%
improvement).
Comment: accepted version
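The CCC figure quoted above is the concordance correlation coefficient, the standard agreement metric in continuous emotion recognition. Below is a minimal sketch of its textbook definition; this is the generic formula, not necessarily the challenge organisers' exact implementation.

```python
# Hedged sketch: Lin's concordance correlation coefficient (CCC),
# the metric behind the "0.2 CCC" improvement quoted in the abstract.
import numpy as np

def ccc(x: np.ndarray, y: np.ndarray) -> float:
    """CCC between a prediction sequence x and a gold sequence y.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2),
    using population variances/covariance. Ranges over [-1, 1];
    1.0 means perfect agreement in both correlation and scale.
    """
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances (ddof=0)
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    return 2 * cov / (vx + vy + (mx - my) ** 2)

perfect = ccc(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 3.0]))  # 1.0
shifted = ccc(np.array([1.0, 2.0, 3.0]), np.array([2.0, 3.0, 4.0]))  # < 1.0
```

Unlike plain Pearson correlation, CCC also penalises mean and scale offsets, which is why it is preferred for continuous trustworthiness/emotion traces.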
CFN-ESA: A Cross-Modal Fusion Network with Emotion-Shift Awareness for Dialogue Emotion Recognition
Multimodal Emotion Recognition in Conversation (ERC) has garnered growing
attention from research communities in various fields. In this paper, we
propose a cross-modal fusion network with emotion-shift awareness (CFN-ESA) for
ERC. Extant approaches employ each modality equally without distinguishing the
amount of emotional information, rendering it hard to adequately extract
complementary and associative information from multimodal data. To cope with
this problem, in CFN-ESA, textual modalities are treated as the primary source
of emotional information, while visual and acoustic modalities are taken as the
secondary sources. Moreover, most multimodal ERC models ignore emotion-shift
information and overfocus on contextual information, leading to failures of
emotion recognition in emotion-shift scenarios. We elaborate an emotion-shift
module to address this challenge. CFN-ESA mainly consists of a unimodal
encoder (RUME), a cross-modal encoder (ACME), and an emotion-shift module
(LESM). RUME is applied to extract conversation-level contextual emotional
cues while pulling together the data distributions between modalities; ACME is
utilized to perform multimodal interaction centered on the textual modality;
LESM is used to model emotion shift and capture related information, thereby
guiding the learning of the main task. Experimental results demonstrate that
CFN-ESA can effectively improve performance on ERC and markedly outperform
state-of-the-art models.
Comment: 13 pages, 10 figures