Extraction and Analysis of Dynamic Conversational Networks from TV Series
Identifying and characterizing the dynamics of modern TV series subplots is
an open problem. One way is to study the underlying social network of
interactions between the characters. Standard dynamic network extraction
methods rely on temporal integration, either over the whole period
considered or over a sequence of several time-slices. However, these turn
out to be inappropriate in the case of TV series, because the scenes shown
onscreen alternately focus on parallel storylines and do not necessarily
respect a traditional chronology. In this article, we introduce Narrative
Smoothing, a novel network extraction method that takes advantage of the
plot properties to overcome some of these limitations. We apply our method
to a corpus of three popular series and compare it to both standard
approaches. Narrative Smoothing leads to more relevant observations when it
comes to the characterization of the protagonists and their relationships,
confirming its appropriateness for modeling the intertwined storylines
constituting the plots.
Comment: arXiv admin note: substantial text overlap with arXiv:1602.0781
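The time-slice baseline that the abstract contrasts Narrative Smoothing against can be sketched in a few lines: scenes are binned into fixed-length slices and character co-occurrences are accumulated per slice. The scene format and the `slice_len` parameter are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict
from itertools import combinations

def time_slice_networks(scenes, slice_len):
    """Aggregate character co-occurrences into consecutive time-slices.

    scenes: iterable of (time, characters) pairs, where characters is the
    set of names appearing together in one scene.
    Returns {slice_index: {(a, b): weight}} with alphabetically ordered pairs.
    """
    slices = defaultdict(lambda: defaultdict(int))
    for t, chars in scenes:
        idx = int(t // slice_len)  # which slice this scene falls into
        for a, b in combinations(sorted(set(chars)), 2):
            slices[idx][(a, b)] += 1
    return {i: dict(edges) for i, edges in slices.items()}

# Hypothetical scene list with two interleaved storylines.
scenes = [
    (0, {"Arya", "Ned"}),      # storyline A
    (30, {"Dany", "Jorah"}),   # storyline B, interleaved
    (70, {"Arya", "Ned"}),
]
nets = time_slice_networks(scenes, slice_len=60)
```

The weakness criticized in the abstract is visible even here: slice 0 merges two unrelated storylines simply because their scenes are adjacent in broadcast time.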
Combining link and content-based information in a Bayesian inference model for entity search
An architectural model of a Bayesian inference network to support entity search in semantic knowledge bases is presented. The model supports the explicit combination of primitive data type and object-level semantics under a single computational framework. A flexible query model is supported capable to reason with the availability of simple semantics in querie
SALSA: A Novel Dataset for Multimodal Group Behavior Analysis
Studying free-standing conversational groups (FCGs) in unstructured social
settings (e.g., a cocktail party) is gratifying due to the wealth of information
available at the group (mining social networks) and individual (recognizing
native behavioral and personality traits) levels. However, analyzing social
scenes involving FCGs is also highly challenging due to the difficulty in
extracting behavioral cues such as target locations, their speaking activity
and head/body pose due to crowdedness and presence of extreme occlusions. To
this end, we propose SALSA, a novel dataset facilitating multimodal and
Synergetic sociAL Scene Analysis, and make two main contributions to research
on automated social interaction analysis: (1) SALSA records social interactions
among 18 participants in a natural, indoor environment for over 60 minutes,
under poster presentation and cocktail party contexts, presenting
difficulties in the form of low-resolution images, lighting variations,
numerous occlusions, reverberations and interfering sound sources; (2) to
alleviate these problems, we facilitate multimodal analysis by recording the
social interplay using four static surveillance cameras and sociometric badges
worn by each participant, comprising microphone, accelerometer, Bluetooth
and infrared sensors. In addition to raw data, we also provide annotations
concerning individuals' personality as well as their position, head, body
orientation and F-formation information over the entire event duration. Through
extensive experiments with state-of-the-art approaches, we show (a) the
limitations of current methods and (b) how the recorded multiple cues
synergetically aid automatic analysis of social interactions. SALSA is
available at http://tev.fbk.eu/salsa.
Comment: 14 pages, 11 figures
Whose Emotion Matters? Speaking Activity Localisation without Prior Knowledge
The task of emotion recognition in conversations (ERC) benefits from the
availability of multiple modalities, as provided, for example, in the
video-based Multimodal EmotionLines Dataset (MELD). However, only a few
research approaches use both acoustic and visual information from the MELD
videos. There are two reasons for this: First, label-to-video alignments in
MELD are noisy, making those videos an unreliable source of emotional speech
data. Second, conversations can involve several people in the same scene, which
requires the localisation of the utterance source. In this paper, we introduce
MELD with Fixed Audiovisual Information via Realignment (MELD-FAIR). By using
recent active speaker detection and automatic speech recognition models, we are
able to realign the videos of MELD and capture the facial expressions from
speakers in 96.92% of the utterances provided in MELD. Experiments with a
self-supervised voice recognition model indicate that the realigned MELD-FAIR
videos more closely match the transcribed utterances given in the MELD dataset.
Finally, we devise a model for emotion recognition in conversations trained on
the realigned MELD-FAIR videos, which outperforms state-of-the-art models for
ERC based on vision alone. This indicates that localising the source of
speaking activities is indeed effective for extracting facial expressions from
the uttering speakers and that faces provide more informative visual cues than
the visual features state-of-the-art models have been using so far. The
MELD-FAIR realignment data, and the code of the realignment procedure and of
the emotion recognition, are available at
https://github.com/knowledgetechnologyuhh/MELD-FAIR.
Comment: 17 pages, 8 figures, 7 tables, Published in Neurocomputing
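The core realignment idea — matching each gold MELD utterance to the candidate video window whose recognized speech best agrees with it — can be sketched in miniature. The window format and the `difflib` string similarity are simplifying assumptions for illustration, not the MELD-FAIR pipeline itself.

```python
from difflib import SequenceMatcher

def realign_utterance(gold_text, candidate_windows):
    """Pick the candidate window whose ASR transcript best matches the
    gold utterance text (a toy stand-in for ASR-based realignment).

    candidate_windows: list of (window_id, asr_transcript) pairs.
    Returns (best_window_id, similarity in [0, 1]).
    """
    def sim(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()
    return max(
        ((wid, sim(gold_text, hyp)) for wid, hyp in candidate_windows),
        key=lambda pair: pair[1],
    )

# Hypothetical ASR outputs for two candidate clips.
windows = [
    ("clip_012", "i told you so"),
    ("clip_013", "we were on a break"),
]
best, score = realign_utterance("We were on a break!", windows)
```

In the real system the candidates would come from active speaker detection, and the matching transcript also identifies whose face to crop.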
Guiding the PLMs with Semantic Anchors as Intermediate Supervision: Towards Interpretable Semantic Parsing
The recent prevalence of pretrained language models (PLMs) has dramatically
shifted the paradigm of semantic parsing, where the mapping from natural
language utterances to structured logical forms is now formulated as a Seq2Seq
task. Despite the promising performance, previous PLM-based approaches often
suffer from hallucination problems due to their neglect of the structural
information contained in the sentence, which essentially constitutes the key
semantics of the logical forms. Furthermore, most works treat PLM as a black
box in which the generation process of the target logical form is hidden
beneath the decoder modules, which greatly hinders the model's intrinsic
interpretability. To address these two issues, we propose to equip current
PLMs with a hierarchical decoder network. By taking the first-principle
structures as the semantic anchors, we propose two novel intermediate
supervision tasks, namely Semantic Anchor Extraction and Semantic Anchor
Alignment, for training the hierarchical decoders and probing the model's
intermediate representations in a self-adaptive manner alongside the
fine-tuning process. We conduct intensive experiments on several semantic
parsing benchmarks and demonstrate that our approach can consistently
outperform the baselines. More importantly, by analyzing the intermediate
representations of the hierarchical decoders, our approach also takes a major
step toward the intrinsic interpretability of PLMs in the domain of semantic
parsing.
Complex Knowledge Base Question Answering: A Survey
Knowledge base question answering (KBQA) aims to answer a question over a
knowledge base (KB). Early studies mainly focused on answering simple questions
over KBs and achieved great success. However, their performance on complex
questions is still far from satisfactory. Therefore, in recent years,
researchers have proposed a large number of novel methods that look into the
challenges of answering complex questions. In this survey, we review recent
advances in KBQA with a focus on solving complex questions, which usually
contain multiple subjects, express compound relations, or involve numerical
operations. In detail, we begin by introducing the complex KBQA task and
relevant background. Then, we describe benchmark datasets for the complex KBQA task
and introduce the construction process of these datasets. Next, we present two
mainstream categories of methods for complex KBQA, namely semantic
parsing-based (SP-based) methods and information retrieval-based (IR-based)
methods. Specifically, we illustrate their procedures with flow designs and
discuss their major differences and similarities. After that, we summarize the
challenges that these two categories of methods encounter when answering
complex questions, and explicate advanced solutions and techniques used in
existing work. Finally, we conclude and discuss several promising directions
related to complex KBQA for future research.
Comment: 20 pages, 4 tables, 7 figures. arXiv admin note: text overlap with arXiv:2105.1164
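The semantic parsing-based (SP-based) category the survey describes can be illustrated at toy scale: a question is first parsed into a logical form, which is then executed against the knowledge base. The question template, relation name, and KB entries below are invented purely for illustration.

```python
# Toy KB: (subject, relation) -> object.
KB = {
    ("Paris", "capital_of"): "France",
    ("Berlin", "capital_of"): "Germany",
}

def parse(question):
    """Map one narrowly templated question to a (subject, relation)
    logical form. Real SP-based systems learn this mapping."""
    prefix, suffix = "What country is ", " the capital of?"
    if question.startswith(prefix) and question.endswith(suffix):
        subject = question[len(prefix):-len(suffix)]
        return (subject, "capital_of")
    raise ValueError("unsupported question template")

def execute(logical_form):
    """Execute the logical form against the KB."""
    return KB[logical_form]

answer = execute(parse("What country is Paris the capital of?"))
```

IR-based methods, by contrast, would retrieve a question-specific subgraph and rank candidate entities directly, without producing an explicit logical form.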
Research on Emotional Conversation Analysis Based on Deep Learning
The capability of a chatbot to express specific emotions during a conversation is one of the key components of artificial intelligence, with an intuitive and quantifiable impact on the chatbot's usability and user satisfaction. Enabling machines to recognize emotions in conversation is challenging, mainly because the information in human dialogue conveys emotions through long-term experience, abundant knowledge, context, and the intricate patterns between affective states. Recently, many studies on neural emotional conversational models have been conducted. However, enabling the chatbot to control what kind of emotion to respond with, according to its own character in the conversation, is still underexplored. At this stage, people are no longer satisfied with using a dialogue system to solve specific tasks and are more eager to achieve genuine communication. In the chat process, if the system can perceive the user's emotions and process them accurately, it can greatly enrich the content of the dialogue and make the user empathize.
In the process of emotional dialogue, our ultimate goal is to make the machine understand human emotions and give matching responses. Based on these two points, this thesis explores in depth the emotion recognition in conversation task and the emotional dialogue generation task. In the past few years, although considerable progress has been made in emotional research on dialogue, there are still difficulties and challenges due to the complex nature of human emotions. The key contributions of this thesis are summarized as follows:
(1) Researchers have recently paid more attention to enhancing natural language models with knowledge graphs, since knowledge graphs encode a wealth of systematic knowledge. A large number of studies have shown that introducing external commonsense knowledge is very helpful for improving feature information. We address the task of emotion recognition in conversations by using external knowledge to enhance semantics. In this work, we employ the external knowledge graph ATOMIC to extract knowledge sources. We propose KES, a new framework that incorporates different elements of external knowledge and conversational semantic role labeling, and builds upon them to learn interactions between the interlocutors participating in a conversation. A conversation is a sequence of coherent and orderly discourses, and capturing long-range context information is a weakness of traditional recurrent networks. We therefore adopt the Transformer, a structure composed of self-attention and feed-forward layers, instead of the traditional RNN model, aiming to capture remote context information. We design a self-attention layer specialized for text features whose semantics are enhanced with external commonsense knowledge. Then, two different LSTM-based networks are responsible for tracking the individual internal state and the external context state. The proposed model is evaluated on three emotion-detection-in-conversation datasets. The experimental results show that our model outperforms state-of-the-art approaches on most of the tested datasets.
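The self-attention operation adopted in place of an RNN for long-range context can be shown in minimal form: each utterance representation attends to every other one via scaled dot-product weights. This is a single head with no learned projections (queries = keys = values), a didactic simplification rather than the KES architecture.

```python
import math

def self_attention(X):
    """Minimal scaled dot-product self-attention over a sequence of
    vectors X. For each position, score against all positions, softmax
    the scores, and return the weighted average of the sequence.
    """
    d = len(X[0])
    out = []
    for q in X:
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in X]
        m = max(scores)                       # subtract max for stability
        exps = [math.exp(s - m) for s in scores]
        w = [e / sum(exps) for e in exps]     # attention weights, sum to 1
        out.append([sum(wi * v[j] for wi, v in zip(w, X)) for j in range(d)])
    return out

# Three toy 2-d utterance representations.
ctx = self_attention([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

Unlike an RNN, every position reaches every other position in one step, which is the property the thesis relies on for remote context.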
(2) We propose an emotional dialogue model based on Seq2Seq, improved in three aspects: model input, encoder structure, and decoder structure, so that the model can generate responses with rich emotion, diversity, and contextual coherence. For the model input, emotional information and positional information are added on top of the word vectors. In the encoder, the proposed model first encodes the current input together with the sentence sentiment to generate a semantic vector, and additionally encodes the context together with the sentence sentiment to generate a context vector, adding contextual information while preserving the independence of the current input. On the decoder side, attention is used to calculate weights for the two semantic vectors separately before decoding, so as to fully integrate the local emotional semantic information and the global emotional semantic information. We used seven objective evaluation metrics to assess the model's generation results, context similarity, response diversity, and emotional responses. Experimental results show that the model can generate diverse responses with rich sentiment and contextual associations.
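The decoder-side step of weighting the two vectors can be sketched as an attention fusion: score each vector against the decoder state, softmax the two scores, and mix. Plain dot-product scoring is a simplifying assumption, not the thesis's exact formulation.

```python
import math

def fuse_contexts(dec_state, sem_vec, ctx_vec):
    """Fuse the local (current-utterance) semantic vector and the global
    (conversation) context vector by attention against the decoder state.
    Returns the weighted sum of the two vectors.
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    scores = [dot(dec_state, sem_vec), dot(dec_state, ctx_vec)]
    m = max(scores)                           # stabilize the softmax
    exps = [math.exp(s - m) for s in scores]
    w = [e / sum(exps) for e in exps]         # two weights, sum to 1
    return [w[0] * s + w[1] * c for s, c in zip(sem_vec, ctx_vec)]

# Toy 2-d vectors: the decoder state is closer to the local semantic vector.
fused = fuse_contexts([1.0, 0.0], [0.9, 0.1], [0.1, 0.9])
```

The decoder state thus pulls the mixture toward whichever source, local or global, is currently more relevant.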
Automatic transcription of multi-genre media archives
This paper describes some recent results of our collaborative work on
developing a speech recognition system for the automatic transcription
of media archives from the British Broadcasting Corporation (BBC). The
material includes a wide diversity of shows with their associated
metadata. The latter are highly diverse in terms of completeness,
reliability and accuracy. First, we investigate how to improve lightly
supervised acoustic training, when timestamp information is inaccurate
and when speech deviates significantly from the transcription, and how
to perform evaluations when no reference transcripts are available.
An automatic timestamp correction method as well as word- and segment-level
combination approaches between the lightly supervised transcripts
and the original programme scripts are presented, which yield improved
metadata. Experimental results show that systems trained using the
improved metadata consistently outperform those trained with only the
original lightly supervised decoding hypotheses. Secondly, we show that
the recognition task may benefit from systems trained on a combination
of in-domain and out-of-domain data. Working with tandem HMMs, we
describe Multi-level Adaptive Networks, a novel technique for
incorporating information from out-of-domain posterior features using a
deep neural network. We show that it provides a substantial reduction in
WER over other systems including a PLP-based baseline, in-domain tandem
features, and the best out-of-domain tandem features.
This research was supported by EPSRC Programme Grant EP/I031022/1 (Natural Speech Technology). This paper was presented at the First Workshop on Speech, Language and Audio in Multimedia, August 22-23, 2013, Marseille. It was published in CEUR Workshop Proceedings at http://ceur-ws.org/Vol-1012/
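The WER figures in which these reductions are reported are the word-level Levenshtein distance (substitutions, insertions, deletions) normalized by reference length; a minimal implementation:

```python
def wer(ref, hyp):
    """Word error rate: minimum edit distance between the reference and
    hypothesis word sequences, divided by the reference length.
    """
    r, h = ref.split(), hyp.split()
    # d[i][j] = edit distance between r[:i] and h[:j].
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                     # deletions
    for j in range(len(h) + 1):
        d[0][j] = j                     # insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)

# One deleted word out of six reference words.
rate = wer("the cat sat on the mat", "the cat sat on mat")
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions.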