Topic Independent Identification of Agreement and Disagreement in Social Media Dialogue
Research on the structure of dialogue has been hampered for years because
large dialogue corpora have not been available. This has impacted the dialogue
research community's ability to develop better theories, as well as good
off-the-shelf tools for dialogue processing. Happily, an increasing amount of
information and opinion exchange occurs in natural dialogue in online forums,
where people share their opinions about a vast range of topics. In particular,
we are interested in rejection in dialogue, also called disagreement and
denial, where the size of available dialogue corpora, for the first time,
offers an opportunity to empirically test theoretical accounts of the
expression and inference of rejection in dialogue. In this paper, we test
whether topic-independent features motivated by theoretical predictions can be
used to recognize rejection in online forums in a topic-independent way. Our
results show that our theoretically motivated features achieve 66% accuracy,
an absolute improvement of 6% over a unigram baseline.
Comment: @inproceedings{Misra2013TopicII, title={Topic Independent Identification of Agreement and Disagreement in Social Media Dialogue}, author={Amita Misra and Marilyn A. Walker}, booktitle={SIGDIAL Conference}, year={2013}}
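As an illustration of the contrast drawn above, the following is a minimal sketch assuming scikit-learn and hypothetical training data; the surface cues listed in cue_features are illustrative examples of topic-independent signals, not the authors' actual feature set.

```python
# Hypothetical contrast between a unigram baseline and a classifier built on
# simple topic-independent surface cues for agree/disagree (rejection) labels.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

def cue_features(texts):
    # Counts of markers that often signal rejection or agreement
    # regardless of the topic under discussion (illustrative list).
    cues = ["no", "not", "n't", "really", "actually", "agree", "but", "?", "!"]
    return np.array([[t.lower().count(c) for c in cues] for t in texts])

unigram_baseline = make_pipeline(CountVectorizer(),
                                 LogisticRegression(max_iter=1000))
topic_independent = make_pipeline(FunctionTransformer(cue_features),
                                  LogisticRegression(max_iter=1000))

# Usage with hypothetical data:
# unigram_baseline.fit(train_texts, train_labels)
# topic_independent.fit(train_texts, train_labels)
```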
Indexing, browsing and searching of digital video
Video is a communications medium that normally brings together moving pictures with a synchronised audio track into a discrete piece or pieces of information. The size of a “piece” of video can variously be referred to as a frame, a shot, a scene, a clip, a programme or an episode, and these are distinguished by their lengths and by their composition. We shall return to the definition of each of these in section 4 of this chapter. In modern society, video is ver…
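The granularity hierarchy sketched above can be made concrete with a small data model. This is only an illustrative sketch using Python dataclasses; the class names and fields are assumptions, not definitions taken from the chapter.

```python
# Illustrative data model of the units described above
# (frame -> shot -> scene -> programme); names and fields are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    index: int          # position of this single still image in the stream
    timestamp_ms: int   # time offset of the frame

@dataclass
class Shot:
    frames: List[Frame] = field(default_factory=list)  # one continuous camera take

@dataclass
class Scene:
    shots: List[Shot] = field(default_factory=list)    # shots sharing time, place and action

@dataclass
class Programme:
    title: str
    scenes: List[Scene] = field(default_factory=list)  # a complete broadcast piece or episode
```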
Multimodal Speech Emotion Recognition Using Audio and Text
Speech emotion recognition is a challenging task, and extensive reliance has
been placed on models that use audio features in building well-performing
classifiers. In this paper, we propose a novel deep dual recurrent encoder
model that utilizes text data and audio signals simultaneously to obtain a
better understanding of speech data. As emotional dialogue is composed of sound
and spoken content, our model encodes the information from audio and text
sequences using dual recurrent neural networks (RNNs) and then combines the
information from these sources to predict the emotion class. This architecture
analyzes speech data from the signal level to the language level, and it thus
utilizes the information within the data more comprehensively than models that
focus on audio features. Extensive experiments are conducted to investigate the
efficacy and properties of the proposed model. Our proposed model outperforms
previous state-of-the-art methods in assigning data to one of four emotion
categories (i.e., angry, happy, sad and neutral) when the model is applied to
the IEMOCAP dataset, as reflected by accuracies ranging from 68.8% to 71.8%.
Comment: 7 pages, Accepted as a conference paper at IEEE SLT 201
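The dual-encoder idea described above can be sketched as follows; this is a minimal, assumed configuration in PyTorch (GRU encoders, concatenation fusion, illustrative layer sizes), not the paper's exact architecture.

```python
# Illustrative dual recurrent encoder: one RNN over audio frames, one over
# word embeddings, fused by concatenation for 4-way emotion classification.
import torch
import torch.nn as nn

class DualRecurrentEncoder(nn.Module):
    def __init__(self, audio_dim=40, text_dim=300, hidden=128, n_classes=4):
        super().__init__()
        self.audio_rnn = nn.GRU(audio_dim, hidden, batch_first=True)
        self.text_rnn = nn.GRU(text_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, audio_seq, text_seq):
        # audio_seq: (batch, frames, audio_dim); text_seq: (batch, tokens, text_dim)
        _, audio_h = self.audio_rnn(audio_seq)  # final hidden state of the audio encoder
        _, text_h = self.text_rnn(text_seq)     # final hidden state of the text encoder
        fused = torch.cat([audio_h[-1], text_h[-1]], dim=-1)
        return self.classifier(fused)           # logits over angry/happy/sad/neutral

# Example usage with random tensors standing in for real features:
model = DualRecurrentEncoder()
logits = model(torch.randn(2, 100, 40), torch.randn(2, 20, 300))
```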