Joint Modeling of Content and Discourse Relations in Dialogues
We present a joint modeling approach to identify salient discussion points in
spoken meetings as well as to label the discourse relations between speaker
turns. A variation of our model is also discussed when discourse relations are
treated as latent variables. Experimental results on two popular meeting
corpora show that our joint model can outperform state-of-the-art approaches
for both phrase-based content selection and discourse relation prediction
tasks. We also evaluate our model on predicting the consistency among team
members' understanding of their group decisions. Classifiers trained with
features constructed from our model achieve significantly better predictive
performance than the state-of-the-art.
Comment: Accepted by ACL 2017. 11 pages
Summarizing Dialogic Arguments from Social Media
Online argumentative dialog is a rich source of information on popular
beliefs and opinions that could be useful to companies as well as governmental
or public policy agencies. Compact, easy to read, summaries of these dialogues
would thus be highly valuable. A priori, it is not even clear what form such a
summary should take. Previous work on summarization has primarily focused on
summarizing written texts, where the notion of an abstract of the text is well
defined. We collect gold standard training data consisting of five human
summaries for each of 161 dialogues on the topics of Gay Marriage, Gun Control
and Abortion. We present several different computational models aimed at
identifying segments of the dialogues whose content should be used for the
summary, using linguistic features and Word2vec features with both SVMs and
Bidirectional LSTMs. We show that we can identify the most important arguments
by using the dialog context with a best F-measure of 0.74 for gun control, 0.71
for gay marriage, and 0.67 for abortion.
Comment: Proceedings of the 21st Workshop on the Semantics and Pragmatics of Dialogue (SemDial 2017)
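The abstract above reports results as F-measures (e.g., a best of 0.74 for gun control). As a reminder of what that number summarizes, here is a minimal sketch of the F-beta score, the harmonic-mean combination of precision and recall; the function name and the toy inputs are illustrative, not taken from the paper.

```python
def f_measure(precision, recall, beta=1.0):
    """F-beta score; beta=1 (the usual F1) weights precision and recall equally."""
    if precision + recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# An F1 of 0.74 can arise from many precision/recall pairs; the
# balanced case is simply P = R = 0.74.
print(round(f_measure(0.74, 0.74), 2))  # 0.74
print(round(f_measure(0.8, 0.6), 4))    # 0.6857
```

Because F1 is a harmonic mean, it is pulled toward the lower of the two components, which is why a skewed pair like (0.8, 0.6) scores below their arithmetic mean.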
Analysis and Detection of Information Types of Open Source Software Issue Discussions
Most modern Issue Tracking Systems (ITSs) for open source software (OSS)
projects allow users to add comments to issues. Over time, these comments
accumulate into discussion threads embedded with rich information about the
software project, which can potentially satisfy the diverse needs of OSS
stakeholders. However, discovering and retrieving relevant information from the
discussion threads is a challenging task, especially when the discussions are
lengthy and the number of issues in ITSs is vast. In this paper, we address
this challenge by identifying the information types presented in OSS issue
discussions. Through qualitative content analysis of 15 complex issue threads
across three projects hosted on GitHub, we uncovered 16 information types and
created a labeled corpus containing 4656 sentences. Our investigation of
supervised, automated classification techniques indicated that, when prior
knowledge about the issue is available, Random Forest can effectively detect
most sentence types using conversational features such as the sentence length
and its position. When classifying sentences from new issues, Logistic
Regression can yield satisfactory performance using textual features for
certain information types, while falling short on others. Our work represents a
nontrivial first step towards tools and techniques for identifying and
obtaining the rich information recorded in the ITSs to support various software
engineering activities and to satisfy the diverse needs of OSS stakeholders.
Comment: 41st ACM/IEEE International Conference on Software Engineering (ICSE 2019)
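The abstract names two conversational features, sentence length and position, as effective inputs for the Random Forest classifier. A minimal sketch of extracting those two features from an issue thread might look as follows; the feature definitions here are illustrative guesses, not the authors' exact formulation.

```python
def conversational_features(thread):
    """Extract two conversational features (sentence length and relative
    position) from an issue thread given as a list of sentences.
    Illustrative only; the paper's feature set is richer than this."""
    n = len(thread)
    feats = []
    for i, sent in enumerate(thread):
        feats.append({
            "length": len(sent.split()),                 # tokens in the sentence
            "position": i / (n - 1) if n > 1 else 0.0,   # 0.0 = first, 1.0 = last
        })
    return feats

thread = [
    "Steps to reproduce the crash:",
    "Run the build on Windows.",
    "Thanks, this is fixed now.",
]
print(conversational_features(thread))
```

Feature dictionaries like these would then be fed to any off-the-shelf classifier (e.g., a Random Forest) alongside labels from the annotated corpus.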
Extractive Conversation Summarization Driven by Textual Entailment Prediction
Summarizing conversations such as meetings, email threads, or discussion forums poses relevant challenges in how to model the dialogue structure. Existing approaches mainly focus on premise-claim entailment relationships while neglecting contrasting or uncertain assertions. Furthermore, existing techniques are abstractive, thus requiring a training set of human-generated summaries. With the twofold aim of enriching the dialogue representation and addressing conversation summarization in the absence of training data, we present an extractive conversation summarization pipeline. We explore the use of contradictions and neutral premise-claim relations, both within the same document and across different documents. The results achieved on four datasets covering different domains show that applying unsupervised methods on top of a refined premise-claim selection achieves competitive performance in most domains.
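One way to picture an extractive, entailment-driven selection step is to score each sentence by how many entailment or contradiction links it participates in, then keep the highest-scoring sentences in their original order. The sketch below uses a stand-in `relation` callback where a real pipeline would plug in an NLI model; it is not the paper's exact pipeline, just an unsupervised selection skeleton.

```python
from itertools import combinations

def select_summary(sentences, relation, k=2):
    """Rank sentences by the number of entailment/contradiction links
    they participate in and return the top-k in document order.
    `relation(a, b)` is a placeholder for an NLI model returning
    'entail', 'contradict', or 'neutral'."""
    score = [0] * len(sentences)
    for i, j in combinations(range(len(sentences)), 2):
        if relation(sentences[i], sentences[j]) in ("entail", "contradict"):
            score[i] += 1
            score[j] += 1
    top = sorted(sorted(range(len(sentences)), key=lambda i: -score[i])[:k])
    return [sentences[i] for i in top]

# Toy relation: sentences are "linked" when both mention a keyword.
toy = lambda a, b: "entail" if "budget" in a.lower() and "budget" in b.lower() else "neutral"
sents = ["We should cut the budget.", "The budget cut is risky.", "Nice weather today."]
print(select_summary(sents, toy, k=2))
```

The key property this preserves from the abstract is that contradiction links count as informative structure rather than being discarded, so contrasting assertions can surface in the summary.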
Identifying Product Defects from User Complaints: A Probabilistic Defect Model
The recent surge in social media use has created a massive amount of unstructured textual complaints about products and services. However, discovering potential product defects from large amounts of unstructured text is a nontrivial task. In this paper, we develop a probabilistic defect model (PDM) that simultaneously identifies the most critical product issues and the corresponding product attributes. We leverage domain-oriented key attributes of a product (e.g., product model, year of production, defective components, symptoms) to identify and acquire integral information about defects. We conduct comprehensive quantitative and qualitative evaluations to ensure the quality of the discovered information. Experimental results demonstrate that our proposed model outperforms an existing unsupervised method (k-means clustering) and can find more valuable information. Our research has significant managerial implications for managers, manufacturers, and policy makers.
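As a crude intuition for attribute-oriented defect mining, one can rank candidate components by how often they co-occur with complaint text and normalize the counts into a distribution. This is a toy stand-in for the paper's PDM, which models several attributes (model, year, component, symptom) jointly; all names and data below are hypothetical.

```python
from collections import Counter

def component_posteriors(complaints, components):
    """Toy estimate of how likely each component is to be the defective
    one, from keyword co-occurrence counts over complaint texts.
    A crude stand-in for a full probabilistic defect model."""
    counts = Counter()
    for text in complaints:
        low = text.lower()
        for c in components:
            if c in low:
                counts[c] += 1
    total = sum(counts.values()) or 1  # avoid division by zero
    return {c: counts[c] / total for c in components}

complaints = ["engine stalls at idle", "engine warning light on", "brake squeal at low speed"]
print(component_posteriors(complaints, ["engine", "brake"]))
```

A real model would replace keyword matching with latent-variable inference over the full attribute set, but the output shape, a distribution over candidate defect attributes, is the same.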
A Comparative Study of Text Summarization on E-mail Data Using Unsupervised Learning Approaches
Over the last few years, email has become enormously popular. People send and receive many messages every day, connect with colleagues and friends, and share files and information. Unfortunately, email overload has become a personal burden for users as well as a financial concern for businesses. Accessing an ever-increasing number of lengthy emails has become a major concern for many users. Email text summarization is a promising approach to this challenge. Email messages are general-domain text, unstructured and not always syntactically well formed. These characteristics pose challenges for text processing, especially for the task of summarization. This research employs quantitative and inductive methodologies to implement unsupervised learning models that address the summarization task, efficiently generate more precise summaries, and determine which unsupervised clustering approach performs best. The precision score from the ROUGE-N metrics is used as the evaluation metric. This research evaluates the precision of four different text summarization approaches built from combinations of a feature embedding technique (Word2Vec or BERT) and hybrid or conventional clustering algorithms. The results reveal that both the Word2Vec and BERT feature embeddings, combined with the hybrid PHA-ClusteringGain k-means algorithm, achieved an increase in precision compared with the conventional k-means clustering model. Among the hybrid approaches, the one using Word2Vec as the feature embedding method attained a maximum precision of 55.73%.
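The clustering-based extractive pipeline the abstract describes can be sketched end to end: embed each sentence, cluster the embeddings, and emit the sentence nearest each cluster centroid as the summary. In this sketch, bag-of-words vectors stand in for Word2Vec/BERT embeddings and a plain k-means stands in for the hybrid PHA-ClusteringGain variant, so it illustrates the shape of the method, not the paper's exact system.

```python
import math
from collections import Counter

def bow(sent, vocab):
    """Bag-of-words vector over a fixed vocabulary (embedding stand-in)."""
    c = Counter(sent.lower().split())
    return [c[w] for w in vocab]

def kmeans_extract(sentences, k=2, iters=10):
    """Cluster sentence vectors with naive k-means, then pick the
    sentence closest to each centroid as the extractive summary."""
    vocab = sorted({w for s in sentences for w in s.lower().split()})
    vecs = [bow(s, vocab) for s in sentences]
    centroids = vecs[:k]  # naive initialisation: first k sentences
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vecs:
            clusters[min(range(k), key=lambda i: math.dist(v, centroids[i]))].append(v)
        centroids = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    picks = sorted({min(range(len(vecs)), key=lambda j: math.dist(vecs[j], c))
                    for c in centroids})
    return [sentences[i] for i in picks]

emails = [
    "meeting tomorrow at noon",
    "reschedule the meeting please",
    "invoice attached for payment",
    "please pay the invoice",
]
print(kmeans_extract(emails, k=2))
```

Swapping `bow` for pretrained embeddings and the centroid rule for a gain-based cluster count is where the hybrid variants in the paper would diverge from this baseline.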
Unsupervised Opinion Summarisation in the Wasserstein Space
Opinion summarisation synthesises opinions expressed in a group of documents
discussing the same topic to produce a single summary. Recent work has looked
at opinion summarisation of clusters of social media posts. Such posts are
noisy and have unpredictable structure, posing additional challenges for the
construction of the summary distribution and the preservation of meaning
compared to online reviews, which have so far been the focus of opinion
summarisation. To address these challenges we present WassOS, an
unsupervised abstractive summarisation model which makes use of the Wasserstein
distance. A Variational Autoencoder is used to get the distribution of
documents/posts, and the distributions are disentangled into separate semantic
and syntactic spaces. The summary distribution is obtained using the
Wasserstein barycenter of the semantic and syntactic distributions. A latent
variable sampled from the summary distribution is fed into a GRU decoder with a
transformer layer to produce the final summary. Our experiments on multiple
datasets including Twitter clusters, Reddit threads, and reviews show that
WassOS almost always outperforms the state-of-the-art on ROUGE metrics and
consistently produces the best summaries with respect to meaning preservation
according to human evaluations.
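The Wasserstein barycenter at the heart of WassOS has a simple closed form in the one-dimensional Gaussian case, which gives a feel for the construction: the barycenter of Gaussians N(mu_i, sigma_i^2) under the W2 distance is itself Gaussian, with the weighted average of the means and of the standard deviations. This is a toy one-dimensional illustration, not the paper's multivariate latent-space computation.

```python
def gaussian_w2_barycenter(params, weights):
    """Wasserstein-2 barycenter of 1-D Gaussians N(mu_i, sigma_i^2):
    Gaussian with mean = sum(w_i * mu_i) and std = sum(w_i * sigma_i).
    Toy illustration of the barycenter WassOS takes over document
    distributions; weights must sum to 1."""
    mu = sum(w * m for w, (m, _) in zip(weights, params))
    sigma = sum(w * s for w, (_, s) in zip(weights, params))
    return mu, sigma

# Two "document" distributions, equally weighted.
print(gaussian_w2_barycenter([(0.0, 1.0), (2.0, 3.0)], [0.5, 0.5]))  # (1.0, 2.0)
```

Note the contrast with naive averaging of densities: the W2 barycenter interpolates the shapes of the distributions (here, the standard deviations) rather than mixing them into a bimodal blend, which is why it suits summary-distribution construction.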