A survey on opinion summarization techniques for social media
The volume of data on social media is huge and keeps increasing. The need for efficient processing of this extensive information has led to growing research interest in knowledge engineering tasks such as Opinion Summarization. This survey presents the current opinion summarization challenges for social media, then the necessary pre-summarization steps such as preprocessing, feature extraction, noise elimination, and handling of synonym features. Next, it covers the various approaches used in opinion summarization, such as Visualization, Abstractive, Aspect-based, Query-focused, Real-Time, and Update Summarization, and highlights other Opinion Summarization approaches such as Contrastive, Concept-based, Community Detection, Domain-Specific, Bilingual, Social Bookmarking, and Social Media Sampling. It covers the different datasets used in opinion summarization and the future work suggested for each technique. Finally, it provides different ways of evaluating opinion summarization.
Deep Recurrent Generative Decoder for Abstractive Text Summarization
We propose a new framework for abstractive text summarization based on a sequence-to-sequence encoder-decoder model equipped with a deep recurrent generative decoder (DRGD). Latent structure information implied in the target summaries is learned via a recurrent latent random model to improve summarization quality. Neural variational inference is employed to address the intractable posterior inference for the recurrent latent variables. Abstractive summaries are generated from both the generative latent variables and the discriminative deterministic states. Extensive experiments on benchmark datasets in different languages show that DRGD achieves improvements over state-of-the-art methods. Comment: 10 pages, EMNLP 2017
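The neural variational inference mentioned above typically relies on the reparameterization trick and a closed-form KL term for diagonal Gaussians. The sketch below illustrates only those two generic building blocks; the function names and the pure-Python setting are illustrative assumptions, not the paper's actual architecture:

```python
import math
import random

def sample_latent(mu, log_var):
    # Reparameterization trick used in neural variational inference:
    # z = mu + sigma * eps with eps ~ N(0, 1), which keeps sampling
    # differentiable with respect to the encoder outputs mu and log_var.
    return [m + math.exp(0.5 * lv) * random.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def kl_divergence(mu, log_var):
    # KL(q(z|x) || N(0, I)) term of the variational lower bound,
    # computed in closed form for a diagonal Gaussian posterior.
    return -0.5 * sum(1 + lv - m * m - math.exp(lv)
                      for m, lv in zip(mu, log_var))
```

In a recurrent generative decoder this sampling would be repeated at each decoding step, with `mu` and `log_var` produced by the decoder's hidden state.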
Automatic Multiple Document Text Summarization using Wordnet and Agility Tool
The number of web pages on the World Wide Web is increasing very rapidly. Consequently, search engines such as Google, AltaVista, and Bing provide long lists of URLs to the end user, so it becomes very difficult to review and analyze each web page manually. Automatic text summarization addresses this by condensing the source text into a shorter version while preserving its information content and overall meaning. This paper proposes an automatic multiple-document text summarization technique called AMDTSWA, which allows the end user to select multiple URLs and generate their summarized results in parallel. AMDTSWA makes use of concept-based segmentation, the HTML DOM tree, and concept-block formation. Content similarities are determined by calculating sentence scores, and useful information is extracted to generate a comparative summary. The proposed approach is implemented using ASP.Net and gives good results.
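Sentence scoring for extractive summarization, as mentioned above, is commonly done with a word-frequency heuristic. The following is a minimal sketch of that generic idea; the exact scoring formula used by AMDTSWA is not specified in the abstract, so the frequency-based score here is an assumption:

```python
import re
from collections import Counter

def sentence_scores(text):
    # Score each sentence by the summed corpus frequency of its words,
    # normalized by sentence length (a common heuristic, not AMDTSWA's
    # actual formula).
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))
    scores = {}
    for s in sentences:
        toks = re.findall(r'[a-z]+', s.lower())
        if toks:
            scores[s] = sum(freq[t] for t in toks) / len(toks)
    return scores

def summarize(text, n=1):
    # Keep the n highest-scoring sentences as the extractive summary.
    scores = sentence_scores(text)
    return sorted(scores, key=scores.get, reverse=True)[:n]
```

For multiple documents, the same scoring could be applied to each fetched page, with the top sentences per page assembled into a comparative summary.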
A reinforcement learning formulation to the complex question answering problem
We use extractive multi-document summarization techniques to perform complex question answering, formulated as a reinforcement learning problem. Given a set of complex questions, a list of relevant documents per question, and the corresponding human-generated summaries (i.e., answers to the questions) as training data, the reinforcement learning module iteratively learns a set of feature weights in order to facilitate the automatic generation of summaries, i.e., answers to previously unseen complex questions. A reward function measures the similarity between the candidate (machine-generated) summary sentences and the abstract summaries. In the training stage, the learner iteratively selects the important document sentences to include in the candidate summary, analyzes the reward function, and updates the related feature weights accordingly. The final weights are used to generate summaries as answers to unseen complex questions in the testing stage. Evaluation results show the effectiveness of our system. We also incorporate user interaction into the reinforcement learner to guide the candidate summary sentence selection process. Experiments reveal the positive impact of the user interaction component on the reinforcement learning framework.
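The iterative loop described above (score sentences with feature weights, build a candidate summary, compute a reward against the reference, update the weights) can be sketched roughly as follows. The toy unigram-overlap reward, the learning rate, and the update rule are illustrative assumptions, not the authors' actual implementation:

```python
def reward(candidate, reference_words):
    # Toy reward: unigram overlap between the candidate summary and the
    # human-written reference (a stand-in for a ROUGE-style similarity).
    cand_words = set(" ".join(candidate).split())
    return len(cand_words & reference_words) / max(len(reference_words), 1)

def learn_weights(sentences, features, reference, epochs=50, lr=0.1, k=2):
    # sentences: candidate document sentences
    # features:  dict mapping sentence -> feature vector (list of floats)
    # reference: human-generated summary text used to compute the reward
    weights = [0.0] * len(next(iter(features.values())))
    ref_words = set(reference.split())
    for _ in range(epochs):
        # Rank sentences by the current weighted feature score, take top-k.
        ranked = sorted(sentences,
                        key=lambda s: sum(w * f
                                          for w, f in zip(weights, features[s])),
                        reverse=True)
        candidate = ranked[:k]
        r = reward(candidate, ref_words)
        # Nudge weights toward the features of selected sentences,
        # scaled by how well the candidate matched the reference.
        for s in candidate:
            weights = [w + lr * r * f for w, f in zip(weights, features[s])]
    return weights
```

After training, the learned weights score sentences for unseen questions; the user-interaction component would amount to an extra term in the reward.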
Evaluating Centering for Information Ordering Using Corpora
In this article we discuss several metrics of coherence defined using centering theory and investigate the usefulness of such metrics for information ordering in automatic text generation. We estimate empirically which is the most promising metric and how useful it is, using a general methodology applied to several corpora. Our main result is that the simplest metric (which relies exclusively on NOCB transitions) sets a robust baseline that cannot be outperformed by other metrics making use of additional centering-based features. This baseline can be used for the development of both text-to-text and concept-to-text generation systems.
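The NOCB baseline referred to above can be illustrated with a small sketch: a NOCB ("no backward-looking center") transition occurs when two adjacent utterances share no entity, and an ordering with fewer NOCB transitions is preferred. Representing each utterance simply as a set of entity mentions is an assumption made here for illustration:

```python
def nocb_count(utterances):
    # utterances: list of sets of entity mentions, one set per utterance.
    # Count adjacent pairs that share no entity (NOCB transitions).
    return sum(1 for prev, cur in zip(utterances, utterances[1:])
               if not (prev & cur))

def best_ordering(orderings):
    # The baseline metric: prefer the candidate ordering with the
    # fewest NOCB transitions.
    return min(orderings, key=nocb_count)
```

In the article's setting, the candidate orderings would be permutations of the same sentences, scored against each other by this count.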
Improvements to the complex question answering models
In recent years the amount of information on the web has increased dramatically. As a result, it has become a challenge for researchers to find effective ways to help us query and extract meaning from these large repositories. Standard document search engines try to address the problem by presenting users with a ranked list of relevant documents. In most cases this is not enough, as the end user has to go through an entire document to find the answer he is looking for. Question answering, the retrieval of answers to natural language questions from a document collection, tries to remove this onus from the end user by providing direct access to relevant information.

This thesis is concerned with open-domain complex question answering. Unlike simple questions, complex questions cannot be answered easily, as they often require inferencing and synthesizing information from multiple documents. Hence, we treated the task of complex question answering as query-focused multi-document summarization. In this thesis, to improve complex question answering, we experimented with both empirical and machine learning approaches. We extracted several features of different types (i.e., lexical, lexical semantic, syntactic, and semantic) for each sentence in the document collection in order to measure its relevance to the user query.

We formulated the task of complex question answering in a reinforcement learning framework, which to the best of our knowledge has not been applied to this task before and has the potential to improve itself by fine-tuning the feature weights from user feedback. We also used unsupervised machine learning techniques (random walk, manifold ranking) and augmented them with semantic and syntactic information. Finally, we experimented with question decomposition: instead of trying to find the answer to the complex question directly, we decomposed the complex question into a set of simple questions and synthesized the answers to obtain our final result.
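The question-decomposition step described above can be caricatured with a very naive sketch that splits a complex question on coordinating conjunctions. This is purely illustrative of the idea; the thesis's actual decomposition method is not reproduced here, and the fallback phrasing is hypothetical:

```python
import re

def decompose(question):
    # Naively split a complex question on "and" / "as well as" into
    # simpler sub-questions (an illustration only, not the thesis's
    # actual decomposition algorithm).
    parts = re.split(r'\band\b|\bas well as\b', question)
    subs = []
    for p in parts:
        p = p.strip(" ?,")
        if not p:
            continue
        wh_words = ("what", "who", "when", "where", "why", "how")
        if not p.lower().startswith(wh_words):
            p = "What about " + p  # hypothetical fallback phrasing
        subs.append(p + "?")
    return subs
```

Each sub-question would then be answered independently, with the partial answers synthesized into the final response.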