BlogSum: A Query-based Summarization Approach to Make Sense of Social Media
With the rapid growth of the Social Web, a large amount of informal opinionated text is available on numerous topics. However, people can be overwhelmed by this vast amount of information and need help finding the information that interests them. Natural language tools for automatically analyzing these opinions are necessary to help individuals, organizations, and governments make timely decisions. To address this need, I propose a summarization approach for opinionated texts. To validate the approach, BlogSum was developed and evaluated experimentally using current benchmarks. Users can ask BlogSum any question (e.g. Why do people like Chrome better than Firefox?). To answer the user's question, BlogSum first retrieves relevant blogs and reviews from the web, then generates a concise summary that represents the opinions people have expressed on the topic. Since blog summarization is a more recent endeavor, an error analysis was conducted by manually analyzing blog summaries to determine whether blogs require different information processing than factual data. This analysis shows that question irrelevance and discourse incoherence, both of which decrease the overall quality of a summary, are the two major issues for blog summaries. To address them, this work develops a domain-independent schema-based summarization approach that utilizes discourse structures. The approach automatically identifies discourse relations within candidate sentences in order to instantiate the most appropriate discourse schema and to filter and order the candidate sentences in the most effective way. To be successful, BlogSum also needs to deal effectively with opinions and emotions. BlogSum's overall performance, as well as its question relevance and coherence, was evaluated on various datasets. The results show that the proposed approach can effectively reduce question irrelevance and discourse incoherence and satisfy users' information needs.
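As a rough, hypothetical sketch of the schema idea described above (tag candidate sentences with discourse relations, then instantiate a schema that both filters and orders them), consider the following; the relation labels, schema, and example sentences are invented for illustration and are not BlogSum's actual components:

```python
# Hypothetical illustration of schema-based ordering. The relation labels,
# schema, and sentences are invented; BlogSum's real schemas and automatic
# discourse-relation identification are more sophisticated.

# Candidate sentences tagged with a coarse discourse relation.
candidates = [
    ("comparison",  "Many users find Chrome faster than Firefox."),
    ("cause",       "Chrome's process-per-tab design keeps crashes isolated."),
    ("elaboration", "Firefox, however, is praised for its extension ecosystem."),
    ("comparison",  "Benchmarks often favour Chrome on startup time."),
]

# A schema is an ordered list of relation slots. Instantiating it fills each
# slot with matching candidates, which filters question-irrelevant sentences
# and imposes a coherent presentation order at the same time.
comparison_schema = ["comparison", "cause", "elaboration"]

def instantiate(schema, candidates):
    ordered = []
    for relation in schema:
        ordered.extend(text for rel, text in candidates if rel == relation)
    return " ".join(ordered)

print(instantiate(comparison_schema, candidates))
```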
Automatic Text Summarization
Writing was one of the first methods ever used by humans to represent their knowledge.
Text can be of different types and have different purposes.
Due to the evolution of information systems and the Internet, the amount of textual information available has increased exponentially on a worldwide scale, and many documents contain a large share of unnecessary information. As a result, most readers have difficulty digesting all the extensive information contained in the multiple documents produced on a daily basis.
A simple solution to the excess of irrelevant information in texts is to create summaries, which keep the parts related to the subject and remove the unnecessary ones.
In Natural Language Processing, the goal of automatic text summarization is to create systems that process text and keep only the most important data. Since its inception, several approaches have been designed to create better text summaries; they can be divided into two separate groups: extractive approaches and abstractive approaches.
In the first group, the summarizers decide which text elements should be in the summary; the criteria by which they are selected are diverse. After selection, the elements are combined into the summary. In the second group, the text elements are generated from scratch. Abstractive summarizers are much more complex, so considerable research is still needed before they deliver good results.
During this thesis, we investigated state-of-the-art approaches, implemented our own versions, and tested them on conventional datasets, such as the DUC dataset.
Our first approach was a frequency-based approach, since it analyses how frequently the text's words/sentences appear in the text. Higher-frequency words/sentences automatically receive higher scores, which are then filtered with a compression rate and combined into a summary.
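A minimal sketch of this kind of frequency scoring with a compression rate might look as follows (a generic illustration, not the thesis's exact implementation):

```python
import re
from collections import Counter

def frequency_summary(text, compression=0.3):
    """Score sentences by the frequency of their words and keep the top share."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"\w+", sentence.lower())
        # Mean word frequency, so long sentences are not favoured by default.
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    ranked = sorted(range(len(sentences)), key=lambda i: score(sentences[i]),
                    reverse=True)
    keep = sorted(ranked[:max(1, int(len(sentences) * compression))])
    # Selected sentences are emitted in their original order.
    return " ".join(sentences[i] for i in keep)
```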
Moving on to our second approach, we improved the original TextRank algorithm by combining it with word embedding vectors. The goal was to represent the text's sentences as nodes of a graph and, with the help of word embeddings, determine how similar pairs of sentences are and rank them by their similarity scores. The highest-ranking sentences were filtered with a compression rate and picked for the summary.
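The graph-based ranking could be sketched like this; `embed` is an assumed helper that averages pretrained word vectors (e.g., GloVe) over a sentence's words, and the details differ from the thesis's implementation:

```python
import numpy as np
import networkx as nx

def textrank_embeddings(sentences, embed, compression=0.3):
    """Rank sentences with PageRank over a cosine-similarity graph.

    `embed(sentence) -> np.ndarray` is assumed to average pretrained word
    vectors (e.g., GloVe) over the sentence's words.
    """
    vecs = [embed(s) for s in sentences]
    graph = nx.Graph()
    graph.add_nodes_from(range(len(sentences)))
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            denom = np.linalg.norm(vecs[i]) * np.linalg.norm(vecs[j]) + 1e-8
            sim = float(vecs[i] @ vecs[j] / denom)
            if sim > 0:  # connect sufficiently similar sentence pairs
                graph.add_edge(i, j, weight=sim)
    scores = nx.pagerank(graph, weight="weight")
    top = sorted(scores, key=scores.get,
                 reverse=True)[:max(1, int(len(sentences) * compression))]
    return [sentences[i] for i in sorted(top)]  # original order
```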
In the third approach, we combined feature analysis with deep learning. By analysing certain characteristics of the text's sentences, one can assign scores that represent the importance of a given sentence for the summary. With these computed values, we created a dataset for training a deep neural network capable of deciding whether a certain sentence should be in the summary.
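A toy version of this feature-based classifier might look as follows; the four features and the synthetic labels are placeholders for illustration only:

```python
import torch
import torch.nn as nn

# Placeholder features per sentence, e.g. [position, length, mean word
# frequency, title overlap]; the labels here are synthetic, for illustration.
X = torch.rand(1000, 4)
y = (X[:, 2] + X[:, 3] > 1.0).float()

# A small feed-forward network that predicts summary membership.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X).squeeze(1), y)
    loss.backward()
    opt.step()

# At inference time, keep sentences whose probability clears a threshold
# (or the top fraction allowed by a compression rate).
probabilities = torch.sigmoid(model(X).squeeze(1))
```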
An abstractive encoder-decoder summarizer was created to generate words related to the document's subject and combine them into a summary. Finally, every summarizer was combined into a full system.
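A bare-bones encoder-decoder of this kind can be sketched in PyTorch as follows (dimensions and architecture are illustrative assumptions, not the thesis's model):

```python
import torch
import torch.nn as nn

class Seq2SeqSummarizer(nn.Module):
    """Minimal GRU encoder-decoder: the encoder reads the document tokens and
    its final state conditions the decoder, which emits summary tokens."""

    def __init__(self, vocab_size, emb_dim=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src_ids, tgt_ids):
        _, state = self.encoder(self.embed(src_ids))       # encode document
        dec_out, _ = self.decoder(self.embed(tgt_ids), state)
        return self.out(dec_out)                           # vocabulary logits

model = Seq2SeqSummarizer(vocab_size=10_000)
logits = model(torch.randint(0, 10_000, (2, 50)),   # batch of documents
               torch.randint(0, 10_000, (2, 12)))   # teacher-forced summaries
```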
Each of our approaches was evaluated with several evaluation metrics, such as ROUGE. We used the DUC dataset for this purpose, and the results were fairly similar to those reported in the scientific community. As for our encoder-decoder, we got promising results.

Text is one of the most important tools for transmitting ideas between human beings. It can be of several types, and its content can be easier or harder to interpret depending on the amount of relevant information about the main subject.
To ease processing by the reader, there is a mechanism purposely created to reduce the irrelevant information in a text, called text summarization. Through summarization, reduced versions of the original text are created while the information about the main subject is kept.
Due to the creation and evolution of the Internet and other means of communication, there has been an exponential increase in textual documents, an event known as information overload; most of these documents contain unnecessary information about the subject they address.
To solve this global problem, automatic text summarization emerged within the scientific field of Natural Language Processing; it allows automatic summaries of any type of text and in any language to be created by means of computational algorithms.
Since its creation, countless text summarization techniques have been devised, which can be classified into two different types: extractive and abstractive. In extractive techniques, elements of the original text, such as the words or whole sentences that best illustrate the subject of the text, are transcribed and combined into a document. In abstractive techniques, the algorithms generate new elements.
In this dissertation, some of the best-performing techniques were researched, implemented, and combined in order to create a complete system for producing summaries.
Of the implemented techniques, the first three are extractive while the last is abstractive. The first is based on computing the frequencies of the text's elements, assigning scores to the most frequent sentences, which are in turn selected for the summary through a compression rate. Another technique represents the textual elements as nodes of a graph, assigns similarity scores between them, and then selects the highest-scoring sentences through a compression rate. A further approach combines a mechanism for analyzing the text's characteristics with methods based on artificial intelligence: each sentence has a set of characteristics that are used to train a neural network model, which evaluates and decides which sentences should belong to the summary and filters them through a compression rate. An abstractive summarizer was created to generate words about the subject of the text and combine them into a summary. Each of these summarizers was combined into a single system. Finally, each technique can be evaluated with several evaluation metrics, such as ROUGE. According to the evaluation results on the DUC dataset, our summarizers obtained results fairly similar to those reported in the scientific community, with particular note for the encoder-decoder, which in certain cases showed promising results.
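Both versions of the abstract mention ROUGE evaluation on DUC. As a minimal illustration, ROUGE scores can be computed with the open-source rouge-score package (not necessarily the tooling used in the thesis):

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)
reference = "the cat sat on the mat"
generated = "a cat was sitting on the mat"
# Returns precision/recall/F1 for each requested ROUGE variant.
print(scorer.score(reference, generated))
```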
Text-image synergy for multimodal retrieval and annotation
Text and images are the two most common data modalities found on the Internet. Understanding the synergy between text and images, that is, seamlessly analyzing information from these modalities, may be trivial for humans but is challenging for software systems. In this dissertation we study problems where deciphering text-image synergy is crucial for finding solutions. We propose methods and ideas that establish semantic connections between text and images in multimodal content, and empirically show their effectiveness in four interconnected problems: Image Retrieval, Image Tag Refinement, Image-Text Alignment, and Image Captioning. Our promising results and observations open up interesting avenues for future research involving text-image data understanding.

Text and images are the two most common types of content on the Internet. While humans find it easy to grasp information precisely from the interplay of text and image content, this combined presentation of content poses great challenges for software systems. This dissertation studies problems whose solution essentially requires understanding the interplay of text and image content. We present and empirically evaluate methods and proposals that establish semantic connections between text and images in multimodal data. In this dissertation we present four interconnected text-image problems:
• Image Retrieval. Whether images are found for text-based search queries depends heavily on whether the text near the image matches that of the query. Images without textual context, or even with thematically fitting context but without direct matches between the available keywords and the query, often cannot be found. As a remedy, we propose combining three kinds of information: visual information (in the form of automatically generated image descriptions), textual information (keywords from previous search queries), and commonsense knowledge.
• Image Tag Refinement. Object detection by computer vision frequently suffers from false detections and incoherences. Correctly identifying image content, however, is an important prerequisite for retrieving images via textual queries. To reduce the error-proneness of object detection, we propose incorporating commonsense knowledge. Through additional image annotations that common sense shows to be thematically fitting, many erroneous and incoherent detections can be avoided.
• Image-Text Alignment. On web pages with text and image content (such as news sites, blog posts, and social media articles), images are usually placed at semantically meaningful positions in the text flow. We exploit this to propose a framework in which relevant images are selected and associated with the matching passages of a text.
• Image Captioning. Images that serve as part of multimodal content to improve the readability of texts typically have captions that fit the context of the surrounding text. We propose also taking this context into account when automatically generating captions; usually only the images themselves are analyzed for this purpose. We introduce context-aware image caption generation.
Our promising observations and results open up interesting possibilities for further research on the computational understanding of the interplay between text and image content.
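As a toy illustration of the retrieval idea in the Image Retrieval bullet above (combining tags, generated captions, and commonsense expansions of the query), here is a hypothetical sketch; the scoring weights, the `expand` knowledge source, and the data are all assumptions, not the dissertation's method:

```python
# Toy scoring of images for a text query, combining user tags, an
# automatically generated caption, and commonsense expansions of the query.
def tokens(text):
    return set(text.lower().split())

def score(query, image, expand):
    q = tokens(query) | set().union(*(expand(t) for t in tokens(query)))
    tag_overlap = len(q & tokens(image["tags"]))
    caption_overlap = len(q & tokens(image["caption"]))
    return 2 * tag_overlap + caption_overlap  # tag weight is an assumption

# A stand-in for a commonsense knowledge source.
expand = lambda t: {"beach": {"sea", "sand"}, "dog": {"puppy"}}.get(t, set())

images = [
    {"id": 1, "tags": "dog puppy grass", "caption": "a dog running on grass"},
    {"id": 2, "tags": "sea sunset", "caption": "waves at the beach at dusk"},
]
best = max(images, key=lambda im: score("dog beach", im, expand))
print(best["id"])
```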
Neural approaches to discourse coherence: modeling, evaluation and application
Discourse coherence is an important aspect of text quality that refers to the way different textual units relate to each other. In this thesis, I investigate neural approaches to modeling discourse coherence. I present a multi-task neural network where the main task is to predict a document-level coherence score and the secondary task is to learn word-level syntactic features. Additionally, I examine the effect of using contextualised word representations in single-task and multi-task setups. I evaluate my models on a synthetic dataset where incoherent documents are created by shuffling the sentence order in coherent original documents. The results show the efficacy of my multi-task learning approach, particularly when enhanced with contextualised embeddings, achieving new state-of-the-art results in ranking the coherent documents higher than the incoherent ones (96.9%). Furthermore, I apply my approach to the realistic domain of people's everyday writing, such as emails and online posts, and further demonstrate its ability to capture various degrees of coherence. In order to further investigate the linguistic properties captured by coherence models, I create two datasets that exhibit syntactic and semantic alterations. Evaluating different models on these datasets reveals their ability to capture syntactic perturbations but their inadequacy to detect semantic changes. I find that semantic alterations are instead captured by models that first build sentence representations from averaged word embeddings, then apply a set of linear transformations over input sentence pairs. Finally, I present an application for coherence models in the pedagogical domain. I first demonstrate that state-of-the-art neural approaches to automated essay scoring (AES) are not robust to adversarially created, grammatical, but incoherent sequences of sentences. Accordingly, I propose a framework for integrating and jointly training a coherence model with a state-of-the-art neural AES system in order to enhance its ability to detect such adversarial input. I show that this joint framework maintains a performance comparable to the state-of-the-art AES system in predicting a holistic essay score while significantly outperforming it in adversarial detection.
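The synthetic evaluation described above (shuffling sentence order to create incoherent counterparts) can be sketched as follows; a coherence model is then scored by how often it ranks each original document above its shuffle:

```python
import random

def make_coherence_pairs(documents, seed=0):
    """Pair each document (a list of sentences) with a shuffled counterpart,
    mirroring the synthetic coherent/incoherent setup described above."""
    rng = random.Random(seed)
    pairs = []
    for sents in documents:
        shuffled = sents[:]
        while shuffled == sents and len(set(sents)) > 1:
            rng.shuffle(shuffled)  # retry until the order actually changes
        pairs.append((sents, shuffled))
    return pairs
```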
Building Intelligent and Reliable Summarization Systems
Data, in various formats, surrounds us everywhere in our daily lives, in areas such as education, entertainment, and media. Living in the era of big data, the massive amount of web textual data has grown exponentially over the past decade. This leads to the problem of information overload, where an individual is exposed to more information than they can process. Thus the need emerges for automatic text summarization (ATS) systems, which can automatically transform this vast raw information into key points in the form of smaller, digestible pieces.

ATS systems operate by extracting or generating a concise and readable summary while preserving salient information from the original documents. Developing intelligent systems that can produce concise, fluent, and reliable summaries has been a long-standing goal in natural language processing (NLP). Significant progress has been made in recent years, thanks to breakthroughs like pre-trained language models such as BERT and GPT. However, text summarization remains a complex and multifaceted task. Similar to the cognitive process humans undertake when crafting summaries, text summarization requires the machine to first semantically understand the contents of a document, then identify and extract salient information from it, and finally generate an accurate and faithful summary.

This dissertation presents several distinct approaches to tackle the three critical steps of building ATS systems. Specifically, I first present my work on improving the modeling of long documents for extractive summarization. I introduce HEGEL, a hypergraph neural network for long document summarization that captures high-order cross-sentence relations. HEGEL updates and learns effective sentence representations with hypergraph transformer layers and fuses different types of sentence dependencies, including latent topics, keywords, coreference, and section structure. Extensive experiments on two benchmark datasets demonstrate the effectiveness and efficiency of HEGEL in long document modeling and extractive summarization.

Then I move on to the holistic extraction of salient information from documents. To address the limitation of individual sentence label prediction in existing extractive summarization systems, I propose a novel paradigm for extractive summarization named DiffuSum. DiffuSum directly generates the desired summary sentence representations with diffusion models and extracts sentences based on sentence representation matching. Additionally, DiffuSum jointly optimizes a contrastive sentence encoder with a matching loss for sentence representation alignment and a multi-class contrastive loss for representation diversity. I also introduce a new holistic framework for unsupervised multi-document extractive summarization. The method incorporates a holistic beam search inference method together with an associated holistic measurement, the Subset Representative Index (SRI). SRI balances the importance and diversity of a subset of sentences from the source documents and can be calculated in an unsupervised and adaptive manner.
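The importance/diversity trade-off behind SRI can be illustrated with a generic, maximal-marginal-relevance-style greedy selection over sentence embeddings; this is an illustrative stand-in, not the actual SRI computation or the holistic beam search:

```python
import numpy as np

def greedy_subset(sent_vecs, k, lam=0.7):
    """Pick k sentences, trading importance (similarity to the document
    centroid) against redundancy with sentences already chosen."""
    V = sent_vecs / (np.linalg.norm(sent_vecs, axis=1, keepdims=True) + 1e-8)
    importance = V @ V.mean(axis=0)
    chosen = []
    while len(chosen) < min(k, len(V)):
        best, best_score = None, -np.inf
        for i in range(len(V)):
            if i in chosen:
                continue
            redundancy = max((float(V[i] @ V[j]) for j in chosen), default=0.0)
            candidate = lam * importance[i] - (1 - lam) * redundancy
            if candidate > best_score:
                best, best_score = i, candidate
        chosen.append(best)
    return sorted(chosen)
```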
Next, I demonstrate my work on improving the quality and faithfulness of generated summaries. While text summarization systems have made significant progress in recent years, they typically generate summaries in a single step. However, this one-shot summarization setting is sometimes inadequate, as the generated summary may contain hallucinations or overlook essential details related to the reader's interests. To address this, I propose SummIt, an iterative text summarization framework based on large language models (LLMs) like ChatGPT. SummIt enables the model to refine the generated summary iteratively through self-evaluation and feedback, resembling humans' iterative process when drafting and revising summaries. Furthermore, I explore the potential benefits of integrating knowledge and topic extractors into the framework to enhance summary faithfulness and controllability. Both automatic evaluation and human studies are conducted on three benchmark summarization datasets to validate the effectiveness of the iterative refinements and to identify potential issues of over-correction.

Finally, as the emergence of large language models reshapes NLP research, I present a thorough evaluation of ChatGPT's performance on extractive summarization and compare it with traditional fine-tuning methods on various benchmark datasets. The experimental analysis reveals that ChatGPT exhibits inferior extractive summarization performance in terms of ROUGE scores compared to existing supervised systems, while achieving higher performance based on LLM-based evaluation metrics. I also explore the effectiveness of in-context learning and chain-of-thought reasoning for enhancing its performance, and I propose an extract-then-generate pipeline with ChatGPT, which yields significant performance improvements over abstractive baselines in terms of summary faithfulness. These observations highlight potential directions for enhancing ChatGPT's capabilities in faithful summarization using two-stage approaches.

In summary, by demonstrating and examining these systems and solutions, I aim to highlight the three critical yet challenging steps in building intelligent and reliable summarization systems, which are also crucial steps towards advancing the design of a more powerful and trustworthy AI assistant. I hope future research endeavors will continue to advance along these directions.
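The iterative refine loop behind SummIt might be sketched as follows; `llm` is an assumed chat-completion callable, and the prompts and stopping test are invented for illustration rather than taken from the paper:

```python
def iterative_summarize(document, llm, max_rounds=3):
    """Draft a summary, ask the model to critique it against the source,
    then revise -- a simplified refine loop in the spirit of SummIt."""
    summary = llm(f"Summarize the following document:\n{document}")
    for _ in range(max_rounds):
        feedback = llm(
            "Critique this summary for unsupported claims and missing "
            f"details.\nDocument:\n{document}\nSummary:\n{summary}"
        )
        if "no issues" in feedback.lower():  # naive stopping test
            break
        summary = llm(
            f"Revise the summary using the feedback.\nDocument:\n{document}\n"
            f"Summary:\n{summary}\nFeedback:\n{feedback}"
        )
    return summary
```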
Automated Classification of Argument Stance in Student Essays: A Linguistically Motivated Approach with an Application for Supporting Argument Summarization
This study describes a set of document- and sentence-level classification models designed to automate two tasks: determining the argument stance (for or against) of a student argumentative essay, and identifying any arguments in the essay that provide reasons in support of that stance. A suggested application utilizing these models is presented, which involves the automated extraction of a single-sentence summary of an argumentative essay. This summary sentence indicates the overall argument stance of the essay from which it was extracted and provides a representative argument in support of that stance.
A novel set of document-level stance classification features motivated by linguistic research involving stancetaking language is described. Several document-level classification models incorporating these features are trained and tested on a corpus of student essays annotated for stance. These models achieve accuracies significantly above those of two baseline models. High-accuracy features used by these models include a dependency subtree feature incorporating information about the targets of any stancetaking language in the essay text and a feature capturing the semantic relationship between the essay prompt text and stancetaking language in the essay text.
We also describe the construction of a corpus of essay sentences annotated for supporting argument stance. The resulting corpus is used to train and test two sentence-level classification models. The first model is designed to classify a given sentence as a supporting argument or as not a supporting argument, while the second model is designed to classify a supporting argument as holding a for or against stance. Features motivated by influential linguistic analyses of the lexical, discourse, and rhetorical features of supporting arguments are used to build these two models, both of which achieve accuracies above their respective baseline models.
An application illustrating an interesting use-case for the models presented in this dissertation is described. This application incorporates all three classification models to extract a single sentence summarizing both the overall stance of a given text and a convincing reason in support of that stance.
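The suggested application could compose the three classifiers roughly as follows; all interfaces here are hypothetical stand-ins for the dissertation's models:

```python
def summarize_stance(essay_sentences, doc_stance_clf, support_clf,
                     support_stance_clf):
    """Return the essay's overall stance plus one supporting argument."""
    overall = doc_stance_clf(" ".join(essay_sentences))   # "for" or "against"
    supporting = [s for s in essay_sentences if support_clf(s)]
    aligned = [s for s in supporting if support_stance_clf(s) == overall]
    # One representative supporting argument for the essay's stance (if any).
    return overall, (aligned[0] if aligned else None)
```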