1,647 research outputs found
Automatic Discharge Summary Generation using Neural Network Models
Tokyo Metropolitan University, doctoral thesis, Doctor of Philosophy (Information Science)
HumSet: Dataset of Multilingual Information Extraction and Classification for Humanitarian Crisis Response
Timely and effective response to humanitarian crises requires quick and accurate analysis of large amounts of text data, a process that can benefit greatly from expert-assisted NLP systems trained on validated and annotated data in the humanitarian response domain. To enable the creation of such NLP
systems, we introduce and release HumSet, a novel and rich multilingual dataset
of humanitarian response documents annotated by experts in the humanitarian
response community. The dataset provides documents in three languages (English,
French, Spanish) and covers a variety of humanitarian crises from 2018 to 2021
across the globe. For each document, HumSet provides selected snippets (entries) as well as the classes assigned to each entry, annotated using common humanitarian information analysis frameworks. HumSet also defines novel and challenging entry extraction and multi-label entry classification tasks. In
this paper, we take a first step towards approaching these tasks and conduct a
set of experiments on Pre-trained Language Models (PLMs) to establish strong
baselines for future research in this domain. The dataset is available at
https://blog.thedeep.io/humset/.
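By way of illustration, the following is a minimal sketch of a multi-label entry-classification baseline built on a pretrained multilingual language model. The checkpoint (xlm-roberta-base), the placeholder label set and the 0.5 decision threshold are assumptions made for this example, not the configuration reported in the HumSet paper.

```python
# Minimal multi-label entry classification sketch with a pretrained LM.
# Labels, checkpoint and threshold are illustrative assumptions only.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

LABELS = ["health", "food_security", "protection"]  # placeholder framework classes

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # independent sigmoid per label
)

entry = "Des milliers de familles déplacées manquent d'accès à l'eau potable."
inputs = tokenizer(entry, return_tensors="pt", truncation=True)

with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]

# Each entry may carry several labels, hence a threshold per class rather than argmax.
predicted = [label for label, p in zip(LABELS, probs) if p > 0.5]
print(predicted)  # with untrained classifier weights the output is arbitrary until fine-tuned
```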
ConceptEVA: Concept-Based Interactive Exploration and Customization of Document Summaries
Even with the most advanced natural language processing and artificial intelligence approaches, effective summarization of long and multi-topic documents, such as academic papers, for readers from different domains remains a challenge. To address this, we introduce ConceptEVA, a
mixed-initiative approach to generate, evaluate, and customize summaries for
long and multi-topic documents. ConceptEVA incorporates a custom multi-task Longformer encoder-decoder to summarize longer documents. Interactive
visualizations of document concepts as a network reflecting both semantic
relatedness and co-occurrence help users focus on concepts of interest. The
user can select these concepts and automatically update the summary to
emphasize them. We present two iterations of ConceptEVA evaluated through an
expert review and a within-subjects study. We find that participants are more satisfied with the summaries they customize through ConceptEVA than with summaries they generate manually, although incorporating critique into the summaries proved challenging. Based on our findings, we make recommendations for designing summarization systems that incorporate mixed-initiative interactions.
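The summarization backbone mentioned above is a Longformer encoder-decoder (LED). The sketch below shows plain long-document summarization with a public LED checkpoint; the checkpoint and generation settings are illustrative assumptions, not ConceptEVA's custom multi-task model.

```python
# Long-document summarization with a public LED checkpoint (illustrative only).
import torch
from transformers import LEDForConditionalGeneration, LEDTokenizer

tokenizer = LEDTokenizer.from_pretrained("allenai/led-large-16384-arxiv")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-large-16384-arxiv")

document = "..."  # a long, multi-topic paper; LED accepts inputs up to 16k tokens

inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=16384)

# LED uses sparse local attention; the first token is given global attention
# so the decoder can attend to a document-level representation.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs["input_ids"],
    global_attention_mask=global_attention_mask,
    max_length=256,
    num_beams=4,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```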
A framework for the Comparative analysis of text summarization techniques
Dissertation presented as a partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Data Science.
With the boom of information technology and the IoT (Internet of Things), the amount of information, which is fundamentally data, is increasing at an alarming rate. This information can be harnessed and, if channeled in the right direction, can yield meaningful insight. The problem is that this data is not always numerical; in many cases it is entirely textual, and some meaning has to be derived from it. Going through these texts manually would take hours or even days to extract concise and meaningful information. This is where the need for an automatic summarizer arises: it eases manual intervention and reduces time and cost while retaining the key information held by these texts. In recent years, new methods and approaches have been developed to do exactly this. They are applied in many domains; for example, search engines provide snippets as document previews, while news websites produce shortened descriptions of news stories, usually as headlines, to make browsing easier.
Broadly speaking, there are two main approaches to text summarization: extractive and abstractive. Extractive summarization filters out the important sections of the whole text to form a condensed version of it. Abstractive summarization, by contrast, interprets and examines the text as a whole and, after discerning its meaning, generates new sentences that describe the important points in a concise way.
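The following is a minimal, dependency-free sketch of the extractive idea just described, assuming a simple word-frequency score for ranking sentences; production extractive summarizers use much richer signals (position, graphs, learned models).

```python
# Toy extractive summarizer: rank sentences by the frequency of the words they contain.
import re
from collections import Counter

def extractive_summary(text: str, num_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    # Score each sentence by total word frequency, normalised by its length.
    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = set(sorted(sentences, key=score, reverse=True)[:num_sentences])
    # Keep the original order so the extract reads coherently.
    return " ".join(s for s in sentences if s in top)

text = ("Automatic summarization reduces long documents to their key points. "
        "Extractive methods select existing sentences from the document. "
        "Abstractive methods instead generate new sentences that paraphrase it. "
        "Search engines and news sites use such summaries as snippets and headlines.")
print(extractive_summary(text))
```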
Assessment, Implication, and Analysis of Online Consumer Reviews: A Literature Review
The onset of e-marketplaces, virtual communities and social networking has highlighted the influential capability of online consumer reviews (OCR) and therefore necessitates a consolidation of the body of knowledge. This article attempts to conceptually cluster the academic literature in both the management and the technical domain. The study follows a framework that broadly clusters management research under two heads: OCR assessment and OCR implication (business implication). Parallel technical literature has been reviewed to reconcile the methodologies adopted in the analysis of text content on the web, chiefly reviews. Text mining through automated tools, algorithmic contributions (dominant mainly in the technical-stream literature) and manual assessment (derived from the stream of content analysis) are studied in this review article. The literature survey of both domains is analyzed to propose possible areas for further research. Using text analysis methods along with statistical and data mining techniques to analyze review text, and applying the resulting knowledge to managerial problems, could constitute further work.
Available at: https://aisel.aisnet.org/pajais/vol9/iss2/4
Adapting Automatic Summarization to New Sources of Information
English-language news articles are no longer necessarily the best source of information. The Web allows information to spread more quickly and travel farther: first-person accounts of breaking news events pop up on social media, and foreign-language news articles are accessible to, if not immediately understandable by, English-speaking users. This thesis focuses on developing automatic summarization techniques for these new sources of information.
We focus on summarizing two specific new sources of information: personal narratives, first-person accounts of exciting or unusual events that are readily found in blog entries and other social media posts, and non-English documents, which must first be translated into English, often introducing translation errors that complicate the summarization process. Personal narratives are a very new area of interest in natural language processing research, and they present two key challenges for summarization. First, unlike many news articles, whose lead sentences serve as summaries of the most important ideas in the articles, personal narratives provide no such shortcuts for determining where important information occurs within them; second, personal narratives are written informally and colloquially, and unlike news articles, they are rarely edited, so they require heavier editing and rewriting during the summarization process. Non-English documents, whether news or narrative, present yet another source of difficulty on top of any challenges inherent to their genre: they must be translated into English, potentially introducing translation errors and disfluencies that must be identified and corrected during summarization.
The bulk of this thesis is dedicated to addressing the challenges of summarizing personal narratives found on the Web. We develop a two-stage summarization system for personal narrative that first extracts sentences containing important content and then rewrites those sentences into summary-appropriate forms. Our content extraction system is inspired by contextualist narrative theory, using changes in writing style throughout a narrative to detect sentences containing important information; it outperforms both graph-based and neural network approaches to sentence extraction for this genre. Our paraphrasing system rewrites the extracted sentences into shorter, standalone summary sentences, learning to mimic the paraphrasing choices of human summarizers more closely than can traditional lexicon- or translation-based paraphrasing approaches.
We conclude with a chapter dedicated to summarizing non-English documents written in low-resource languages, documents that would otherwise be unreadable for English-speaking users. We develop a cross-lingual summarization system that performs even heavier editing and rewriting than does our personal narrative paraphrasing system; we create and train on large amounts of synthetic, errorful translations of foreign-language documents. Our approach produces fluent English summaries from disfluent translations of non-English documents, and it generalizes across languages.
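For context, the sketch below shows only the naive translate-then-summarize pipeline that cross-lingual summarizers of this kind improve upon; it is not the thesis's approach, which trains on synthetic errorful translations. The checkpoints are public models chosen purely for illustration.

```python
# Naive translate-then-summarize baseline (illustrative, not the thesis's system).
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en")
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

french_document = (
    "De fortes pluies ont provoqué des inondations dans plusieurs quartiers de la ville. "
    "Les habitants ont été évacués vers des abris temporaires pendant la nuit."
)

# Translation errors introduced here propagate directly into the summary,
# which is the failure mode the thesis addresses with heavier rewriting.
english_text = translator(french_document)[0]["translation_text"]
summary = summarizer(english_text, max_length=40, min_length=10, do_sample=False)[0]["summary_text"]
print(summary)
```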
A semi-automatic annotation methodology that combines Summarization and Human-In-The-Loop to create disinformation detection resources
Early detection of disinformation is one of the most challenging large-scale problems facing present-day society, which is why the application of technologies such as Artificial Intelligence and Natural Language Processing is necessary. The vast majority of Artificial Intelligence approaches require annotated data, and generating these resources is very expensive. This proposal aims to improve the efficiency of the annotation process with a two-level semi-automatic annotation methodology. The first level extracts relevant information through summarization techniques. The second applies a Human-in-the-Loop strategy whereby the labels are pre-annotated by the machine, corrected by the human and reused by the machine to retrain the automatic annotator. After evaluating the system, the average annotation time per news item is reduced by 50%. In addition, a set of experiments is performed on the semi-automatically annotated dataset that is generated, so as to demonstrate the effectiveness of the proposal. Although the dataset is annotated in terms of unreliable content, it is applied to the veracity detection task with very promising results (0.95 accuracy in reliability detection and 0.78 in veracity detection).
This research work is funded by MCIN/AEI/10.13039/501100011033 and, as appropriate, by “ERDF A way of making Europe”, by the “European Union” or by the “European Union NextGenerationEU/PRTR”, through the project TRIVIAL: Technological Resources for Intelligent VIral AnaLysis through NLP (PID2021-122263OB-C22) and the project SOCIALTRUST: Assessing trustworthiness in digital media (PDC2022-133146-C22). It is also funded by Generalitat Valenciana through the project NL4DISMIS: Natural Language Technologies for dealing with dis- and misinformation (CIPROM/2021/21), and the grant ACIF/2020/177.
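The following is a minimal sketch of the second, Human-in-the-Loop level described above, assuming a simple TF-IDF plus logistic-regression annotator and a placeholder human_review step; the names and toy data are illustrative and are not the proposal's actual components.

```python
# Pre-annotate, let a human correct, retrain: toy human-in-the-loop annotation loop.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Level 1 (assumed already done): summarization has reduced each news item to its
# most relevant snippets; `seed` and `queue` stand in for those snippets.
seed = [("official agency publishes updated case figures", 0),
        ("miracle cure they do not want you to know about", 1)]
queue = ["secret remedy shared thousands of times overnight",
         "ministry confirms the statistics in a press briefing"]

def human_review(text, proposed_label):
    # Placeholder for the human step: an annotator accepts or corrects the
    # machine's proposal. Here we simply accept it.
    return proposed_label

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit([t for t, _ in seed], [y for _, y in seed])

# Level 2: pre-annotate each item, have the human correct it, then retrain the
# automatic annotator on the corrected, growing label set.
for text in queue:
    proposed = int(model.predict([text])[0])
    corrected = human_review(text, proposed)
    seed.append((text, corrected))
    model.fit([t for t, _ in seed], [y for _, y in seed])

print(f"annotated items: {len(seed)}")
```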
In Search of Meaning: Lessons, Resources and Next Steps for Computational Analysis of Financial Discourse
We critically assess mainstream accounting and finance research applying methods from computational linguistics (CL) to study financial discourse. We also review common themes and innovations in the literature and assess the incremental contributions of work applying CL methods over manual content analysis. Key conclusions emerging from our analysis are: (a) accounting and finance research is behind the curve in terms of CL methods generally and word sense disambiguation in particular; (b) implementation issues mean the proposed benefits of CL are often less pronounced than proponents suggest; (c) structural issues limit practical relevance; and (d) CL methods and high-quality manual analysis represent complementary approaches to analyzing financial discourse. We describe four CL tools that have yet to gain traction in mainstream accounting and finance research but which we believe offer promising ways to enhance the study of meaning in financial discourse. The four approaches are named entity recognition, summarization, semantics and corpus linguistics.