4,232 research outputs found
Automatic Discharge Summary Generation using Neural Network Models
Tokyo Metropolitan University, doctoral thesis (Doctor of Information Science)
Exploring Optimal Granularity for Extractive Summarization of Unstructured Health Records: Analysis of the Largest Multi-Institutional Archive of Health Records in Japan
Automated summarization of clinical texts can reduce the burden on medical
professionals. "Discharge summaries" are one promising application of
summarization, because they can be generated from daily inpatient records. Our
preliminary experiment suggests that 20-31% of the descriptions in discharge
summaries overlap with the content of the inpatient records. However, it
remains unclear how the summaries should be generated from the unstructured
source. To decompose the physician's summarization process, this study aimed to
identify the optimal granularity in summarization. We first defined three types
of summarization units with different granularities to compare the performance
of the discharge summary generation: whole sentences, clinical segments, and
clauses. We defined clinical segments in this study, aiming to express the
smallest medically meaningful concepts. To obtain the clinical segments, it was
necessary to automatically split the texts in the first stage of the pipeline.
Accordingly, we compared rule-based methods with a machine learning method;
the latter outperformed the former, with an F1 score of 0.846 on the splitting
task. Next, we experimentally measured the accuracy of extractive summarization
using the three types of units, based on the ROUGE-1 metric, on a
multi-institutional national archive of health records in Japan. The measured
accuracies of extractive summarization using whole sentences, clinical
segments, and clauses were 31.91, 36.15, and 25.18, respectively. We found that
the clinical segments yielded higher accuracy than sentences and clauses. This
result indicates that summarization of inpatient records demands finer
granularity than sentence-oriented processing. Although we used only Japanese
health records, the result can be interpreted as follows: physicians extract
"concepts of medical significance" from patient records and recombine them …
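The ROUGE-1 evaluation described in the abstract above can be sketched as a unigram-overlap F1 score. This is a minimal illustration, not the paper's implementation; real Japanese clinical text would first require a morphological tokenizer (e.g. MeCab), which is omitted here and replaced by pre-tokenized input:

```python
from collections import Counter


def rouge1(candidate_tokens, reference_tokens):
    """ROUGE-1 F1: unigram overlap between a candidate summary and a reference.

    Both arguments are lists of tokens (tokenization is assumed to be done
    upstream, e.g. by a Japanese morphological analyzer).
    """
    cand = Counter(candidate_tokens)
    ref = Counter(reference_tokens)
    # Clipped overlap: each unigram counts at most as often as in the reference.
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)


# Toy usage: 2 of 3 unigrams overlap, so precision = recall = F1 = 2/3.
score = rouge1(["a", "b", "c"], ["a", "b", "d"])
```

Comparing summarization units (sentences, clinical segments, clauses) then amounts to extracting candidate units from the inpatient records and scoring them with such a metric against the physician-written discharge summary.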
A Survey on Biomedical Text Summarization with Pre-trained Language Models
The exponential growth of biomedical texts, such as biomedical literature and
electronic health records (EHRs), poses a major challenge for clinicians and
researchers seeking to access clinical information efficiently. To address this problem,
biomedical text summarization has been proposed to support clinical information
retrieval and management, aiming at generating concise summaries that distill
key information from single or multiple biomedical documents. In recent years,
pre-trained language models (PLMs) have been the de facto standard of various
natural language processing tasks in the general domain. Most recently, PLMs
have been further investigated in the biomedical field and brought new insights
into the biomedical text summarization task. In this paper, we systematically
summarize recent advances that explore PLMs for biomedical text summarization,
to help understand recent progress, challenges, and future directions. We
categorize PLM-based approaches according to how they utilize PLMs and which
PLMs they use. We then review the available datasets, recent approaches, and
evaluation metrics for the task. We finally discuss existing challenges and
promising future directions. To facilitate the research community, we line up
open resources including available datasets, recent approaches, codes,
evaluation metrics, and the leaderboard in a public project:
https://github.com/KenZLuo/Biomedical-Text-Summarization-Survey/tree/master.
Comment: 19 pages, 6 figures, TKDE under review