Foreground and background text in retrieval
Our hypothesis is that certain clauses have foreground functions in text,
while other clauses have background functions, and that these functions are
expressed or reflected in the syntactic structure of the clause.
Presumably these clauses will have differing utility for automatic
approaches to text understanding: a summarization system might use
background clauses to capture commonalities across a number of documents,
while an indexing system might use foreground clauses to capture the
specific characteristics of a particular document.
On the Application of Generic Summarization Algorithms to Music
Several generic summarization algorithms have been developed in the past and
successfully applied in fields such as text and speech summarization. In this
paper, we review these algorithms and apply them to music. To evaluate the
summaries' quality, we adopt an extrinsic approach: we compare a Fado Genre
Classifier's performance using truncated contiguous clips against the
summaries extracted with those algorithms on two different datasets. We show
that Maximal Marginal Relevance (MMR), LexRank and Latent Semantic Analysis
(LSA) all improve classification performance on both test datasets.
Comment: 12 pages, 1 table; Submitted to IEEE Signal Processing Letters
SeLeCT: a lexical cohesion based news story segmentation system
In this paper we compare the performance of three distinct approaches to
lexical cohesion-based text segmentation. Most work in this area has focused
on the discovery of textual units that discuss subtopic structure within
documents. In contrast, our segmentation task requires the discovery of
topical units of text, i.e., distinct news stories from broadcast news
programmes. Our approach to news story segmentation (the SeLeCT system) is
based on an analysis of lexical cohesive strength between textual units using
a linguistic technique called lexical chaining. We evaluate the relative
performance of SeLeCT with respect to two other cohesion-based segmenters:
TextTiling and C99. Using a recently introduced evaluation metric, WindowDiff,
we contrast the segmentation accuracy of each system on both "spoken" (CNN
news transcripts) and "written" (Reuters newswire) news story test sets
extracted from the TDT1 corpus.
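Since the comparison rests on the WindowDiff metric, a minimal sketch of how
that metric is typically computed may be useful. This is a generic
implementation of WindowDiff (Pevzner and Hearst, 2002), not SeLeCT's
evaluation code; the boundary-list representation and the window-size default
are assumptions.

def window_diff(reference, hypothesis, k=None):
    # Fraction of window positions where the reference and hypothesis
    # segmentations disagree on the number of boundaries; lower is better.
    # Both inputs are lists of 0/1 boundary markers between text units.
    assert len(reference) == len(hypothesis)
    n = len(reference)
    if k is None:
        # Common convention: half of the average reference segment length.
        num_segments = sum(reference) + 1
        k = max(2, round(n / (2 * num_segments)))
    errors = 0
    for i in range(n - k):
        if sum(reference[i:i + k]) != sum(hypothesis[i:i + k]):
            errors += 1
    return errors / (n - k)

# Example: the hypothesis misses one boundary and places another late.
ref = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
hyp = [0, 0, 0, 1, 0, 0, 0, 0, 0, 1]
print(round(window_diff(ref, hyp, k=3), 3))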
Improving Long Document Topic Segmentation Models With Enhanced Coherence Modeling
Topic segmentation is critical for obtaining structured documents and
improving downstream tasks such as information retrieval. Because they can
automatically learn cues of topic shift from abundant labeled data, recent
supervised neural models have greatly advanced long document topic
segmentation, but they leave the deeper relationship between coherence and
topic segmentation underexplored. This paper therefore enhances the ability of
supervised models to capture coherence from both logical structure and
semantic similarity perspectives to further improve topic segmentation
performance, proposing Topic-aware Sentence Structure Prediction (TSSP) and
Contrastive Semantic Similarity Learning (CSSL). Specifically, the TSSP task
forces the model to comprehend structural information by learning the original
relations between adjacent sentences in a disarrayed document, which is
constructed by jointly disrupting the original document at the topic and
sentence levels. Moreover, we utilize inter- and intra-topic information to
construct contrastive samples and design the CSSL objective to ensure that
sentence representations within the same topic are more similar, while those
in different topics are less similar. Extensive experiments show that
Longformer with our approach significantly outperforms the old
state-of-the-art (SOTA) methods. Our approach improves on the old SOTA by
3.42 points (73.74 -> 77.16) and reduces the error metric by 1.11 points
(15.0 -> 13.89) on WIKI-727K, and achieves an average relative reduction of
4.3% on WikiSection. An average relative drop of 8.38% on two out-of-domain
datasets also demonstrates the robustness of our approach.
Comment: Accepted by EMNLP 2023. Code is available at
https://github.com/alibaba-damo-academy/SpokenNLP
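As an illustration of the kind of objective CSSL describes, the sketch below
implements a generic supervised contrastive loss over sentence embeddings,
treating sentences from the same topic as positives and all other sentences as
negatives. It is not the authors' exact formulation; the function name, the
temperature value and the embedding shapes are assumptions.

import torch
import torch.nn.functional as F

def topic_contrastive_loss(sentence_emb, topic_ids, temperature=0.1):
    # sentence_emb: (N, d) sentence embeddings; topic_ids: (N,) topic labels.
    z = F.normalize(sentence_emb, dim=-1)
    sim = z @ z.t() / temperature                       # pairwise similarities
    same_topic = topic_ids.unsqueeze(0) == topic_ids.unsqueeze(1)
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos_mask = (same_topic & ~self_mask).float()        # intra-topic pairs
    # Log-softmax over all other sentences, averaged over positive pairs,
    # pulls same-topic sentences together and pushes other topics apart.
    logits = sim.masked_fill(self_mask, float('-inf'))
    log_prob = F.log_softmax(logits, dim=1)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_counts
    has_positive = pos_mask.sum(dim=1) > 0              # skip singleton topics
    return loss[has_positive].mean()

# Example: 8 sentence embeddings drawn from 3 topics.
emb = torch.randn(8, 768)
topics = torch.tensor([0, 0, 0, 1, 1, 2, 2, 2])
print(topic_contrastive_loss(emb, topics).item())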