11 research outputs found
Influence of Pop-Up News Headlines in Internet-Enabled Devices on the Readership of the Stories among Youths in Owerri Metropolis, Nigeria
This research examined the influence of pop-up news headlines on the readership of the stories they introduce among youths in Owerri metropolis. The objectives were, among others, to examine whether pop-up news headlines influence the readership of the stories among these youths, and to identify the factor most responsible for determining that influence. The study was anchored on the uses and gratifications theory, and the survey design was adopted. Data were gathered from 380 young users of internet-enabled devices in the Owerri metropolis. Findings indicate that pop-up news headlines are a common feature of online news distribution, as 100% of the respondents reported having seen them on their internet-enabled devices. While pop-up news headlines influence readership of the stories among a majority (53%) of the young people, that influence is limited by several factors. Based on the findings, it was recommended, among other things, that news organisations could increase the influence of pop-up news headlines by introducing cash or point tokens for those who read the stories.
Characterizing documents about colombian indigenous peoples using text analytics
The indigenous peoples of Colombia have a considerable social, political and cultural wealth. However, issues such as the decades-long armed conflict and drug trafficking have posed a significant threat to their survival. In this work, publicly available documents on the Internet with information about two indigenous communities, the Awá and Inga people from the Cauca region in southern Colombia, are analyzed using automated text analytics approaches. A corpus is constructed comprising general characterization documents, media articles and rulings from the Constitutional Court. Topic analysis is carried out to identify the relevant themes in the corpus to characterize each community. Sentiment analysis carried out on the media articles indicates that the articles about the Inga tend to be more positive and objective than those about the Awá. This may be attributed to the significant impact that the armed conflict has had on the Awá in recent years, and to the productive projects of the Inga. Furthermore, an approach for summarizing long, complex documents by means of timelines is illustrated with a ruling issued by the Constitutional Court. It is concluded that such an approach has significant potential to facilitate understanding of documents of this nature.
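The topic and sentiment analyses described above could be sketched with a minimal, stdlib-only pipeline. The TF-IDF-style keyword scoring and the tiny sentiment lexicon below are illustrative assumptions, not the authors' actual corpus, lexicon, or tools.

```python
import math
import re
from collections import Counter

def top_terms(docs, k=3):
    """Approximate topic keywords by ranking terms with a TF-IDF-style weight:
    term frequency times log of inverse document frequency."""
    tokenized = [re.findall(r"[a-z]+", d.lower()) for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))
    n = len(docs)
    scores = Counter()
    for doc in tokenized:
        for term, count in Counter(doc).items():
            scores[term] += count * math.log(n / df[term])  # 0 if term is in every doc
    return [t for t, _ in scores.most_common(k)]

# Toy sentiment lexicon; a real study would use a full lexicon or trained model.
LEXICON = {"peace": 1, "support": 1, "conflict": -1, "threat": -1}

def polarity(text):
    """Sum lexicon scores of the words in a text: >0 positive, <0 negative."""
    return sum(LEXICON.get(w, 0) for w in re.findall(r"[a-z]+", text.lower()))
```

A corpus-level comparison would aggregate `polarity` over the articles about each community and compare the distributions.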
Explicit diversification of event aspects for temporal summarization
During major events, such as emergencies and disasters, a large volume of information is reported on newswire and social media platforms. Temporal summarization (TS) approaches are used to automatically produce concise overviews of such events by extracting text snippets from related articles over time. Current TS approaches rely on a combination of event relevance and textual novelty for snippet selection. However, for events that span multiple days, textual novelty is often a poor criterion for selecting snippets, since many snippets are textually unique but are semantically redundant or non-informative. In this article, we propose a framework for the diversification of snippets using explicit event aspects, building on recent works in search result diversification. In particular, we first propose two techniques to identify explicit aspects that a user might want to see covered in a summary for different types of event. We then extend a state-of-the-art explicit diversification framework to maximize the coverage of these aspects when selecting summary snippets for unseen events. Through experimentation over the TREC TS 2013, 2014, and 2015 datasets, we show that explicit diversification for temporal summarization significantly outperforms classical novelty-based diversification, as the use of explicit event aspects reduces the amount of redundant and off-topic snippets returned, while also increasing summary timeliness.
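The explicit diversification the abstract builds on can be sketched as a greedy, xQuAD-style selection: each step rewards a snippet for covering aspects the summary so far has left uncovered. The relevance and aspect-coverage functions here are assumptions standing in for the paper's learned estimates.

```python
from math import prod

def xquad_select(snippets, rel, cov, aspects, k, lam=0.7):
    """Greedy xQuAD-style selection: trade off snippet relevance against
    coverage of event aspects not yet covered by the summary so far."""
    selected, remaining = [], list(snippets)
    while remaining and len(selected) < k:
        def gain(s):
            # an aspect contributes less the more the current summary covers it
            diversity = sum(cov(s, a) * prod(1 - cov(t, a) for t in selected)
                            for a in aspects)
            return (1 - lam) * rel(s) + lam * diversity
        best = max(remaining, key=gain)
        selected.append(best)
        remaining.remove(best)
    return selected
```

With two snippets covering the same aspect, the second one's diversity gain collapses to zero, so a snippet covering a fresh aspect is preferred even at lower relevance.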
A Survey on Event-based News Narrative Extraction
Narratives are fundamental to our understanding of the world, providing us with a natural structure for knowledge representation over time. Computational narrative extraction is a subfield of artificial intelligence that makes heavy use of information retrieval and natural language processing techniques. Despite the importance of computational narrative extraction, relatively little scholarly work exists on synthesizing previous research and strategizing future research in the area. In particular, this article focuses on extracting news narratives from an event-centric perspective. Extracting narratives from news data has multiple applications in understanding the evolving information landscape. This survey presents an extensive study of research in the area of event-based news narrative extraction. In particular, we screened over 900 articles that yielded 54 relevant articles. These articles are synthesized and organized by representation model, extraction criteria, and evaluation approaches. Based on the reviewed studies, we identify recent trends, open challenges, and potential research lines.
Comment: 37 pages, 3 figures, to be published in the journal ACM CSU
Timeline Summarization from Relevant Headlines
Timeline summaries are an effective way of helping newspaper readers keep track of long-lasting news stories, such as the Egypt revolution. A good timeline summary provides a concise description of only the main events, while maintaining good understandability. As manual construction of timelines is very time-consuming, there is a need for automatic approaches. However, automatic selection of relevant events is challenging due to the large number of news articles published every day. Furthermore, current state-of-the-art systems produce summaries that are suboptimal in terms of relevance and understandability. We present a new approach that exploits the headlines of online news articles instead of the articles' full text. The quantitative and qualitative results from our user studies confirm that our method outperforms state-of-the-art systems in these aspects.
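A headline-based timeline could be sketched as date-grouped selection: group candidate headlines by publication day and keep the top-scoring one(s) per day. The tuple format and the external relevance score are illustrative assumptions, not the paper's actual method.

```python
from collections import defaultdict

def build_timeline(headlines, max_per_day=1):
    """headlines: iterable of (date_str, headline, score) tuples.
    Keep the highest-scoring headline(s) per day, in date order."""
    by_day = defaultdict(list)
    for date, text, score in headlines:
        by_day[date].append((score, text))
    timeline = []
    for date in sorted(by_day):  # ISO date strings sort chronologically
        best = sorted(by_day[date], reverse=True)[:max_per_day]
        timeline.extend((date, text) for _, text in best)
    return timeline
```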
Towards More Human-Like Text Summarization: Story Abstraction Using Discourse Structure and Semantic Information.
PhD Thesis

With the massive amount of textual data being produced every day, the ability to effectively summarise text documents is becoming increasingly important. Automatic text summarization entails the selection and generalisation of the most salient points of a text in order to produce a summary. Approaches to automatic text summarization fall into one of two categories: abstractive or extractive. Extractive approaches involve the selection and concatenation of spans of text from a given document. Research in automatic text summarization began with extractive approaches, scoring and selecting sentences based on the frequency and proximity of words. In contrast, abstractive approaches are based on a process of interpretation, semantic representation, and generalisation. This is closer to the processes that psycholinguistics tells us humans perform when reading, remembering and summarizing. However, in the sixty years since its inception, the field has largely remained focused on extractive approaches.
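The classic frequency-based extractive scoring mentioned above can be sketched in a few lines: score each sentence by the corpus frequency of its content words and keep the highest-scoring ones. The stopword list and the regex-based sentence splitter are simplifications, not the thesis's actual baseline.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "of", "to", "and", "in", "is", "it", "with", "are"}

def extract_summary(text, n=1):
    """Score each sentence by the average corpus frequency of its content
    words and return the n highest-scoring sentences in document order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)
    def score(s):
        toks = [w for w in re.findall(r"[a-z]+", s.lower()) if w not in STOPWORDS]
        return sum(freq[w] for w in toks) / (len(toks) or 1)
    ranked = sorted(sentences, key=score, reverse=True)[:n]
    return [s for s in sentences if s in ranked]
```

Note that such a scorer can only ever return spans of the input text verbatim, which is exactly the limitation the thesis sets out to address.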
This thesis aims to answer the following questions. Does knowledge about the discourse structure of a text aid the recognition of summary-worthy content? If so, which specific aspects of discourse structure provide the greatest benefit? Can this structural information be used to produce abstractive summaries, and are these more informative than extractive summaries? To thoroughly examine these questions, they are each considered in isolation, and as a whole, on the basis of both manual and automatic annotations of texts. Manual annotations facilitate an investigation into the upper bounds of what can be achieved by the approach described in this thesis. Results based on automatic annotations show how this same approach is impacted by the current performance of imperfect preprocessing steps, and indicate its feasibility.
Extractive approaches to summarization are intrinsically limited by the surface text of the input document, in terms of both content selection and summary generation. Beginning with a motivation for moving away from these commonly used methods of producing summaries, I set out my methodology for a more human-like approach to automatic summarization which examines the benefits of using discourse-structural information. The potential benefit of this is twofold: moving away from a reliance on the wording of a text in order to detect important content, and generating concise summaries that are independent of the input text. The importance of discourse structure in signalling key textual material has previously been recognised; however, it has seen little applied use in the field of automatic summarization. A consideration of evaluation metrics also features significantly in the proposed methodology. These play a role both in preprocessing steps and in the evaluation of the final summary product. I provide evidence which indicates a disparity between the performance of coreference resolution systems as indicated by their standard evaluation metrics, and their performance in extrinsic tasks. Additionally, I point out a range of problems with the most commonly used metric, ROUGE, and suggest that at present summary evaluation should not be automated.
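A minimal ROUGE-1 recall sketch makes one of the criticisms above concrete: the metric counts unigram overlap with a reference summary, so a scrambled candidate scores identically, since word order and meaning are ignored. The texts here are toy examples, and real ROUGE implementations add stemming and further variants.

```python
from collections import Counter

def rouge1_recall(candidate, reference):
    """ROUGE-1 recall: fraction of reference unigrams matched by the
    candidate, with per-word counts clipped at the reference count."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(count, ref[w]) for w, count in cand.items())
    return overlap / sum(ref.values())
```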
To illustrate the general solutions proposed to the questions raised in this thesis, I use Russian Folk Tales as an example domain. This genre of text has been studied in depth and, most importantly, it has a rich narrative structure that has been recorded in detail. The rules of this formalism are suitable for the narrative structure reasoning system presented as part of this thesis. The specific discourse-structural elements considered cover the narrative structure of a text, coreference information, and the story-roles fulfilled by different characters.
The proposed narrative structure reasoning system produces high-level interpretations of a text according to the rules of a given formalism. For the example domain of Russian Folktales, a system is implemented which constructs such interpretations of a tale according to an existing set of rules and restrictions. I discuss how this process of detecting narrative structure can be transferred to other genres, and a key factor in the success of this process: how constrained the rules of the formalism are. The system enumerates all possible interpretations according to a set of constraints, meaning that a less restricted rule set leads to a greater number of interpretations.
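The enumeration of interpretations under constraints might be sketched as a search over function assignments, where ordering rules prune candidate readings. The Propp-like function inventory and constraints below are invented stand-ins for the much richer rule set used in the thesis.

```python
from itertools import permutations

# Illustrative narrative functions and ordering constraints; the thesis's
# actual formalism for folk tales is far richer than this toy rule set.
FUNCTIONS = ["villainy", "departure", "struggle", "victory"]
MUST_PRECEDE = [("villainy", "struggle"), ("struggle", "victory")]

def interpretations(segments):
    """Enumerate every assignment of one distinct function per story segment
    that respects the ordering constraints. Fewer constraints -> more readings."""
    results = []
    for perm in permutations(FUNCTIONS, len(segments)):
        order = {f: i for i, f in enumerate(perm)}
        if all(order[a] < order[b] for a, b in MUST_PRECEDE
               if a in order and b in order):
            results.append(dict(zip(segments, perm)))
    return results
```

Dropping either constraint from `MUST_PRECEDE` enlarges the result set, mirroring the observation that a less restricted rule set yields more interpretations.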
For the example domain, sentence-level discourse-structural annotations are then used to predict summary-worthy content. The results of this study are analysed in three parts. First, I examine the relative utility of individual discourse features and provide a qualitative discussion of these results. Second, the predictive abilities of these features are compared when they are manually annotated and when they are annotated with varying degrees of automation. Third, these results are compared to the predictive capabilities of classic extractive algorithms. I show that discourse features can be used to more accurately predict summary-worthy content than classic extractive algorithms. This holds true for automatically obtained annotations, but with a much clearer difference when using manual annotations.
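A classifier over discourse features, as described above, could be sketched as a logistic scorer over per-sentence feature values. The feature names, weights, bias, and threshold below are invented for illustration; they stand in for the thesis's trained model, not reproduce it.

```python
import math

# Hand-set weights standing in for learned coefficients.
WEIGHTS = {"in_narrative_core": 2.0, "mentions_protagonist": 1.0,
           "coref_chain_length": 0.3, "position_in_tale": -0.5}
BIAS = -1.5

def summary_worthy(features, threshold=0.5):
    """Score a sentence from its discourse features with a logistic model
    and return whether it clears the selection threshold."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z)) >= threshold
```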
The classifiers learned in the prediction of summary-worthy sentences are subsequently used to inform the production of both extractive and abstractive summaries to a given length. A human-based evaluation is used to compare these summaries, as well as the outputs of a classic extractive summarizer. I analyse the impact of knowledge about discourse structure, obtained both manually and automatically, on summary production. This allows for some insight into the knock-on effects on summary production that can occur from inaccurate discourse information (narrative structure and coreference information). My analyses show that even given inaccurate discourse information, the resulting abstractive summaries are considered more informative than their extractive counterparts. With human-level knowledge about discourse structure, these results are even clearer.
In conclusion, this research provides a framework which can be used to detect the narrative structure of a text, and shows its potential to provide a more human-like approach to automatic summarization. I show the limit of what is achievable with this approach both when manual annotations are obtainable, and when only automatic annotations are feasible. Nevertheless, this thesis supports the suggestion that the future of summarization lies with abstractive and not extractive techniques.