Importance-Aware Learning for Neural Headline Editing
Many social media news writers are not professionally trained. Therefore,
social media platforms have to hire professional editors to adjust amateur
headlines to attract more readers. We propose to automate this headline editing
process through neural network models to provide more immediate writing support
for these social media news writers. To train such a neural headline editing
model, we collected a dataset which contains articles with original headlines
and professionally edited headlines. However, it is expensive to collect a
large number of professionally edited headlines. To solve this low-resource
problem, we design an encoder-decoder model which leverages large scale
pre-trained language models. We further improve the pre-trained model's quality
by introducing a headline generation task as an intermediate task before the
headline editing task. Also, we propose Self Importance-Aware (SIA) loss to
address the different levels of editing in the dataset by down-weighting the
importance of easily classified tokens and sentences. With the help of
Pre-training, Adaptation, and SIA, the model learns to generate headlines in
the professional editor's style. Experimental results show that our method
significantly improves the quality of headline editing compared with
previous methods. Comment: AAAI 202
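The abstract describes the SIA loss only at a high level: per-token losses are down-weighted when the model already classifies a token confidently. A minimal sketch of that idea, assuming a focal-loss-style weighting scheme (the exponent `gamma` and the exact weighting form are illustrative assumptions, not the paper's published formula):

```python
import math

def sia_style_token_loss(token_probs, targets, gamma=2.0):
    """Hypothetical sketch of an importance-aware token loss.

    Each token's cross-entropy is scaled by (1 - p)**gamma, where p is the
    model's probability for the gold token, so easily classified tokens
    (high p) contribute less to the total loss.

    token_probs: list of dicts mapping token -> predicted probability
    targets:     list of gold tokens, one per position
    """
    total = 0.0
    for dist, gold in zip(token_probs, targets):
        p = dist[gold]
        weight = (1.0 - p) ** gamma  # confident predictions are down-weighted
        total += -weight * math.log(p)
    return total / len(targets)
```

Under this weighting, a token predicted with probability 0.9 contributes far less than one predicted with probability 0.5, which matches the abstract's goal of focusing training on the harder, more heavily edited spans.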
Let AI Entertain You: Increasing User Engagement with Generative AI and Rejection Sampling
While generative AI excels in content generation, it does not always increase
user engagement. This can be attributed to two main factors. First, generative
AI generates content without incorporating explicit or implicit feedback about
user interactions. Even if the generated content seems to be more informative
or well-written, it does not necessarily lead to an increase in user
activities, such as clicks. Second, there is a concern with the quality of the
content generative AI produces, which often lacks the distinctiveness and
authenticity that human-created content possesses. These two factors can lead
to content that fails to meet specific needs and preferences of users,
ultimately reducing its potential to be engaging.
This paper presents a generic framework of how to improve user engagement
with generative AI by leveraging user feedback. Our solutions employ rejection
sampling, a technique used in reinforcement learning, to boost engagement
metrics. We applied the framework to email notification subject-line
generation for an online social network and achieved significant
engagement metric lift, including +1% Sessions and +0.4% Weekly Active Users. We
believe our work offers a universal framework that enhances user engagement
with generative AI, particularly when standard generative AI alone reaches its
limits in making content more captivating. To the best of our knowledge, this
represents an early milestone in the industry's successful use of generative AI
to enhance user engagement.
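The rejection-sampling approach described here amounts to a best-of-n selection loop: draw several candidate subject lines from a generator and keep the one an engagement model scores highest. A minimal sketch, assuming a candidate generator and a scoring function as stand-ins for the paper's generative and engagement-prediction models (all names here are illustrative):

```python
def best_of_n(generate, score, n=8):
    """Hypothetical best-of-n rejection-sampling sketch.

    generate: callable returning one candidate subject line per call
              (stand-in for a generative model)
    score:    callable mapping a candidate to a predicted engagement score
              (stand-in for a reward / engagement model)
    n:        number of candidates to sample before selecting
    """
    candidates = [generate() for _ in range(n)]
    # Keep only the highest-scoring candidate; the rest are rejected.
    return max(candidates, key=score)
```

In practice the scorer would be trained on historical click or session feedback, which is how the framework injects implicit user feedback into content selection.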