Guess who? Multilingual approach for the automated generation of author-stylized poetry
This paper addresses the problem of stylized text generation in a
multilingual setup. A version of a language model based on a long short-term
memory (LSTM) artificial neural network with extended phonetic and semantic
embeddings is used for stylized poetry generation. The quality of the resulting
poems generated by the network is estimated through bilingual evaluation
understudy (BLEU), a survey, and a new cross-entropy-based metric suggested
for problems of this type. The experiments show that the proposed model
consistently outperforms random-sample and vanilla-LSTM baselines, and human
readers also tend to associate the machine-generated texts with the target
author.
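The abstract does not spell out the form of the cross-entropy-based metric. As a rough illustration of the idea, one can score a candidate text by its average per-token cross-entropy under a model trained on the target author's corpus; the sketch below is a hypothetical stand-in that uses a simple add-one-smoothed bigram model in place of the paper's LSTM, with illustrative function names.

```python
import math
from collections import Counter

def bigram_counts(tokens):
    """Count bigrams and unigrams in a token list."""
    return Counter(zip(tokens, tokens[1:])), Counter(tokens)

def cross_entropy(candidate, corpus, vocab_size):
    """Average per-token cross-entropy (in bits) of `candidate` under an
    add-one-smoothed bigram model estimated from `corpus`.
    Lower values mean the text looks more like the reference corpus."""
    bigrams, unigrams = bigram_counts(corpus)
    pairs = list(zip(candidate, candidate[1:]))
    total = 0.0
    for a, b in pairs:
        # Smoothed conditional probability P(b | a)
        p = (bigrams[(a, b)] + 1) / (unigrams[a] + vocab_size)
        total -= math.log2(p)
    return total / len(pairs)
```

Under this reading, a generated poem that scores lower cross-entropy under the target author's model than under other authors' models would count as stylistically aligned.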
Paranoid Transformer: Reading Narrative of Madness as Computational Approach to Creativity
This paper revisits the receptive theory in the context of computational
creativity. It presents a case study of the Paranoid Transformer, a fully
autonomous text-generation engine whose raw output can be read as the
narrative of a mad digital persona without any additional human post-filtering.
We describe the technical details of the generative system, provide examples
of its output, and discuss the impact of receptive theory, chance discovery,
and the simulation of fringe mental states on the understanding of
computational creativity.
Adapting Language Models for Non-Parallel Author-Stylized Rewriting
Given the recent progress in language modeling using Transformer-based neural
models and an active interest in generating stylized text, we present an
approach to leverage the generalization capabilities of a language model to
rewrite an input text in a target author's style. Our proposed approach adapts
a pre-trained language model to generate author-stylized text by fine-tuning on
the author-specific corpus using a denoising autoencoder (DAE) loss in a
cascaded encoder-decoder framework. Optimizing over DAE loss allows our model
to learn the nuances of an author's style without relying on parallel data,
which has been a severe limitation of the previous related works in this space.
To evaluate the efficacy of our approach, we propose a linguistically-motivated
framework to quantify stylistic alignment of the generated text to the target
author at lexical, syntactic and surface levels. The evaluation framework is
both interpretable as it leads to several insights about the model, and
self-contained as it does not rely on external classifiers, e.g. sentiment or
formality classifiers. Qualitative and quantitative assessment indicates that
the proposed approach rewrites the input text with better alignment to the
target style while preserving the original content better than state-of-the-art
baselines.
Comment: Accepted for publication in Main Technical Track at AAAI 2
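The DAE loss described above trains the model to reconstruct clean text from a corrupted input, which is what removes the need for parallel data. The abstract does not give the exact corruption scheme, so the sketch below shows a typical noising function used in DAE setups for text (token dropout plus local shuffling); the function name, `drop_prob`, and `shuffle_k` are illustrative assumptions, not the paper's parameters.

```python
import random

def noise(tokens, drop_prob=0.1, shuffle_k=3, rng=None):
    """Corrupt a token sequence for denoising-autoencoder training:
    randomly drop tokens, then shuffle them locally within a window
    of roughly `shuffle_k` positions."""
    rng = rng or random.Random(0)
    # Token dropout: each token survives with probability 1 - drop_prob.
    kept = [t for t in tokens if rng.random() > drop_prob]
    if not kept:
        kept = tokens[:1]  # never emit an empty sequence
    # Local shuffle: add a random jitter in [0, shuffle_k) to each index
    # and re-sort, so tokens move at most a few positions.
    keys = [i + rng.uniform(0, shuffle_k) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept), key=lambda x: x[0])]
```

In such a setup, the model is fine-tuned on the author-specific corpus to map `noise(x)` back to `x`, so it learns to emit fluent text in the author's style without ever seeing aligned source/target sentence pairs.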
The AI Ghostwriter Effect: When Users Do Not Perceive Ownership of AI-Generated Text But Self-Declare as Authors
Human-AI interaction in text production increases complexity in authorship.
In two empirical studies (n1 = 30 & n2 = 96), we investigate authorship and
ownership in human-AI collaboration for personalized language generation. We
show an AI Ghostwriter Effect: Users do not consider themselves the owners and
authors of AI-generated text but refrain from publicly declaring AI authorship.
Personalization of AI-generated texts did not impact the AI Ghostwriter Effect,
and higher levels of participants' influence on texts increased their sense of
ownership. Participants were more likely to attribute ownership to supposedly
human ghostwriters than AI ghostwriters, resulting in a higher
ownership-authorship discrepancy for human ghostwriters. Rationalizations for
authorship in AI ghostwriters and human ghostwriters were similar. We discuss
how our findings relate to psychological ownership and human-AI interaction to
lay the foundations for adapting authorship frameworks and user interfaces in
AI text-generation tasks.
Comment: Pre-print; currently under review