Controllable Neural Story Plot Generation via Reinforcement Learning
Language-model-based approaches to story plot generation attempt to
construct a plot by sampling from a language model (LM) to predict the next
character, word, or sentence to add to the story. LM techniques lack the
ability to receive guidance from the user to achieve a specific goal, resulting
in stories that don't have a clear sense of progression and lack coherence. We
present a reward-shaping technique that analyzes a story corpus and produces
intermediate rewards that are backpropagated into a pre-trained LM in order to
guide the model towards a given goal. Automated evaluations show our technique
can create a model that generates story plots which consistently achieve a
specified goal. Human-subject studies show that the generated stories have more
plausible event ordering than baseline plot generation techniques.
Comment: Published in IJCAI 201
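To make the reward-shaping idea concrete, below is a minimal, illustrative sketch (not the authors' implementation): it derives intermediate rewards for event verbs from a toy corpus, scoring verbs by how often and how closely they precede a chosen goal verb. The corpus, the goal verb, and the scoring formula are assumptions for illustration; in the paper, such rewards would weight a policy-gradient-style update to the pre-trained language model.

```python
# Illustrative sketch only: corpus-derived reward shaping for goal-driven
# plot generation. Toy corpus of verb sequences and a hypothetical goal
# verb; not the authors' implementation.
from collections import defaultdict

CORPUS = [
    ["meet", "argue", "fight", "reconcile", "marry"],
    ["meet", "travel", "fall_in_love", "marry"],
    ["meet", "argue", "separate"],
]
GOAL = "marry"

def intermediate_rewards(corpus, goal):
    """Score each verb by how often and how soon it precedes the goal verb."""
    dist_sum = defaultdict(float)
    count = defaultdict(int)
    for story in corpus:
        if goal not in story:
            continue                      # only goal-reaching stories give signal
        g = story.index(goal)
        for i, verb in enumerate(story[:g]):
            dist_sum[verb] += g - i       # distance from this verb to the goal
            count[verb] += 1
    # Higher reward for verbs that occur frequently and close to the goal.
    return {v: count[v] / (1.0 + dist_sum[v] / count[v]) for v in count}

rewards = intermediate_rewards(CORPUS, GOAL)
# During fine-tuning, each sampled verb's log-probability would be weighted
# by its shaped reward (a REINFORCE-style update); here we just print them.
for verb, r in sorted(rewards.items(), key=lambda kv: -kv[1]):
    print(f"{verb:14s} reward={r:.2f}")
```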
Are NLP Models Good at Tracing Thoughts: An Overview of Narrative Understanding
Narrative understanding involves capturing the author's cognitive processes,
providing insights into their knowledge, intentions, beliefs, and desires.
Although large language models (LLMs) excel in generating grammatically
coherent text, their ability to comprehend the author's thoughts remains
uncertain. This limitation hinders the practical applications of narrative
understanding. In this paper, we conduct a comprehensive survey of narrative
understanding tasks, thoroughly examining their key features, definitions,
taxonomy, associated datasets, training objectives, evaluation metrics, and
limitations. Furthermore, we explore the potential of expanding the
capabilities of modularized LLMs to address novel narrative understanding
tasks. By framing narrative understanding as the retrieval of the author's
imaginative cues that outline the narrative structure, our study introduces a
fresh perspective on enhancing narrative comprehension.
Plan-And-Write: Towards Better Automatic Storytelling
Automatic storytelling is challenging since it requires generating long,
coherent natural language that describes a sensible sequence of events. Despite
considerable efforts on automatic story generation in the past, prior work
either is restricted in plot planning, or can only generate stories in a narrow
domain. In this paper, we explore open-domain story generation that writes
stories given a title (topic) as input. We propose a plan-and-write
hierarchical generation framework that first plans a storyline, and then
generates a story based on the storyline. We compare two planning strategies.
The dynamic schema interweaves story planning and its surface realization in
text, while the static schema plans out the entire storyline before generating
stories. Experiments show that with explicit storyline planning, the generated
stories are more diverse, coherent, and on topic than those generated without
creating a full plan, according to both automatic and human evaluations.
Comment: Accepted by AAAI 201
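The following toy sketch contrasts the two planning strategies described above, assuming placeholder planner and surface-realisation functions rather than the authors' neural models: the static schema plans the full keyword storyline before writing, while the dynamic schema interleaves planning and writing.

```python
# Toy sketch of the plan-and-write framework: a storyline is a keyword
# sequence; the planner and surface realiser below are placeholders, not
# the paper's neural models.
import random

random.seed(0)
VOCAB = ["forest", "storm", "shelter", "friends", "rescue"]

def plan_next_keyword(title, plan_so_far, story_so_far=None):
    """Placeholder planner: pick an unused keyword. A real model would
    condition on the title, the plan so far and, in the dynamic schema,
    the partially written story."""
    remaining = [w for w in VOCAB if w not in plan_so_far]
    return random.choice(remaining) if remaining else "the_end"

def write_sentence(title, keyword, story_so_far):
    """Placeholder surface realiser keyed on one storyline keyword."""
    return f"A sentence about '{keyword}' in the story titled '{title}'."

def static_schema(title, length=4):
    """Plan the entire storyline first, then realise it sentence by sentence."""
    plan = []
    for _ in range(length):
        plan.append(plan_next_keyword(title, plan))
    story = []
    for keyword in plan:
        story.append(write_sentence(title, keyword, story))
    return plan, story

def dynamic_schema(title, length=4):
    """Interleave planning and writing: each keyword is chosen with the
    partially written story in view."""
    plan, story = [], []
    for _ in range(length):
        keyword = plan_next_keyword(title, plan, story)
        plan.append(keyword)
        story.append(write_sentence(title, keyword, story))
    return plan, story

print(static_schema("The Lost Hiker"))
print(dynamic_schema("The Lost Hiker"))
```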
Event Representations for Automated Story Generation with Deep Neural Nets
Automated story generation is the problem of automatically selecting a
sequence of events, actions, or words that can be told as a story. We seek to
develop a system that can generate stories by learning everything it needs to
know from textual story corpora. To date, recurrent neural networks that learn
language models at character, word, or sentence levels have had little success
generating coherent stories. We explore the question of event representations
that provide a mid-level of abstraction between words and sentences in order to
retain the semantic information of the original data while minimizing event
sparsity. We present a technique for preprocessing textual story data into
event sequences. We then present a technique for automated story generation
whereby we decompose the problem into the generation of successive events
(event2event) and the generation of natural language sentences from events
(event2sentence). We give empirical results comparing different event
representations and their effects on event successor generation and the
translation of events to natural language.
Comment: Submitted to AAAI'1
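A minimal sketch of the described decomposition follows, using rule-based stand-ins (a crude tuple extractor, a lookup table, and templates) in place of the trained event2event and event2sentence networks; the event tuple format and the example events are illustrative assumptions.

```python
# Toy sketch of the event2event / event2sentence decomposition. An "event"
# is a (subject, verb, object, modifier) tuple; the extractor, successor
# table, and realiser below are rule-based stand-ins for trained models.

def sentence_to_event(sentence):
    """Crude preprocessing stand-in: keep the first four words, padded.
    The paper instead uses dependency parsing and generalisation."""
    words = [w.strip(".,").lower() for w in sentence.split()]
    return tuple((words + ["<empty>"] * 4)[:4])

def event2event(event):
    """Toy successor model: a lookup table standing in for a trained
    sequence-to-sequence network over event tuples."""
    successors = {
        ("knight", "finds", "map", "<empty>"): ("knight", "seeks", "treasure", "cave"),
        ("knight", "seeks", "treasure", "cave"): ("dragon", "guards", "treasure", "<empty>"),
    }
    return successors.get(event, ("story", "ends", "<empty>", "<empty>"))

def event2sentence(event):
    """Toy surface realisation from an event tuple back into text."""
    words = [w for w in event if w != "<empty>"]
    return " ".join(words).capitalize() + "."

event = sentence_to_event("Knight finds map")
for _ in range(3):                      # generate three successive events
    event = event2event(event)
    print(event2sentence(event))
```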
Story Ending Generation with Incremental Encoding and Commonsense Knowledge
Generating a reasonable ending for a given story context, i.e., story ending
generation, is a strong indication of story comprehension. This task requires
not only understanding the context clues, which play an important role in
planning the plot, but also handling implicit knowledge to produce a
reasonable, coherent story.
In this paper, we devise a novel model for story ending generation. The model
adopts an incremental encoding scheme to represent the context clues that
span the story context. In addition, commonsense knowledge is applied
through multi-source attention to facilitate story comprehension, and thus to
help generate coherent and reasonable endings. By building context clues and
using implicit knowledge, the model is able to produce reasonable story
endings.
Automatic and manual evaluation shows that our model can generate more
reasonable story endings than state-of-the-art baselines.
Comment: Accepted in AAAI201
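The following numpy sketch illustrates the two ingredients named above, incremental encoding of context sentences and multi-source attention over context clues and commonsense knowledge; the embeddings, dimensions, and knowledge entries are toy assumptions, not the paper's architecture.

```python
# Toy numpy sketch: incremental encoding of context sentences plus a
# multi-source attention step mixing context clues with commonsense
# knowledge vectors. Embeddings, dimensions, and knowledge entries are
# illustrative assumptions, not the paper's architecture.
import numpy as np

DIM = 8

def embed(text):
    """Deterministic toy embedding seeded by the characters of the text."""
    seed = sum(ord(c) for c in text)
    return np.random.default_rng(seed).normal(size=DIM)

def attend(query, keys):
    """Dot-product attention returning a weighted sum of the key vectors."""
    scores = keys @ query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ keys

context = ["Tom forgot his umbrella.", "Dark clouds gathered.", "The rain started."]
knowledge = np.stack([embed(k) for k in
                      ["rain makes people wet", "umbrellas keep people dry"]])

# Incremental encoding: each sentence's state is built from its own
# embedding, attention over everything encoded so far, and the carried state.
states = []
h = np.zeros(DIM)
for sentence in context:
    x = embed(sentence)
    clue = attend(x, np.stack(states)) if states else np.zeros(DIM)
    h = np.tanh(x + clue + h)
    states.append(h)

# Multi-source attention for the ending decoder's first step: combine
# attention over context clues with attention over commonsense knowledge.
query = states[-1]
context_vector = attend(query, np.stack(states))
knowledge_vector = attend(query, knowledge)
decoder_input = np.tanh(context_vector + knowledge_vector)
print(decoder_input.round(3))
```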
Towards a crowdsourced solution for the authoring bottleneck in interactive narratives
Interactive Storytelling research has produced a wealth of technologies that can be
employed to create personalised narrative experiences, in which the audience takes
a participating rather than an observing role. However, this technology has so far
not led to the production of large-scale, playable interactive story experiences that
realise
the ambitions of the field. One main reason for this state of affairs is the difficulty
of authoring interactive stories, a task that requires describing a huge amount of
story building blocks in a machine-friendly fashion. This is not only technically
and conceptually more challenging than traditional narrative authoring but also a
scalability problem.
This thesis examines the authoring bottleneck through a case study and a literature
survey and advocates a solution based on crowdsourcing. Prior work has already
shown that combining a large number of example stories collected from crowd workers
with a system that merges these contributions into a single interactive story can be
an effective way to reduce the authorial burden. As a refinement of such an approach,
this thesis introduces the novel concept of Crowd Task Adaptation. It argues that in
order to maximise the usefulness of the collected stories, a system should dynamically
and intelligently analyse the corpus of collected stories and, based on this analysis,
modify the tasks handed out to crowd workers.
Two authoring systems, ENIGMA and CROSCAT, which demonstrate two radically
different approaches to using the Crowd Task Adaptation paradigm, have been
implemented and are described in this thesis. While ENIGMA adapts tasks through a
real-time dialogue between crowd workers and the system, informed by what has been
learned from previously collected stories, CROSCAT modifies the backstory given to
crowd workers
in order to optimise the distribution of branching points in the tree structure that
combines all collected stories. Two experimental studies of crowdsourced authoring
are also presented. They lead to guidelines on how to employ crowdsourced authoring
effectively, but more importantly the results of one of the studies demonstrate the
effectiveness of the Crowd Task Adaptation approach.
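As a purely hypothetical illustration of the Crowd Task Adaptation idea in CROSCAT's spirit, the sketch below inspects a tree of collected stories and selects, as the next worker's backstory, the path prefix with the fewest distinct continuations; the data and the selection heuristic are assumptions, not the system's actual algorithm.

```python
# Hypothetical sketch of Crowd Task Adaptation in CROSCAT's spirit: pick,
# as the next worker's backstory, the story-tree prefix with the fewest
# distinct continuations. Data and heuristic are illustrative assumptions.

STORIES = [
    ["hero leaves village", "hero meets mentor", "hero wins duel"],
    ["hero leaves village", "hero meets mentor", "hero loses duel"],
    ["hero leaves village", "hero is ambushed", "hero escapes"],
]

def branch_widths(stories, prefix_len):
    """Count distinct continuations for every prefix of the given length."""
    continuations = {}
    for story in stories:
        if len(story) > prefix_len:
            prefix = tuple(story[:prefix_len])
            continuations.setdefault(prefix, set()).add(story[prefix_len])
    return {prefix: len(nexts) for prefix, nexts in continuations.items()}

def next_backstory(stories, max_prefix_len=2):
    """Hand out the least-branched prefix as the next crowd task's backstory,
    steering new contributions toward under-developed branching points."""
    best_prefix, best_width = None, None
    for n in range(1, max_prefix_len + 1):
        for prefix, width in branch_widths(stories, n).items():
            if best_width is None or width < best_width:
                best_prefix, best_width = prefix, width
    return list(best_prefix)

print(next_backstory(STORIES))
```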