
    Generating Weather Forecast Texts with Case Based Reasoning

    Several techniques have been used to generate weather forecast texts. In this paper, case-based reasoning (CBR) is proposed for weather forecast text generation, because similar weather conditions occur over time and should have similar forecast texts. CBR-METEO, a system for generating weather forecast texts, was developed using a generic framework (jCOLIBRI) which provides modules for the standard components of the CBR architecture. The advantage of a CBR approach is that systems can be built in minimal time, with far less human effort after initial consultation with experts. The approach depends heavily on the quality of the retrieval and revision components of the CBR process. We evaluated CBR-METEO with NIST, an automated metric which has been shown to correlate well with human judgements in this domain. The system shows comparable performance with other NLG systems that perform the same task.
    Comment: 6 pages
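    The retrieve-and-reuse step at the heart of such a system can be sketched as nearest-neighbour lookup over past weather conditions. The case base, feature encoding, and distance metric below are illustrative assumptions, not the actual CBR-METEO / jCOLIBRI implementation.

    ```python
    # Minimal sketch of CBR retrieval for forecast text generation:
    # find the stored case whose weather features are closest to the
    # query, and reuse its forecast text.
    import math

    case_base = [
        # ([temp_c, wind_kph, rain_mm], forecast text) -- toy cases
        ([18.0, 10.0, 0.0], "Dry and mild with light winds."),
        ([7.0, 35.0, 4.0], "Cold and windy with outbreaks of rain."),
        ([25.0, 5.0, 0.0], "Warm and sunny with calm conditions."),
    ]

    def retrieve(query):
        """Return the forecast text of the most similar stored case."""
        def dist(features):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(features, query)))
        features, text = min(case_base, key=lambda case: dist(case[0]))
        return text

    print(retrieve([20.0, 8.0, 0.0]))  # closest case is the mild, dry one
    ```

    A full CBR cycle would follow retrieval with a revision step that adapts the reused text to the new conditions, which is where much of the system's quality is decided.
    
    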

    The precision electroweak data in warped extra-dimension models

    The Randall-Sundrum scenario with Standard Model fields in the bulk and a custodial symmetry is considered. We determine several minimal quark representations that can address the anomalies in the forward-backward b-quark asymmetry A^b_FB, while reproducing the bottom and top masses via wave-function overlaps. The calculated corrections to the Zbb coupling include the combined effects of mixings with both Kaluza-Klein excitations of gauge bosons and new b'-like states. It is shown that the mechanism in which the left-handed doublet of third-generation quarks results from a mixing, on the UV boundary, of the introduced fields Q_1L and Q_2L is necessary for phenomenological reasons. Within the obtained models, both the global fit of R_b with A^b_FB [at the various center-of-mass energies] and the fit of the latest precision electroweak data in the light-fermion sector can simultaneously be improved significantly with respect to the pure Standard Model case, for M_KK = 3, 4, 5 TeV (first KK gauge boson) and a best-fit Higgs mass m_h > 115 GeV, i.e. compatible with the LEP2 direct limit. The quantitative analysis of the oblique parameters S, T, U even shows that heavy Higgs mass values up to ~500 GeV may still give rise to an acceptable quality of the electroweak data fit, in contrast with the Standard Model. The set of constraints obtained on the parameter space, derived partly from precision electroweak data, is complementary to a future direct exploration of this parameter space at the LHC. In particular, we find that custodians, like b' modes, can be as light as ~1200 GeV, i.e. a mass possibly within the reach of the LHC.
    Comment: 24 pages, 8 figures. Added references, corrected typos, and completed the Higgs mass dependence discussion

    Controllable Neural Story Plot Generation via Reinforcement Learning

    Language-model-based approaches to story plot generation attempt to construct a plot by sampling from a language model (LM) to predict the next character, word, or sentence to add to the story. LM techniques lack the ability to receive guidance from the user to achieve a specific goal, resulting in stories that lack a clear sense of progression and coherence. We present a reward-shaping technique that analyzes a story corpus and produces intermediate rewards that are backpropagated into a pre-trained LM in order to guide the model towards a given goal. Automated evaluations show our technique can create a model that generates story plots which consistently achieve a specified goal. Human-subject studies show that the generated stories have more plausible event ordering than baseline plot generation techniques.
    Comment: Published in IJCAI 201
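    The corpus-analysis step of reward shaping can be sketched as scoring each event by how close it tends to occur to the goal event in training stories; these scores then serve as intermediate rewards. The toy corpus, the goal, and the inverse-mean-distance formula below are assumptions for illustration, not the paper's exact reward definition.

    ```python
    # Sketch of corpus-derived reward shaping: verbs that tend to appear
    # closer to the goal event in training stories earn higher
    # intermediate rewards, guiding generation toward the goal.
    from collections import defaultdict

    corpus = [
        ["meet", "argue", "fight", "reconcile", "marry"],
        ["meet", "travel", "fight", "reconcile", "marry"],
        ["meet", "argue", "reconcile", "marry"],
    ]
    GOAL = "marry"

    def shaped_rewards(stories, goal):
        """Reward each verb by the inverse of its mean distance to the goal."""
        distances = defaultdict(list)
        for story in stories:
            goal_idx = story.index(goal)
            for i, verb in enumerate(story[:goal_idx]):
                distances[verb].append(goal_idx - i)
        return {v: 1.0 / (sum(d) / len(d)) for v, d in distances.items()}

    rewards = shaped_rewards(corpus, GOAL)
    # "reconcile" always precedes the goal directly, so it scores highest
    print(sorted(rewards, key=rewards.get, reverse=True))
    ```

    In the full technique these rewards would be fed into a policy-gradient update of the pre-trained LM, biasing it toward event sequences that make progress toward the goal.
    
    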

    The automatic generation of narratives

    We present the Narrator, a Natural Language Generation component used in a digital storytelling system. The system takes as input a formal representation of a story plot, in the form of a causal network relating the actions of the characters to their motives and consequences. Based on this input, the Narrator generates a narrative in Dutch by carrying out tasks such as constructing a Document Plan, performing aggregation and ellipsis, and generating appropriate referring expressions. We describe how these tasks are performed and illustrate the process with examples, showing how this results in the generation of coherent and well-formed narrative texts.
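    One of the NLG tasks mentioned above, aggregation, can be sketched as merging consecutive clauses that share a subject into a single coordinated sentence. The clause format and merge rule below are an assumption for illustration, not the Narrator's actual algorithm.

    ```python
    # Minimal sketch of clause aggregation: consecutive (subject, predicate)
    # clauses with the same subject are merged with "and", avoiding the
    # repetitive "X did A. X did B." pattern.

    def aggregate(clauses):
        """Merge consecutive clauses sharing a subject into one sentence."""
        merged = []
        for subject, predicate in clauses:
            if merged and merged[-1][0] == subject:
                merged[-1] = (subject, merged[-1][1] + " and " + predicate)
            else:
                merged.append((subject, predicate))
        return [f"{s} {p}." for s, p in merged]

    print(aggregate([("The knight", "drew his sword"),
                     ("The knight", "attacked the dragon"),
                     ("The dragon", "fled")]))
    # → ["The knight drew his sword and attacked the dragon.",
    #    "The dragon fled."]
    ```

    A real aggregation module would also handle ellipsis and choose referring expressions (e.g. replacing the repeated subject with a pronoun) rather than simply coordinating predicates.
    
    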

    Event Representations for Automated Story Generation with Deep Neural Nets

    Automated story generation is the problem of automatically selecting a sequence of events, actions, or words that can be told as a story. We seek to develop a system that can generate stories by learning everything it needs to know from textual story corpora. To date, recurrent neural networks that learn language models at the character, word, or sentence level have had little success generating coherent stories. We explore the question of event representations that provide a mid-level of abstraction between words and sentences, in order to retain the semantic information of the original data while minimizing event sparsity. We present a technique for preprocessing textual story data into event sequences. We then present a technique for automated story generation whereby we decompose the problem into the generation of successive events (event2event) and the generation of natural language sentences from events (event2sentence). We give empirical results comparing different event representations and their effects on event successor generation and the translation of events to natural language.
    Comment: Submitted to AAAI'1
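    The preprocessing idea above can be sketched as reducing each sentence to a small event tuple, e.g. (subject, verb, object, modifier). The toy "parser" below only pattern-matches simple subject-verb-object sentences by word position; a real system would use dependency parsing and word generalization, and the tuple shape and placeholder name are assumptions for illustration.

    ```python
    # Sketch of an event abstraction: each sentence is reduced to a
    # (subject, verb, object, modifier) tuple, trading surface detail
    # for a denser mid-level representation.

    def eventify(sentence):
        """Reduce a simple S-V-O(-M) sentence to a 4-tuple event."""
        tokens = sentence.lower().rstrip(".").split()
        subject, verb = tokens[0], tokens[1]
        obj = tokens[2] if len(tokens) > 2 else "EmptyParameter"
        modifier = tokens[3] if len(tokens) > 3 else "EmptyParameter"
        return (subject, verb, obj, modifier)

    story = ["Alice met Bob.", "Bob smiled."]
    print([eventify(s) for s in story])
    # → [('alice', 'met', 'bob', 'EmptyParameter'),
    #    ('bob', 'smiled', 'EmptyParameter', 'EmptyParameter')]
    ```

    With sentences reduced to such tuples, event2event becomes sequence prediction over tuples, and event2sentence becomes the inverse problem of rendering a tuple back into fluent text.
    
    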

    Imaginative Recall with Story Intention Graphs

    Intelligent storytelling systems either formalize specific narrative structures proposed by narratologists (such as Propp and Bremond), or are founded on formal representations from artificial intelligence (such as plan structures from classical planning). This disparity in underlying knowledge representations leads to a lack of common evaluation metrics across story generation systems, particularly around the creativity of generators. This paper takes Skald, a reconstruction of the Minstrel creative story generation system, and maps its representation to the formal narrative representation of Story Intention Graphs (SIGs) proposed by Elson et al. This mapping makes it possible to expand the creative space of stories generated through imaginative recall in Minstrel while maintaining narrative complexity. We show that the SIG holds promise as an intermediate representation useful for evaluating story generation systems.