11 research outputs found

    Using Sentence-Level LSTM Language Models for Script Inference

    Abstract: There is a small but growing body of research on statistical scripts, models of event sequences that allow probabilistic inference of implicit events from documents. These systems operate on structured verb-argument events produced by an NLP pipeline. We compare these systems with recent Recurrent Neural Net models that directly operate on raw tokens to predict sentences, finding the latter to be roughly comparable to the former in terms of predicting missing events in documents.
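
    The sentence-level models the abstract refers to are standard recurrent language models trained on raw tokens. Below is a minimal PyTorch sketch of such an LSTM language model; the class name, layer sizes, and training objective are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a token-level LSTM language model of the kind the abstract
# describes. Hyperparameters and the loss shown are illustrative assumptions.
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        # tokens: (batch, seq_len) integer ids; returns per-position logits
        out, state = self.lstm(self.embed(tokens), state)
        return self.proj(out), state

# Toy usage: next-token cross-entropy, the standard language-modeling loss.
model = LSTMLanguageModel(vocab_size=10000)
batch = torch.randint(0, 10000, (4, 20))   # 4 sequences of 20 token ids
logits, _ = model(batch[:, :-1])           # predict token t+1 from the prefix
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)),
    batch[:, 1:].reshape(-1),
)
```

    The abstract's comparison concerns how well sentence predictions from such a model recover missing events relative to structured verb-argument script models.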

    Understanding Actors and Evaluating Personae with Gaussian Embeddings

    Understanding narrative content has become an increasingly popular topic. Nonetheless, research on identifying common types of narrative characters, or personae, is impeded by the lack of automatic and broad-coverage evaluation methods. We argue that computationally modeling actors provides benefits, including novel evaluation mechanisms for personae. Specifically, we propose two actor-modeling tasks, cast prediction and versatility ranking, which can capture complementary aspects of the relation between actors and the characters they portray. For an actor model, we present a technique for embedding actors, movies, character roles, genres, and descriptive keywords as Gaussian distributions and translation vectors, where the Gaussian variance corresponds to actors' versatility. Empirical results indicate that (1) the technique considerably outperforms TransE (Bordes et al. 2013) and ablation baselines and (2) automatically identified persona topics (Bamman, O'Connor, and Smith 2013) yield statistically significant improvements in both tasks, whereas simplistic persona descriptors including age and gender perform inconsistently, validating prior research.
    Comment: Accepted at AAAI 201
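
    To make the actor-modeling setup concrete, here is a hedged sketch of how a triple such as (actor, relation, target) might be scored when actors are diagonal Gaussians and relations are translation vectors. The function name, the density-based score, and the diagonal-covariance form are illustrative choices, not the paper's exact formulation.

```python
# Hedged sketch: score a (actor, relation, movie/role) triple by evaluating the
# translated target under the actor's diagonal Gaussian. All names are assumed.
import math
import torch

def triple_score(actor_mean, actor_logvar, relation_vec, target_vec):
    """Log-density of (target - relation) under the actor's diagonal Gaussian.

    A larger variance (a more 'versatile' actor, in the abstract's terms)
    spreads mass over more targets, so no single target scores sharply higher.
    """
    var = actor_logvar.exp()
    diff = target_vec - relation_vec - actor_mean
    log_norm = -0.5 * (actor_logvar + math.log(2 * math.pi)).sum(-1)
    return log_norm - 0.5 * (diff * diff / var).sum(-1)

# Toy usage: rank two candidate movies for one actor under an assumed
# "acted_in" translation vector.
dim = 50
actor_mean, actor_logvar = torch.zeros(dim), torch.zeros(dim)  # unit variance
acted_in = torch.randn(dim)
candidates = torch.randn(2, dim)
scores = triple_score(actor_mean, actor_logvar, acted_in, candidates)  # shape (2,)
```

    Under this reading, a versatile actor (large variance) is compatible with many roles but strongly tied to none, which is one way the abstract's versatility-ranking task could be operationalized.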