20 research outputs found

    Hierarchical Quantized Representations for Script Generation

    Scripts define knowledge about how everyday scenarios (such as going to a restaurant) are expected to unfold. One of the challenges in learning scripts is the hierarchical nature of this knowledge. For example, a suspect who is arrested might plead innocent or guilty, and a very different track of events is then expected to follow. To capture this type of information, we propose an autoencoder model with a latent space defined by a hierarchy of categorical variables. We utilize a recently proposed vector quantization based approach, which allows continuous embeddings to be associated with each latent variable value. This permits the decoder to softly decide which portions of the latent hierarchy to condition on by attending over the value embeddings for a given setting. Our model effectively encodes and generates scripts, outperforming a recent language modeling-based method on several standard tasks and achieving substantially lower perplexity than that baseline. Comment: EMNLP 201
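
    The abstract describes two mechanisms: quantizing encoder outputs against per-level codebooks of value embeddings, and letting the decoder attend over the embeddings of the chosen values. The sketch below illustrates both in NumPy; the dimensions, the nearest-neighbour quantizer, and the single attention query are illustrative assumptions, not the paper's implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def quantize(z, codebook):
        # Map a continuous encoding to the index and embedding of its
        # nearest codebook entry (one categorical latent value).
        dists = np.linalg.norm(codebook - z, axis=1)
        idx = int(np.argmin(dists))
        return idx, codebook[idx]

    def attend(query, value_embeddings):
        # Soft attention of a decoder query over the chosen value embeddings,
        # so the decoder can decide how much of the hierarchy to condition on.
        scores = value_embeddings @ query
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ value_embeddings

    d, levels = 8, 3                                  # embedding size and hierarchy depth (assumed)
    codebooks = [rng.normal(size=(4, d)) for _ in range(levels)]  # 4 values per level (assumed)
    encodings = rng.normal(size=(levels, d))          # stand-in encoder outputs, one per level
    chosen = [quantize(z, cb) for z, cb in zip(encodings, codebooks)]
    value_embs = np.stack([emb for _, emb in chosen])

    context = attend(rng.normal(size=d), value_embs)  # vector the decoder would condition on
    print([idx for idx, _ in chosen], context.shape)

    In a trained model the codebooks would be learned, typically with straight-through gradients and a commitment loss as in standard vector-quantized autoencoders; here they are random for illustration.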

    Event Representations with Tensor-based Compositions

    Robust and flexible event representations are important to many core areas of language understanding. Scripts were proposed early on as a way of representing sequences of events for such understanding, and have recently attracted renewed attention. However, obtaining effective representations for modeling script-like event sequences is challenging: it requires representations that capture both event-level and scenario-level semantics. We propose a new tensor-based composition method for creating event representations. The method captures more subtle semantic interactions between an event and its entities, and yields representations that are effective at multiple event-related tasks. With these continuous representations, we also devise a simple schema generation method which produces better schemas than a prior discrete-representation-based method. Our analysis shows that the tensors capture distinct usages of a predicate even when there are only subtle differences in their surface realizations. Comment: Accepted at AAAI 201
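
    As a concrete illustration of tensor-based composition, the sketch below contracts a shared third-order tensor with a predicate embedding and an argument embedding to produce an event vector. The dimensions, the single shared tensor, and the cosine comparison are assumptions made for illustration, not the paper's exact parameterization.

    import numpy as np

    rng = np.random.default_rng(1)
    d = 16                                  # embedding size (assumed)

    W = rng.normal(size=(d, d, d)) / d      # shared third-order composition tensor
    pred = rng.normal(size=d)               # embedding of a predicate
    subj = rng.normal(size=d)               # embedding of one of its entities

    # event_k = sum_{i,j} W[i, j, k] * pred[i] * subj[j]
    # The multiplicative interaction lets the same predicate compose differently
    # with different entities.
    event = np.einsum('ijk,i,j->k', W, pred, subj)

    # Continuous event vectors can then be compared, e.g. with cosine similarity,
    # which is the kind of signal a schema generation method can cluster over.
    other = np.einsum('ijk,i,j->k', W, pred, rng.normal(size=d))
    cos = float(event @ other / (np.linalg.norm(event) * np.linalg.norm(other)))
    print(event.shape, round(cos, 3))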

    Extending Explanation-Based Learning: Failure-Driven Schema Refinement

    Coordinated Science Laboratory was formerly known as Control Systems Laboratory. Office of Naval Research / N00014-86-K-030

    Integrated Learning of Words and Their Underlying Concepts

    Coordinated Science Laboratory was formerly known as Control Systems Laboratory. Office of Naval Research / N00014-86-K-030

    Schema Acquisition from One Example: Psychological Evidence for Explanation-Based Learning

    Coordinated Science Laboratory was formerly known as Control Systems Laboratory. Office of Naval Research / N00014-86-K-0309. University of Illinois Cognitive Science/AI fellowship

    Building and Refining Abstract Planning Cases by Change of Representation Language

    Abstraction is one of the most promising approaches to improving the performance of problem solvers. In several domains, abstraction by dropping sentences of a domain description -- as used in most hierarchical planners -- has proven useful. In this paper we present examples which illustrate significant drawbacks of abstraction by dropping sentences. To overcome these drawbacks, we propose a more general view of abstraction involving the change of representation language. We have developed a new abstraction methodology and a related sound and complete learning algorithm that allows the complete change of representation language of planning cases from concrete to abstract. However, to achieve a powerful change of the representation language, the abstract language itself as well as rules which describe admissible ways of abstracting states must be provided in the domain model. This new abstraction approach is the core of Paris (Plan Abstraction and Refinement in an Integrated System), a system in which abstract planning cases are automatically learned from given concrete cases. An empirical study in the domain of process planning in mechanical engineering shows significant advantages of the proposed reasoning from abstract cases over classical hierarchical planning. Comment: See http://www.jair.org/ for an online appendix and other files accompanying this article
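
    The approach hinges on rewriting concrete-language facts into a separate abstract language via admissible abstraction rules supplied with the domain model. The toy sketch below illustrates that idea; the predicates, the single rule, and the set-of-facts state encoding are invented for illustration and are not taken from Paris or its process-planning domain.

    from typing import Callable, FrozenSet, List, Tuple

    State = FrozenSet[Tuple[str, ...]]   # a state is a set of ground facts
    Rule = Callable[[State], State]      # an abstraction rule maps concrete facts to abstract facts

    def rule_surface_quality(state: State) -> State:
        # Example rule: fine-grained roughness measurements in the concrete
        # language become a single qualitative fact in the abstract language.
        if any(f[0] == "roughness" and float(f[2]) < 1.0 for f in state):
            return frozenset({("smooth-surface", "part1")})
        return frozenset()

    def abstract_state(state: State, rules: List[Rule]) -> State:
        # The abstract state is whatever the admissible rules license,
        # not merely a subset of the concrete sentences.
        abstract: set = set()
        for rule in rules:
            abstract |= rule(state)
        return frozenset(abstract)

    concrete = frozenset({("roughness", "part1", "0.4"), ("clamped", "part1", "vice")})
    print(abstract_state(concrete, [rule_surface_quality]))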

    A Domain Independent Explanation-Based Generalizer

    Coordinated Science Laboratory was formerly known as Control Systems Laboratory. ONR / N00014-86-K-0309. National Science Foundation / IST-83-1788