4,797 research outputs found

    A Linguistically-motivated 2-stage Tree to Graph Transformation

    We propose a new model for transforming dependency trees into target graphs, relying on two distinct stages. During the first stage, standard local tree-transformation rules based on patterns are applied to collect a first set of constrained edges to be added to the target graph. In the second stage, motivated by linguistic considerations, the constraints on edges may be used to displace them or their neighbouring edges upwards, or to build new mirror edges. The main advantages of this model are a simpler transformation-scheme design, with a smaller set of simpler local rules for the first stage, and good termination and confluence properties for the second stage.
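A minimal sketch of the two-stage scheme described above, assuming toy rule patterns and a toy "raise" constraint (none of the names are from the paper): stage 1 applies local pattern rules to collect constrained edges, and stage 2 resolves the constraint by displacing the edge upwards to the head's parent.

```python
# Illustrative sketch only: the rule pattern, the "raise" constraint, and
# all names below are hypothetical, not taken from the paper.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Edge:
    src: str                          # head node
    dst: str                          # dependent node
    label: str
    constraint: Optional[str] = None  # e.g. "raise": may be displaced upwards

def stage1_local_rules(tree_edges):
    """Stage 1: apply local pattern rules, collecting constrained edges."""
    out = []
    for e in tree_edges:
        if e.label == "aux":          # toy pattern: auxiliary edges may be raised
            out.append(Edge(e.src, e.dst, e.label, constraint="raise"))
        else:
            out.append(e)
    return out

def stage2_resolve(edges, parent):
    """Stage 2: displace each "raise"-constrained edge to the head's parent."""
    resolved = []
    for e in edges:
        if e.constraint == "raise" and e.src in parent:
            resolved.append(Edge(parent[e.src], e.dst, e.label))
        else:
            resolved.append(Edge(e.src, e.dst, e.label))
    return resolved
```

Applying both stages to a tree where `seen` heads an `aux` edge to `was`, with `parent = {"seen": "ROOT"}`, moves that edge up to `ROOT` while the other edges stay in place.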


    A Logic-based Approach for Recognizing Textual Entailment Supported by Ontological Background Knowledge

    We present the architecture and the evaluation of a new system for recognizing textual entailment (RTE). In RTE, the goal is to automatically identify the type of logical relation between two input texts, and in particular to prove the existence of an entailment between them. We conceive our system as a modular environment allowing for high-coverage syntactic and semantic text analysis combined with logical inference. For the syntactic and semantic analysis, we combine a deep semantic analysis with a shallow one supported by statistical models in order to increase the quality and accuracy of the results. For RTE we use first-order logical inference, employing model-theoretic techniques and automated reasoning tools. The inference is supported by problem-relevant background knowledge extracted automatically and on demand from external sources such as WordNet, YAGO, and OpenCyc, or from other, more experimental sources, e.g., manually defined presupposition resolutions or axiomatized general and common-sense knowledge. The results show that fine-grained and consistent knowledge from diverse sources is a necessary condition for the correctness and traceability of results. Comment: 25 pages, 10 figures
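A hedged, propositional toy version of the inference step described above (the actual system uses first-order model-theoretic reasoning; all facts and rules here are invented): the hypothesis counts as entailed when its facts hold in the closure of the text's facts under background-knowledge rules.

```python
# Propositional toy version of the logical-inference step; the real system
# works in first-order logic. Facts and rules below are illustrative only.
def forward_chain(facts, rules):
    """Saturate a fact set under rules of the form (premises, conclusion)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def entails(text_facts, hypothesis_facts, background_rules):
    """True iff every hypothesis fact follows from text + background."""
    return set(hypothesis_facts) <= forward_chain(text_facts, background_rules)
```

With a WordNet-style hypernymy chain such as `(["poodle"], "dog")` and `(["dog"], "animal")`, the text fact `poodle` entails the hypothesis fact `animal`, but not an unrelated fact.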

    Constraint Based Hybrid Approach to Parsing Indian Languages

    PACLIC 23 / City University of Hong Kong / 3-5 December 200

    A syntactic language model based on incremental CCG parsing

    Syntactically enriched language models (parsers) constitute a promising component in applications such as machine translation and speech recognition. To maintain a useful level of accuracy, existing parsers are non-incremental and must span a combinatorially growing space of possible structures as every input word is processed. This prohibits their incorporation into standard linear-time decoders. In this paper, we present an incremental, linear-time dependency parser based on Combinatory Categorial Grammar (CCG) and classification techniques. We devise a deterministic transform of CCGbank canonical derivations into incremental ones, and train our parser on this data. We find that a cascaded, incremental version provides an appealing balance between efficiency and accuracy.
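The linear-time property described above can be sketched as follows; the action classifier here is a trivial stand-in for the paper's trained CCG-based classifier, and the attachment policy is invented purely for illustration.

```python
# Sketch of an incremental, linear-time parsing loop; classify() is a stub
# standing in for a trained classifier, and its policy is illustrative only.
def classify(stack, word):
    """Stub classifier: attach the incoming word to the stack top, if any."""
    return "attach" if stack else "shift"

def parse_incremental(words):
    """One classifier decision per word, so parsing time is linear."""
    stack, arcs = [], []
    for w in words:
        if classify(stack, w) == "attach":
            arcs.append((stack[-1], w))  # head = stack top, dependent = new word
        stack.append(w)
    return arcs
```

The point of the sketch is structural: each word triggers a bounded amount of work, so no combinatorially growing space of partial structures is ever maintained.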

    Visual Semantic Parsing: From Images to Abstract Meaning Representation

    The success of scene graphs for visual scene understanding has brought attention to the benefits of abstracting a visual input (e.g., an image) into a structured representation, where entities (people and objects) are nodes connected by edges specifying their relations. Building these representations, however, requires expensive manual annotation in the form of images paired with their scene graphs or frames. These formalisms remain limited in the nature of entities and relations they can capture. In this paper, we propose to leverage a widely used meaning representation in the field of natural language processing, the Abstract Meaning Representation (AMR), to address these shortcomings. Compared to scene graphs, which largely emphasize spatial relationships, our visual AMR graphs are more linguistically informed, with a focus on higher-level semantic concepts extrapolated from visual input. Moreover, they allow us to generate meta-AMR graphs to unify information contained in multiple image descriptions under one representation. Through extensive experimentation and analysis, we demonstrate that we can re-purpose an existing text-to-AMR parser to parse images into AMRs. Our findings point to important future research directions for improved scene understanding. Comment: published in CoNLL 202
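A hedged sketch of the pipeline the abstract outlines (both model functions are stubs, and the captions, graph format, and merge rule are invented): describe the image in text, run an existing text-to-AMR parser over each description, and unify the resulting graphs into one meta-representation.

```python
# Pipeline sketch only: both model functions are stubs, and the word-pair
# "graphs" below merely stand in for real AMR graphs.
def caption_image(image):
    """Stub for an image-captioning model."""
    return ["a dog chases a ball", "a dog rests in a park"]

def text_to_amr(sentence):
    """Stub text-to-AMR parser: one edge per adjacent word pair."""
    w = sentence.split()
    return {(w[i], w[i + 1]) for i in range(len(w) - 1)}

def meta_graph(image):
    """Unify the per-caption graphs into one meta-representation."""
    return set().union(*(text_to_amr(c) for c in caption_image(image)))
```

The merge step illustrates the meta-AMR idea at its simplest: information from multiple descriptions of the same image ends up in a single graph.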