196 research outputs found

    AMR Dependency Parsing with a Typed Semantic Algebra

    We present a semantic parser for Abstract Meaning Representations which learns to parse strings into tree representations of the compositional structure of an AMR graph. This allows us to use standard neural techniques for supertagging and dependency tree parsing, constrained by a linguistically principled type system. We present two approximative decoding algorithms, which achieve state-of-the-art accuracy and outperform strong baselines. Comment: This paper will be presented at ACL 2018 (see https://acl2018.org/programme/papers/).
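
    As a rough illustration of the kind of constraint such a type system imposes, here is a minimal, hypothetical sketch (not the authors' implementation): each supertag carries a set of open argument slots, and an APP edge for a slot is only admitted while that slot is still open on the head.

```python
# Toy sketch of type-constrained application; all names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Supertag:
    """A lexical graph fragment with named argument slots, e.g. {'s', 'o'}."""
    label: str
    open_slots: set = field(default_factory=set)

def can_apply(head: Supertag, slot: str) -> bool:
    """An APP_slot edge is well-typed only if the head still expects that slot."""
    return slot in head.open_slots

def apply_edge(head: Supertag, slot: str, dependent: Supertag) -> Supertag:
    """Fill the head's slot with the dependent; the result has one slot fewer."""
    if not can_apply(head, slot):
        raise ValueError(f"ill-typed edge: {head.label!r} has no open slot {slot!r}")
    return Supertag(label=f"{head.label}({slot}={dependent.label})",
                    open_slots=head.open_slots - {slot})

# Toy usage: 'sleeps' expects a subject slot 's'; attaching 'cat' via APP_s is
# well-typed, while a second subject attachment would raise an error.
verb = Supertag("sleeps", {"s"})
noun = Supertag("cat")
result = apply_edge(verb, "s", noun)
print(result.label, result.open_slots)   # -> sleeps(s=cat) set()
```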

    Sponsoring, brand value and social media

    The increasing involvement of individuals in social media over the past decade has enabled firms to pursue new avenues in communication and sponsoring activities. Besides general research on either social media or sponsoring, questions regarding the consequences of a joint activity (sponsoring activities in social media) remain unexplored. Hence, the present study analyses whether the perceived image of the brand and the celebrity endorser credibility of a top sports team influence the perceived brand value of the sponsoring firm in a social media setting. Moreover, these effects are compared between existing customers and non-customers of the sponsoring firm. Interestingly, perceived celebrity endorser credibility plays no role in forming brand value perceptions in the case of the existing customers. Implications for marketing theory and practice are derived. (authors' abstract)

    Fast semantic parsing with well-typedness guarantees

    AM dependency parsing is a linguistically principled method for neural semantic parsing with high accuracy across multiple graphbanks. It relies on a type system that models semantic valency but makes existing parsers slow. We describe an A* parser and a transition-based parser for AM dependency parsing which guarantee well-typedness and improve parsing speed by up to 3 orders of magnitude, while maintaining or improving accuracy. Comment: Accepted at EMNLP 2020, camera-ready version.
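
    As a rough sketch of how well-typedness can be enforced during search, the toy A*-style agenda below prunes ill-typed expansions so that every complete parse it returns is well-typed; the cost, heuristic, and expansion callbacks are hypothetical placeholders rather than the paper's actual components.

```python
# Generic A*-style agenda sketch; not the paper's parser.
import heapq
import itertools

def astar_parse(n_tokens, edge_cost, heuristic, is_well_typed, expand):
    """Best-first search over partial parses.

    `heuristic` is assumed to be an admissible lower bound on the remaining
    cost, so the first complete parse popped from the agenda is the cheapest
    well-typed parse. All four callbacks are hypothetical placeholders.
    """
    tie = itertools.count()                    # tie-breaker; avoids comparing states
    start = (frozenset(), ())                  # (attached token ids, chosen edges)
    agenda = [(heuristic(start), next(tie), 0.0, start)]
    seen = set()
    while agenda:
        _, _, cost, state = heapq.heappop(agenda)
        if state in seen:
            continue
        seen.add(state)
        attached, edges = state
        if len(attached) == n_tokens:
            return edges                       # goal: every token is attached
        for edge, new_state in expand(state):
            if not is_well_typed(new_state):
                continue                       # prune ill-typed continuations early
            new_cost = cost + edge_cost(edge)
            heapq.heappush(agenda, (new_cost + heuristic(new_state),
                                    next(tie), new_cost, new_state))
    return None                                # no well-typed parse found
```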

    Compositional Generalisation with Structured Reordering and Fertility Layers

    Seq2seq models have been shown to struggle with compositional generalisation, i.e. generalising to new and potentially more complex structures than seen during training. Taking inspiration from grammar-based models that excel at compositional generalisation, we present a flexible end-to-end differentiable neural model that composes two structural operations: a fertility step, which we introduce in this work, and a reordering step based on previous work (Wang et al., 2021). Our model outperforms seq2seq models by a wide margin on challenging compositional splits of realistic semantic parsing tasks that require generalisation to longer examples. It also compares favourably to other models targeting compositional generalisation.
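
    The composition of the two operations can be pictured with a deliberately simplified, non-differentiable toy (the actual model uses differentiable relaxations of both steps, and the fertilities and the permutation are predicted rather than hand-picked, as below):

```python
# Hand-picked toy illustration of fertility followed by reordering.
def fertility_step(tokens, fertilities):
    """Copy each token 0..k times according to its (here hand-picked) fertility."""
    return [tok for tok, f in zip(tokens, fertilities) for _ in range(f)]

def reordering_step(tokens, permutation):
    """Rearrange the expanded sequence according to a permutation."""
    return [tokens[i] for i in permutation]

src = ["book", "flight", "to", "Boston"]
expanded = fertility_step(src, [1, 1, 0, 2])      # drop 'to', copy 'Boston' twice
output = reordering_step(expanded, [2, 0, 1, 3])  # move one 'Boston' copy to the front
print(expanded)   # ['book', 'flight', 'Boston', 'Boston']
print(output)     # ['Boston', 'book', 'flight', 'Boston']
```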

    Strengthening structural inductive biases by pre-training to perform syntactic transformations

    Models need appropriate inductive biases to effectively learn from small amounts of data and generalize systematically outside of the training distribution. While Transformers are highly versatile and powerful, they can still benefit from enhanced structural inductive biases for seq2seq tasks, especially those involving syntactic transformations, such as converting active to passive voice or semantic parsing. In this paper, we propose to strengthen the structural inductive bias of a Transformer by intermediate pre-training to perform synthetically generated syntactic transformations of dependency trees given a description of the transformation. Our experiments confirm that this helps with few-shot learning of syntactic tasks such as chunking, and also improves structural generalization for semantic parsing. Our analysis shows that the intermediate pre-training leads to attention heads that keep track of which syntactic transformation needs to be applied to which token, and that the model can leverage these attention heads on downstream tasks.
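
    A hedged sketch of what a single synthetic pre-training example might look like, using an invented tree format and transformation description (the paper's actual transformation language and data format may differ):

```python
# Invented toy format: a flat "tree" with a root word and labelled children.
def swap_children(tree, rel_a, rel_b):
    """Return a copy of the tree with the subtrees under rel_a and rel_b swapped."""
    children = dict(tree["children"])
    children[rel_a], children[rel_b] = children[rel_b], children[rel_a]
    return {"root": tree["root"], "children": children}

def linearize(tree, order):
    """Read the tree out as a token sequence in the given left-to-right order."""
    parts = [tree["root"] if rel == "root" else tree["children"][rel] for rel in order]
    return " ".join(parts)

tree = {"root": "chased", "children": {"nsubj": "the dog", "obj": "the cat"}}
description = "swap nsubj and obj"
source = linearize(tree, ["nsubj", "root", "obj"])
target = linearize(swap_children(tree, "nsubj", "obj"), ["nsubj", "root", "obj"])
print(f"{description} ||| {source} -> {target}")
# swap nsubj and obj ||| the dog chased the cat -> the cat chased the dog
```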

    Christian Koller interviewed by Matthias Venetz, Neue Zürcher Zeitung, 01.02.2024: "Germans strike more often than the Swiss"

    Starting Thursday (01.02.2024), the trade union Verdi is striking at German airports, and the wave of strikes in Germany continues. In Switzerland, something like this would be unthinkable, says Christian Koller, director of the Schweizerisches Sozialarchiv.