
    Use Generalized Representations, But Do Not Forget Surface Features

    Only a year ago, all state-of-the-art coreference resolvers were using an extensive amount of surface features. Recently, there was a paradigm shift towards using word embeddings and deep neural networks, where the use of surface features is very limited. In this paper, we show that a simple SVM model with surface features outperforms more complex neural models for detecting anaphoric mentions. Our analysis suggests that generalized representations and surface features have different strengths that should both be taken into account for improving coreference resolution. Comment: CORBON workshop @ EACL 2017
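
    As a rough illustration of the surface-feature approach the abstract contrasts with neural models, here is a minimal sketch of a linear SVM over a few hand-written surface features; scikit-learn is assumed, and the feature set, mention format, and toy data are my own, not the paper's.

# Illustrative sketch only: a linear SVM over simple surface features for
# anaphoric-mention detection. Feature names and data format are assumptions.
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

def surface_features(mention):
    """Map one mention (a dict with hypothetical fields) to surface features."""
    return {
        "head=" + mention["head"].lower(): 1.0,               # lexical head form
        "first_token=" + mention["tokens"][0].lower(): 1.0,   # first token form
        "length": float(len(mention["tokens"])),              # mention length in tokens
        "is_pronoun": float(mention["pos"][0] == "PRP"),      # pronominal mention
        "definite": float(mention["tokens"][0].lower() in {"the", "this", "that"}),
    }

# Toy training pairs (mention, is_anaphoric); real data would come from a
# coreference-annotated corpus such as CoNLL-2012.
train = [
    ({"tokens": ["he"], "pos": ["PRP"], "head": "he"}, 1),
    ({"tokens": ["the", "company"], "pos": ["DT", "NN"], "head": "company"}, 1),
    ({"tokens": ["a", "new", "report"], "pos": ["DT", "JJ", "NN"], "head": "report"}, 0),
]

model = make_pipeline(DictVectorizer(), LinearSVC())
model.fit([surface_features(m) for m, _ in train], [y for _, y in train])
print(model.predict([surface_features({"tokens": ["it"], "pos": ["PRP"], "head": "it"})]))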

    The definiteness hierarchy and strength of anaphoric link in Polish

    Chapter Fourteen. The task entitled "Digitalizacja i udostępnienie w Cyfrowym Repozytorium Uniwersytetu Łódzkiego kolekcji czasopism naukowych wydawanych przez Uniwersytet Łódzki" (Digitization and provision, in the Digital Repository of the University of Łódź, of the collection of scientific journals published by the University of Łódź), no. 885/P-DUN/2014, was co-financed from the funds of the Ministry of Science and Higher Education (MNiSW) as part of its science-dissemination activities.

    Notions of focus anaphoricity

    This article reviews some of the theoretical notions and empirical phenomena which figure in current formal-semantic theories of focus. It also develops the connection between "alternative semantics" and "givenness" accounts of focus interpretation.
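
    For readers unfamiliar with the two families of accounts the abstract mentions, a compressed illustration in my own notation (not the article's) of a Roothian alternative-semantics focus value and a givenness-style licensing condition:

% Illustrative notation, not the article's own formalism.
% Ordinary vs. focus (alternative) semantic value of "[JOHN]_F left":
\[
  [\![\, [\mathrm{John}]_F \ \mathrm{left} \,]\!]^{o} = \mathbf{left}(\mathbf{j}),
  \qquad
  [\![\, [\mathrm{John}]_F \ \mathrm{left} \,]\!]^{f} = \{\, \mathbf{left}(x) \mid x \in D_e \,\}.
\]
% A givenness-style condition then requires an antecedent whose content entails
% the existential closure of the focus-free part, here \exists x.\,\mathbf{left}(x).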

    Combining Dependency and Constituent-based Syntactic Information for Anaphoricity Determination in Coreference Resolution


    Reviving the parameter revolution in semantics

    Montague and Kaplan began a revolution in semantics, which promised to explain how a univocal expression could make distinct truth-conditional contributions in its various occurrences. The idea was to treat context as a parameter at which a sentence is semantically evaluated. But the revolution has stalled. One salient problem comes from recurring demonstratives: "He is tall and he is not tall". For the sentence to be true at a context, each occurrence of the demonstrative must make a different truth-conditional contribution. But this difference cannot be accounted for by standard parameter sensitivity. Semanticists, consoled by the thought that this ambiguity would ultimately be needed anyhow to explain anaphora, have been too content to posit massive ambiguities in demonstrative pronouns. This article aims to revive the parameter revolution by showing how to treat demonstrative pronouns as univocal while providing an account of anaphora that doesn't end up re-introducing the ambiguity.
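
    A schematic rendering, in my own notation rather than the article's, of why plain context-parameter sensitivity struggles with the recurring demonstrative:

% If the demonstrative's value is fixed once by the context parameter c,
% both occurrences receive the same individual:
\[
  [\![\, \mathrm{he} \,]\!]^{c} = \mathrm{dem}(c)
  \quad\Longrightarrow\quad
  [\![\, \mathrm{He\ is\ tall\ and\ he\ is\ not\ tall} \,]\!]^{c}
  = \mathbf{tall}(\mathrm{dem}(c)) \land \lnot\, \mathbf{tall}(\mathrm{dem}(c)),
\]
% which is false at every context: the two occurrences cannot be made to
% contribute different individuals by the context parameter alone.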

    Graph-Cut-Based Anaphoricity Determination for Coreference Resolution

    Recent work has shown that explicitly identifying and filtering non-anaphoric mentions prior to coreference resolution can improve the performance of a coreference system. We present a novel approach to this task of anaphoricity determination based on graph cuts, and demonstrate its superiority to competing approaches by comparing their effectiveness in improving a learning-based coreference system on the ACE data sets.
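
    As an illustration of the general idea rather than the paper's actual construction, here is a minimal sketch of casting anaphoricity decisions as a minimum s-t cut: unary capacities come from a per-mention confidence and pairwise capacities push same-head mentions toward the same label (networkx assumed; scores and features are toy values).

# Illustrative graph-cut formulation for anaphoricity determination
# (my construction, not necessarily the paper's): mentions are nodes between
# a source ("anaphoric") and a sink ("non-anaphoric"); unary capacities encode
# per-mention confidence, pairwise capacities encourage same-head mentions to
# receive the same label.
import networkx as nx

def label_by_min_cut(mentions, p_anaphoric, same_head_weight=0.5):
    """mentions: list of head strings; p_anaphoric: mention index -> probability."""
    g = nx.DiGraph()
    for i, _ in enumerate(mentions):
        g.add_edge("SRC", i, capacity=p_anaphoric[i])        # cost of labelling i non-anaphoric
        g.add_edge(i, "SNK", capacity=1.0 - p_anaphoric[i])  # cost of labelling i anaphoric
    for i, hi in enumerate(mentions):
        for j, hj in enumerate(mentions):
            if i < j and hi == hj:                           # pairwise agreement term
                g.add_edge(i, j, capacity=same_head_weight)
                g.add_edge(j, i, capacity=same_head_weight)
    _, (src_side, _) = nx.minimum_cut(g, "SRC", "SNK")
    return {i: (i in src_side) for i, _ in enumerate(mentions)}  # True = anaphoric

# The pairwise term flips mention 2 ("company", weak score) to anaphoric
# because it shares a head with the confidently anaphoric mention 1.
print(label_by_min_cut(["he", "company", "company", "report"],
                       {0: 0.9, 1: 0.7, 2: 0.4, 3: 0.2}))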

    Joint Anaphoricity Detection and Coreference Resolution with Constrained Latent Structures

    Get PDF
    This paper introduces a new structured model for learning anaphoricity detection and coreference resolution in a joint fashion. Specifically, we use a latent tree to represent the full coreference and anaphoric structure of a document at a global level, and we jointly learn the parameters of the two models using a version of the structured perceptron algorithm. Our joint structured model is further refined by the use of pairwise constraints which help the model to capture accurately certain patterns of coreference. Our experiments on the CoNLL-2012 English datasets show large improvements in both coreference resolution and anaphoricity detection, compared to various competing architectures. Our best coreference system obtains a CoNLL score of 81.97 on gold mentions, which is to date the best score reported on this setting.
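
    A toy reconstruction of the latent-structure idea (my simplification, not the authors' system or its constraints): each mention links either to an earlier mention or to a ROOT node standing for "non-anaphoric", links are scored by a linear model, and a perceptron update moves the weights toward the best gold-consistent latent link.

# Minimal latent-structured-perceptron sketch in the spirit of the abstract.
# All feature names and the toy document are hypothetical.
from collections import defaultdict

ROOT = -1  # linking to ROOT means "non-anaphoric / discourse-new"

def features(mentions, i, j):
    """Hypothetical link features for attaching mention i to antecedent j."""
    if j == ROOT:
        return {"root&head=" + mentions[i]: 1.0}
    return {"same_head": float(mentions[i] == mentions[j]),
            "distance": float(i - j)}

def score(w, feats):
    return sum(w[k] * v for k, v in feats.items())

def best_antecedent(w, mentions, i, allowed):
    return max(allowed, key=lambda j: score(w, features(mentions, i, j)))

def perceptron_update(w, mentions, gold_clusters, lr=1.0):
    """One pass: compare the predicted link with the best gold-consistent (latent) link."""
    cluster_of = {m: c for c, ms in enumerate(gold_clusters) for m in ms}
    for i in range(len(mentions)):
        pred = best_antecedent(w, mentions, i, [ROOT] + list(range(i)))
        gold_allowed = [j for j in range(i) if cluster_of[j] == cluster_of[i]] or [ROOT]
        gold = best_antecedent(w, mentions, i, gold_allowed)  # latent: best gold link
        if pred != gold:
            for k, v in features(mentions, i, gold).items():
                w[k] += lr * v
            for k, v in features(mentions, i, pred).items():
                w[k] -= lr * v
    return w

# Toy document: mentions by head word; gold coreference clusters as index sets.
mentions = ["company", "it", "company", "report"]
gold = [{0, 1, 2}, {3}]
w = defaultdict(float)
for _ in range(5):
    perceptron_update(w, mentions, gold)
print({i: best_antecedent(w, mentions, i, [ROOT] + list(range(i)))
       for i in range(len(mentions))})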