
    Joint Anaphoricity Detection and Coreference Resolution with Constrained Latent Structures

    This paper introduces a new structured model for learning anaphoricity detection and coreference resolution in a joint fashion. Specifically, we use a latent tree to represent the full coreference and anaphoric structure of a document at a global level, and we jointly learn the parameters of the two models using a version of the structured perceptron algorithm. Our joint structured model is further refined by the use of pairwise constraints, which help the model accurately capture certain patterns of coreference. Our experiments on the CoNLL-2012 English datasets show large improvements in both coreference resolution and anaphoricity detection, compared to various competing architectures. Our best coreference system obtains a CoNLL score of 81.97 on gold mentions, which is to date the best score reported in this setting.
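
    The latent-tree formulation admits a small sketch. Below is a minimal latent-antecedent structured perceptron in the spirit of the abstract, where each mention either attaches to an antecedent or to a dummy root (marking it non-anaphoric); the feature function and input format are hypothetical stand-ins, not the paper's actual implementation.

        import numpy as np

        def best_tree(mentions, w, features, allowed=None):
            # Pick the highest-scoring antecedent for each mention; index -1
            # is the dummy root, i.e. "non-anaphoric". Restricting candidates
            # to `allowed` gold arcs yields the best gold-consistent tree.
            arcs = []
            for j in range(len(mentions)):
                candidates = list(range(-1, j))
                if allowed is not None:
                    candidates = [a for a in candidates if (a, j) in allowed]
                arcs.append(max(candidates, key=lambda a: w @ features(mentions, a, j)))
            return arcs

        def perceptron_epoch(docs, w, features):
            # docs yields (mentions, gold_arcs); gold_arcs is the set of arcs
            # consistent with the gold clustering, including root arcs, so
            # every mention keeps at least one allowed candidate.
            for mentions, gold_arcs in docs:
                pred = best_tree(mentions, w, features)
                oracle = best_tree(mentions, w, features, allowed=gold_arcs)
                for j, (p, o) in enumerate(zip(pred, oracle)):
                    if p != o:  # standard perceptron update on mismatched arcs
                        w += features(mentions, o, j) - features(mentions, p, j)
            return w

    The pairwise constraints described in the abstract would enter this sketch as additional filters on the candidate sets, pruning antecedents that violate known coreference patterns.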

    Conditional Random Field Autoencoders for Unsupervised Structured Prediction

    We introduce a framework for unsupervised learning of structured predictors with overlapping, global features. Each input's latent representation is predicted conditional on the observable data using a feature-rich conditional random field. Then a reconstruction of the input is (re)generated, conditional on the latent structure, using models for which maximum likelihood estimation has a closed form. Our autoencoder formulation enables efficient learning without making unrealistic independence assumptions or restricting the kinds of features that can be used. We illustrate insightful connections to traditional autoencoders, posterior regularization, and multi-view learning. We show competitive results with instantiations of the model for two canonical NLP tasks, part-of-speech induction and bitext word alignment, and show that training our model can be substantially more efficient than comparable feature-rich baselines.
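
    To make the objective concrete, here is a brute-force sketch of the marginal reconstruction likelihood on a toy input, assuming categorical reconstruction parameters theta[tag][word] and an encoder feature function phi(x, y); all names are illustrative, and real models replace the enumeration with dynamic programming.

        import itertools, math
        import numpy as np

        def crf_autoencoder_objective(x, tags, w, phi, theta):
            # Computes log sum_y p(y | x; w) * p(x_hat = x | y; theta) by
            # enumerating latent tag sequences y (tractable only for toy x).
            scores, recons = [], []
            for y in itertools.product(tags, repeat=len(x)):
                scores.append(w @ phi(x, y))  # CRF potential for the encoder
                recons.append(sum(math.log(theta[t][tok]) for t, tok in zip(y, x)))
            log_z = np.logaddexp.reduce(scores)  # CRF partition function
            return np.logaddexp.reduce(np.array(scores) - log_z + np.array(recons))

    Because the reconstruction model here is categorical, its maximum likelihood estimate given expected latent counts is a closed-form normalization, which is exactly the property the abstract exploits.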

    Advanced models of supervised structural clustering

    The strength of structured prediction approaches in machine learning originates from the proper recognition and exploitation of the inherent structural dependencies within the complex objects that structural models are trained to output. Among the complex tasks that have benefited from structured prediction, clustering is of special interest. Structured output models that represent clusters by latent graph structures made the task of supervised clustering tractable. While in practice these models proved effective in solving the complex NLP task of coreference resolution, in this thesis we explore their capacity to be extended to other tasks and domains, as well as methods for performing such adaptation and for improvement in general; these methods go beyond clustering and are broadly applicable in structured prediction.

    Studying the extensibility of the structural approaches to supervised clustering, we apply them to two different domains in two different ways. First, in the networking domain, we cluster network traffic, adapting the model to account for the continuity of incoming data. Our experiments demonstrate that the structural clustering approach is not only effective in this scenario but, viewed from a different perspective, also provides a novel and potentially useful tool for detecting anomalies. The other part of our work assesses the amenability of the structural clustering model to joint learning with another structural model, for ranking. Our preliminary analysis, in the context of answer-passage reranking for question answering, reveals a potential benefit from incorporating auxiliary clustering structures.

    The intrinsic complexity of the clustering task, and of its evaluation scenarios, gave us grounds to study the possibility and effect of optimizing task-specific complex measures in structured prediction algorithms. It is common for structured prediction approaches to optimize surrogate loss functions rather than the actual task-specific ones, in order to facilitate inference and preserve efficiency. In this thesis we first study when surrogate losses are sufficient and, second, make a step towards enabling direct optimization of complex structural loss functions. We propose to learn an approximation of a complex loss by a regressor from data. We formulate a general structural framework for learning with a learned loss which, applied to a particular clustering problem, coreference resolution, i) enables the optimization of a coreference metric that by itself has high computational complexity, and ii) delivers an improvement over standard structural models optimizing simple surrogate objectives. We foresee this idea being helpful in many structured prediction applications, also as a means of adaptation to specific evaluation scenarios, and especially when a good loss approximation is found by a regressor over an induced feature space that factorizes well over the underlying structure.
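
    The learned-loss idea admits a compact sketch: fit a regressor so that it approximates an expensive structural metric from cheap, decomposable features of a candidate output, then use the regressor's prediction as the training loss. The feature map psi and the metric function below are assumptions for illustration, not the thesis's exact formulation.

        import numpy as np
        from sklearn.linear_model import Ridge

        def fit_loss_regressor(candidates, golds, psi, expensive_metric):
            # Learn g such that g(psi(y, y_gold)) approximates
            # expensive_metric(y, y_gold), so the loss becomes cheap to
            # evaluate (and to factorize) inside structured training.
            X = np.array([psi(y, g) for y, g in zip(candidates, golds)])
            t = np.array([expensive_metric(y, g) for y, g in zip(candidates, golds)])
            reg = Ridge(alpha=1.0).fit(X, t)
            return lambda y, y_gold: float(reg.predict([psi(y, y_gold)])[0])

    The returned function can then stand in for the exact metric during loss-augmented inference, trading exactness for tractability.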

    A Unified View of Evaluation Metrics for Structured Prediction

    We present a conceptual framework that unifies a variety of evaluation metrics for different structured prediction tasks (e.g. event and relation extraction, syntactic and semantic parsing). Our framework requires representing the outputs of these tasks as objects of certain data types, and derives metrics through matching of common substructures, possibly followed by normalization. We demonstrate how commonly used metrics for a number of tasks can be succinctly expressed by this framework, and show that new metrics can be naturally derived in a bottom-up way based on an output structure. We release a library that enables this derivation to create new metrics. Finally, we consider how specific characteristics of tasks motivate metric design decisions, and suggest possible modifications to existing metrics in line with those motivations. Comment: Accepted at EMNLP 2023 Main Track
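
    The matching view can be illustrated in a few lines: decompose each output into a set of substructures and score the overlap. The decomposition below (dependency arcs as head-dependent-label triples) is a toy stand-in, not the released library's API.

        def prf(pred_parts, gold_parts):
            # Precision, recall, and F1 over matched substructures.
            matched = len(set(pred_parts) & set(gold_parts))
            p = matched / len(pred_parts) if pred_parts else 0.0
            r = matched / len(gold_parts) if gold_parts else 0.0
            f = 2 * p * r / (p + r) if p + r else 0.0
            return p, r, f

        # Labeled attachment score falls out of decomposing a parse into
        # (head, dependent, label) triples:
        pred = [(0, 1, "nsubj"), (0, 2, "obj")]
        gold = [(0, 1, "nsubj"), (0, 2, "iobj")]
        print(prf(pred, gold))  # (0.5, 0.5, 0.5)

    Swapping the decomposition (spans for NER, entity-relation tuples for relation extraction) yields the corresponding task metric, which is the unification the paper describes.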

    Improving Coreference Resolution by Leveraging Entity-Centric Features with Graph Neural Networks and Second-order Inference

    One of the major challenges in coreference resolution is how to make use of entity-level features defined over clusters of mentions rather than mention pairs. However, coreferent mentions are usually spread far apart across a text, which makes it extremely difficult to incorporate entity-level features. We propose a graph neural network-based coreference resolution method that captures entity-centric information by encouraging the sharing of features across all mentions that probably refer to the same real-world entity. Mentions are linked to each other via edges that model how likely the two linked mentions are to point to the same entity. With such graphs, features can be shared between mentions through message-passing operations in an entity-centric manner. A global inference algorithm using up to second-order features is also presented to optimally cluster mentions into consistent groups. Experimental results show that our graph neural network-based method, combined with the second-order decoding algorithm (named GNNCR), achieved close to state-of-the-art performance on the English CoNLL-2012 Shared Task dataset.
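
    The entity-centric sharing can be sketched as weighted message passing over the mention graph, assuming mention embeddings H and pairwise link probabilities A from a mention-pair scorer; shapes and names are illustrative rather than GNNCR's exact architecture.

        import torch

        def entity_message_passing(H, A, W, steps=2):
            # Each mention aggregates features from mentions it is likely
            # coreferent with, weighted by link probability, then mixes the
            # result with its own representation.
            A = A / A.sum(dim=-1, keepdim=True).clamp(min=1e-8)  # row-normalize
            for _ in range(steps):
                M = A @ H                    # gather from probable coreferents
                H = torch.relu((H + M) @ W)  # combine and transform
            return H

        H = torch.randn(5, 16)   # 5 mentions, 16-dim features
        A = torch.rand(5, 5)     # pairwise link scores
        W = torch.randn(16, 16)  # shared transform
        print(entity_message_passing(H, A, W).shape)  # torch.Size([5, 16])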