Review of coreference resolution in English and Persian
Coreference resolution (CR) is one of the most challenging areas of natural
language processing. This task seeks to identify all textual references to the
same real-world entity. Research in this field is divided into coreference
resolution and anaphora resolution. Because of its application in textual
comprehension and its utility in downstream tasks such as information extraction,
document summarization, and machine translation, this field has attracted
considerable interest, and its performance has a significant effect on
the quality of those systems. This article reviews the existing corpora and
evaluation metrics in this field. Then, an overview of the coreference
algorithms, from rule-based methods to the latest deep learning techniques, is
provided. Finally, coreference resolution and pronoun resolution systems in
Persian are investigated.
Comment: 44 pages, 11 figures, 5 tables
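To make the task described in this abstract concrete, the following toy sketch clusters textual mentions that refer to the same real-world entity. The mentions and coreference links are hand-supplied invented data; a real system (and none of the systems surveyed above) must detect both automatically.

```python
# Toy illustration of coreference resolution: group mentions that refer to
# the same entity into clusters, given pairwise coreference decisions.

def cluster_mentions(mentions, links):
    """Group mention indices into entity clusters via union-find."""
    parent = list(range(len(mentions)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for a, b in links:               # each link: "mention a corefers with b"
        parent[find(a)] = find(b)

    clusters = {}
    for i, m in enumerate(mentions):
        clusters.setdefault(find(i), []).append(m)
    return list(clusters.values())

mentions = ["Marie Curie", "she", "the physicist", "Warsaw", "the city"]
links = [(1, 0), (2, 0), (4, 3)]     # pronoun/NP -> antecedent decisions
print(cluster_mentions(mentions, links))
# -> [['Marie Curie', 'she', 'the physicist'], ['Warsaw', 'the city']]
```

The union-find structure mirrors how pairwise antecedent decisions are merged into entity clusters in mention-pair models.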
A multi-level methodology for the automated translation of a coreference resolution dataset: an application to the Italian language
In the last decade, the demand for readily accessible corpora has touched all areas of natural language processing, including
coreference resolution. However, it remains one of the least considered sub-fields in recent developments, and almost all
existing resources are available only for English. To address this gap, this work proposes a methodology to
create a corpus for coreference resolution in Italian by exploiting annotated resources in other languages.
Starting from OntoNotes, the methodology translates and refines English utterances to obtain utterances that respect Italian
grammar, dealing with language-specific phenomena while preserving coreferences and mentions. A quantitative and qualitative
evaluation is performed to assess the well-formedness of the generated utterances, considering readability, grammaticality,
and acceptability indexes. The results confirm the effectiveness of the methodology in generating a good
dataset for coreference resolution from an existing one. The quality of the dataset is also assessed by training a
coreference resolution model based on the BERT language model, which achieves promising results. Although the methodology
has been tailored to English and Italian, it has a general basis that is easily extendable to other languages by adapting a
small number of language-dependent rules to cover most of the linguistic phenomena of the language under
examination.
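The core idea of carrying annotations through translation can be sketched as annotation projection: given word alignments between a source sentence and its translation, map an annotated mention span onto the target side. The sentences, alignments, and function below are invented for illustration and are not the paper's actual procedure, which involves additional refinement steps.

```python
# Sketch of annotation projection across a translation pair.
# A mention span is (start, end) token indices, inclusive; the alignment is
# a list of (source_index, target_index) links. All data here is invented.

def project_span(span, alignment):
    """Map a source token span to the smallest covering target span."""
    start, end = span
    targets = [t for s, t in alignment if start <= s <= end]
    if not targets:
        return None                  # mention dropped in translation
    return (min(targets), max(targets))

en = "the president spoke yesterday".split()
it = "il presidente ha parlato ieri".split()
alignment = [(0, 0), (1, 1), (2, 2), (2, 3), (3, 4)]  # (en_idx, it_idx)

mention_en = (0, 1)                              # "the president"
mention_it = project_span(mention_en, alignment)
print(it[mention_it[0]:mention_it[1] + 1])       # -> ['il', 'presidente']
```

Projecting every mention this way keeps coreference chains intact as long as each mention survives translation with a non-empty alignment.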
Attribution: a computational approach
Our society is overwhelmed with an ever-growing amount of information. Effective
management of this information requires novel ways to filter and select the most relevant
pieces of information. Some of this information can be associated with the source
or sources expressing it. Sources and their relation to what they express affect information
and whether we perceive it as relevant, biased or truthful. In news texts in
particular, it is common practice to report third-party statements and opinions. Recognizing
relations of attribution is therefore a necessary step toward detecting statements
and opinions of specific sources and selecting and evaluating information on the basis
of its source.
The automatic identification of Attribution Relations has applications in numerous
research areas. Quotation and opinion extraction, discourse and factuality have
all partly addressed the annotation and identification of Attribution Relations. However,
disjoint efforts have provided a partial and partly inaccurate picture of attribution.
Moreover, these research efforts have generated small or incomplete resources, thus
limiting the applicability of machine learning approaches. Existing approaches to extract
Attribution Relations have focused on rule-based models, which are limited both
in coverage and precision.
This thesis presents a computational approach to attribution that recasts attribution
extraction as the identification of the attributed text, its source and the lexical cue linking
them in a relation. Drawing on preliminary data-driven investigation, I present a
comprehensive lexicalised approach to attribution and further refine and test a previously
defined annotation scheme. The scheme has been used to create a corpus annotated
with Attribution Relations, with the goal of contributing a large and complete
resource that can lay the foundations for future attribution studies.
Based on this resource, I developed a system for the automatic extraction of attribution
relations that surpasses traditional syntactic pattern-based approaches. The system
is a pipeline of classification and sequence labelling models that identify and link each
of the components of an attribution relation. The results show concrete opportunities
for attribution-based applications.
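An Attribution Relation, as recast above, links three components: the attributed text, its source, and the lexical cue. The toy matcher below only illustrates that triple structure on invented sentences; it is precisely the kind of brittle pattern-based approach the thesis moves beyond with learned classification and sequence-labelling models.

```python
# Toy view of an Attribution Relation as a (source, cue, content) triple,
# extracted with a simple pattern. Illustrative only: real systems must
# handle far more cue verbs, nested quotes, and discontinuous spans.
import re

ATTRIB = re.compile(
    r'(?P<source>[A-Z][\w. ]+?)\s+'      # source: capitalised noun phrase
    r'(?P<cue>said|claimed|argued)\s+'   # cue: a tiny invented verb list
    r'that\s+(?P<content>[^.]+)\.'       # content: the attributed clause
)

def extract_attributions(text):
    return [(m.group("source"), m.group("cue"), m.group("content"))
            for m in ATTRIB.finditer(text)]

text = ("The minister said that the budget would grow. "
        "Analysts argued that it would not.")
print(extract_attributions(text))
# -> [('The minister', 'said', 'the budget would grow'),
#     ('Analysts', 'argued', 'it would not')]
```

The limited coverage of such hand-written patterns is exactly the precision/coverage problem noted above.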
Automatic generation of parallel treebanks: an efficient unsupervised system
The need for syntactically annotated data for use in natural language processing has increased dramatically in recent years. This is true especially for parallel treebanks, of which very few exist. The ones that exist are
mainly hand-crafted and too small for reliable use in data-oriented applications. In this work I introduce a novel open-source platform for the fast and robust automatic generation of parallel treebanks through sub-tree alignment, using a limited amount of external resources. The intrinsic and extrinsic evaluations that I undertook demonstrate that my system is a feasible alternative to the manual annotation of parallel treebanks. Therefore, I expect
the presented platform to help boost research in the field of syntax-augmented machine translation and lead to advancements in other fields where parallel treebanks can be employed.
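The sub-tree alignment at the heart of such a platform can be illustrated with a common consistency criterion (not necessarily the one used in this work): a source-tree node is linked to a target-tree node when the word alignment maps each node's span into the other's. Trees are reduced to labelled spans and all data is invented.

```python
# Sketch of sub-tree alignment between two constituency trees, each given as
# (label, start, end) token spans, using word-alignment consistency.

def consistent(src_span, tgt_span, alignment):
    """True if every link touching either span stays inside both spans."""
    inside = [(s, t) for s, t in alignment
              if src_span[0] <= s <= src_span[1]
              or tgt_span[0] <= t <= tgt_span[1]]
    return bool(inside) and all(
        src_span[0] <= s <= src_span[1] and tgt_span[0] <= t <= tgt_span[1]
        for s, t in inside)

def align_subtrees(src_nodes, tgt_nodes, alignment):
    return [(sl, tl)
            for sl, *ss in src_nodes
            for tl, *ts in tgt_nodes
            if consistent(tuple(ss), tuple(ts), alignment)]

# "the black cat" / "il gatto nero"; links are (en_idx, it_idx)
alignment = [(0, 0), (1, 2), (2, 1)]
src_nodes = [("NP", 0, 2), ("DT", 0, 0), ("JJ", 1, 1), ("NN", 2, 2)]
tgt_nodes = [("NP", 0, 2), ("DT", 0, 0), ("NN", 1, 1), ("JJ", 2, 2)]
print(align_subtrees(src_nodes, tgt_nodes, alignment))
# -> [('NP', 'NP'), ('DT', 'DT'), ('JJ', 'JJ'), ('NN', 'NN')]
```

Note how the adjective and noun nodes link across the word-order swap, which is the kind of structural correspondence a parallel treebank records.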
Proceedings of the Conference on Natural Language Processing 2010
This book contains state-of-the-art contributions to the 10th
conference on Natural Language Processing, KONVENS 2010
(Konferenz zur Verarbeitung natürlicher Sprache), with a focus
on semantic processing.
In general, KONVENS aims to offer a broad perspective
on current research and developments within the interdisciplinary
field of natural language processing. The central theme
draws specific attention towards addressing linguistic aspects
of meaning, covering deep as well as shallow approaches to semantic
processing. The contributions address both knowledge-based
and data-driven methods for modelling and acquiring
semantic information, and discuss the role of semantic information
in applications of language technology.
The articles demonstrate the importance of semantic processing,
and present novel and creative approaches to natural
language processing in general. Some contributions put their
focus on developing and improving NLP systems for tasks like
Named Entity Recognition or Word Sense Disambiguation, or
focus on semantic knowledge acquisition and exploitation with
respect to collaboratively built resources, or harvesting semantic
information in virtual games. Others are set within the
context of real-world applications, such as Authoring Aids, Text
Summarisation and Information Retrieval. The collection highlights
the importance of semantic processing for different areas
and applications in Natural Language Processing, and provides
the reader with an overview of current research in this field.