Content Differences in Syntactic and Semantic Representations
Syntactic analysis plays an important role in semantic parsing, but the
nature of this role remains a topic of ongoing debate. The debate has been
constrained by the scarcity of empirical comparative studies between syntactic
and semantic schemes, which hinders the development of parsing methods informed
by the details of target schemes and constructions. We target this gap, and
take Universal Dependencies (UD) and UCCA as a test case. After abstracting
away from differences of convention or formalism, we find that most content
divergences can be ascribed to: (1) UCCA's distinction between a Scene and a
non-Scene; (2) UCCA's distinction between primary relations, secondary ones and
participants; (3) different treatment of multi-word expressions, and (4)
different treatment of inter-clause linkage. We further discuss the long tail
of cases where the two schemes take markedly different approaches. Finally, we
show that the proposed comparison methodology can be used for fine-grained
evaluation of UCCA parsing, highlighting both challenges and potential sources
for improvement. The substantial differences between the schemes suggest that
semantic parsers are likely to benefit downstream text understanding
applications beyond their syntactic counterparts.
Comment: NAACL-HLT 2019 camera ready
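The comparison idea above can be illustrated with a toy sketch: once both annotations are abstracted into plain head-dependent content pairs, divergences fall out as set differences. The structures below are hypothetical and heavily simplified; real UD trees and UCCA graphs are far richer.

```python
# Toy sketch of a scheme-comparison step (illustrative only): reduce
# each annotation to a set of (head, dependent) content relations,
# then diff the sets to surface divergences.

def compare_schemes(scheme_a, scheme_b):
    """Return (shared, only_a, only_b) content relations."""
    return scheme_a & scheme_b, scheme_a - scheme_b, scheme_b - scheme_a

# "John gave up" -- a multi-word expression treated differently:
# the syntactic scheme attaches the particle to the verb, while the
# semantic scheme keeps "gave up" as a single unanalyzable unit
# (hypothetical simplification of the divergence type discussed above).
ud = {("gave", "John"), ("gave", "up")}
semantic = {("gave up", "John")}

shared, ud_only, sem_only = compare_schemes(ud, semantic)
print(shared)    # set()
print(ud_only)   # {('gave', 'John'), ('gave', 'up')}
print(sem_only)  # {('gave up', 'John')}
```

A fine-grained evaluation along these lines can then report parser accuracy separately per divergence class rather than as one aggregate score.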
Transition-based Semantic Role Labeling with Pointer Networks
Semantic role labeling (SRL) focuses on recognizing the predicate-argument
structure of a sentence and plays a critical role in many natural language
processing tasks such as machine translation and question answering.
Practically all available methods do not perform full SRL, since they rely on
pre-identified predicates, and most of them follow a pipeline strategy, using
specific models for undertaking one or several SRL subtasks. In addition,
previous approaches depend heavily on syntactic information to
achieve state-of-the-art performance, even though syntactic trees are themselves
hard to produce reliably. These simplifications and requirements make the majority of
SRL systems impractical for real-world applications. In this article, we
propose the first transition-based SRL approach that is capable of completely
processing an input sentence in a single left-to-right pass, with neither
leveraging syntactic information nor resorting to additional modules. Thanks to
our implementation based on Pointer Networks, full SRL can be accurately and
efficiently done in a single pass, achieving the best performance to date on the
majority of languages from the CoNLL-2009 shared task.
Comment: Final peer-reviewed manuscript accepted for publication in Knowledge-Based Systems
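The single left-to-right pass described above can be sketched as a toy decoder: at each step a pointer selects, for the current token, which token is its governing predicate (or marks the token as a predicate itself). The scorer below is a hypothetical hand-written stand-in, not the paper's trained Pointer Network.

```python
# Minimal sketch of single-pass, pointer-style SRL decoding
# (toy scorer; illustrative only, no syntactic input used).

def single_pass_srl(tokens, score):
    """One left-to-right pass: for token i, a 'pointer' picks the
    index j maximizing score(i, j); j == i marks i as a predicate,
    otherwise (j, i) becomes a (predicate, argument) arc."""
    arcs = []
    predicates = []
    for i, _tok in enumerate(tokens):
        best_j = max(range(len(tokens)), key=lambda j: score(i, j))
        if best_j == i:
            predicates.append(i)
        else:
            arcs.append((best_j, i))  # (predicate, argument)
    return predicates, arcs

# Toy deterministic scorer: verbs point to themselves, other tokens
# point to a verb (assumed POS tags, purely illustrative).
tokens = [("John", "NOUN"), ("eats", "VERB"), ("apples", "NOUN")]

def toy_score(i, j):
    if tokens[i][1] == "VERB":
        return 1.0 if j == i else 0.0
    return 1.0 if tokens[j][1] == "VERB" else 0.0

preds, arcs = single_pass_srl(tokens, toy_score)
print(preds, arcs)  # [1] [(1, 0), (1, 2)]
```

In the full system the scores come from learned representations and a labeler assigns roles to each arc; the sketch only shows why one pass suffices when predicate identification and argument attachment share the same pointer mechanism.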
Analyzing, enhancing, optimizing and applying dependency analysis
Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, Departamento de Ingeniería del Software e Inteligencia Artificial, defended 19/12/2012.
Statistical dependency parsing accuracy has improved substantially in recent years. One of the main reasons is the inclusion of data-driven (machine learning) methods, which allow the development of parsers for every language that has an adequate training corpus without requiring a great effort from the end user. MaltParser is one such system. In the present thesis we have used state-of-the-art systems (mainly MaltParser) to present contributions in four areas inherently related to natural language processing (NLP) and dependency parsing: (i) we studied the parsing problem, demonstrating the homogeneity of parser performance and presenting findings on sentence length, corpus size, and how parsers are normally evaluated; (ii) we explored ways of improving parsing accuracy by modifying the flow of analysis, parsing some segments of the sentences separately and then solving a parse-combination problem, and by modifying the internal behavior of the parsers, focusing on the root of dependency structures, an important part of what a dependency parser produces; (iii) we researched automatic feature selection and parsing optimization for transition-based parsers, an important problem that needs to be addressed in order to solve parsing tasks more successfully; and (iv) we applied syntactic dependency structures and dependency parsing to NLP problems such as text simplification and inferring the scope of negation cues. Furthermore, the knowledge acquired in developing this thesis can be used to implement more robust dependency-parsing-based applications in NLP and related areas, as demonstrated throughout the thesis.
A multi-level methodology for the automated translation of a coreference resolution dataset: an application to the Italian language
In the last decade, the demand for readily accessible corpora has touched all areas of natural language processing, including
coreference resolution, which nonetheless remains one of the least considered sub-fields in recent developments. Moreover, almost all
existing resources are available only for English. To address this gap, this work proposes a methodology to
create a corpus for coreference resolution in Italian by exploiting annotated resources in other languages.
Starting from OntoNotes, the methodology translates and refines English utterances to obtain utterances that respect Italian
grammar, dealing with language-specific phenomena and preserving coreference and mentions. A quantitative and qualitative
evaluation is performed to assess the well-formedness of generated utterances, considering readability, grammaticality,
and acceptability indexes. The results confirm the effectiveness of the methodology in generating a good
dataset for coreference resolution starting from an existing one. The quality of the dataset is also assessed by training a
coreference resolution model based on the BERT language model, achieving promising results. Although the methodology
has been tailored to English and Italian, its general basis is easily extendable to other languages by adapting a
small number of language-dependent rules to cover most of the linguistic phenomena of the language under
examination.
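The core difficulty the methodology addresses, keeping coreference chains and mention spans intact through translation, can be sketched with a toy pipeline: mentions are wrapped in markers before translation so they survive, then unwrapped afterwards. The dictionary "translator" below is a hypothetical stand-in for a real MT system, and the marker format is invented for illustration.

```python
# Toy sketch of span-preserving translation for coreference data:
# wrap mentions in [chain_id ...] markers, "translate", then recover
# the clean text and the translated mention chains.
# The lexicon and marker scheme are hypothetical, illustrative only.

import re

LEXICON = {"the cat": "il gatto", "it": "esso", "sleeps": "dorme",
           "and": "e", "purrs": "fa le fusa"}

def mark(text, mentions):
    # mentions: (chain_id, surface) pairs, wrapped as [id phrase]
    for cid, surface in mentions:
        text = text.replace(surface, f"[{cid} {surface}]", 1)
    return text

def translate(text):
    # stand-in MT: longest-match dictionary substitution
    for en, it in sorted(LEXICON.items(), key=lambda kv: -len(kv[0])):
        text = text.replace(en, it)
    return text

def extract(text):
    # recover (chain_id, translated surface) pairs and the clean text
    mentions = [(int(m.group(1)), m.group(2))
                for m in re.finditer(r"\[(\d+) ([^\]]+)\]", text)]
    clean = re.sub(r"\[(\d+) ([^\]]+)\]", r"\2", text)
    return clean, mentions

marked = mark("the cat sleeps and it purrs", [(1, "the cat"), (1, "it")])
clean, mentions = extract(translate(marked))
print(clean)     # il gatto dorme e esso fa le fusa
print(mentions)  # [(1, 'il gatto'), (1, 'esso')]
```

The refinement stage described in the abstract would then operate on the clean translated text (e.g., handling zero anaphora and other Italian-specific phenomena) while the extracted chains carry the original annotation forward.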
Coreference Resolution through a seq2seq Transition-Based System
Most recent coreference resolution systems use search algorithms over
possible spans to identify mentions and resolve coreference. We instead present
a coreference resolution system that uses a text-to-text (seq2seq) paradigm to
predict mentions and links jointly. We implement the coreference system as a
transition system and use multilingual T5 as an underlying language model. We
obtain state-of-the-art accuracy on the CoNLL-2012 datasets with 83.3 F1-score
for English (+2.3 F1 over previous work (Dobrovolskii, 2021)),
using only CoNLL data for training, 68.5 F1-score for Arabic (+4.1 over
previous work), and 74.3 F1-score for Chinese (+5.3). In addition, we use the
SemEval-2010 datasets for experiments in the zero-shot setting, a few-shot
setting, and a supervised setting using all available training data. We get
substantially higher zero-shot F1-scores for 3 out of 4 languages than previous
approaches and significantly exceed previous supervised state-of-the-art
results for all five tested languages.
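The transition view of coreference can be sketched by showing how a linearized action sequence, of the kind a seq2seq model might emit, is decoded back into clusters in one left-to-right pass. The action format below is a hypothetical simplification, not the paper's exact linearization.

```python
# Sketch of decoding seq2seq-style transition output into coreference
# clusters (toy action format, illustrative only): each mention either
# opens a new cluster or links to an earlier mention's cluster.

def decode_actions(actions):
    """actions: ('NEW', span) or ('LINK', span, antecedent_span),
    spans as (start, end) token indices. Returns clusters as lists
    of spans, built in a single left-to-right pass."""
    cluster_of = {}   # span -> cluster index
    clusters = []
    for act in actions:
        if act[0] == "NEW":
            cluster_of[act[1]] = len(clusters)
            clusters.append([act[1]])
        else:  # LINK: attach span to its antecedent's cluster
            _, span, antecedent = act
            idx = cluster_of[antecedent]
            cluster_of[span] = idx
            clusters[idx].append(span)
    return clusters

# "John saw Mary . He waved ."
actions = [("NEW", (0, 0)),          # John
           ("NEW", (2, 2)),          # Mary
           ("LINK", (4, 4), (0, 0))] # He -> John
print(decode_actions(actions))  # [[(0, 0), (4, 4)], [(2, 2)]]
```

Predicting mentions and links jointly as one action stream is what lets a text-to-text model such as mT5 handle the task without a separate span-enumeration stage.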