Transfer and Multi-Task Learning for Noun-Noun Compound Interpretation
In this paper, we empirically evaluate the utility of transfer and multi-task
learning on a challenging semantic classification task: semantic interpretation
of noun-noun compounds. Through a comprehensive series of experiments and
in-depth error analysis, we show that transfer learning via parameter
initialization and multi-task learning via parameter sharing can help a neural
classification model generalize over a highly skewed distribution of relations.
Further, we demonstrate how dual annotation with two distinct sets of relations
over the same set of compounds can be exploited to improve the overall accuracy
of a neural classifier and its F1 scores on the less frequent, but more
difficult relations.
Comment: EMNLP 2018: Conference on Empirical Methods in Natural Language Processing (EMNLP).
Las Relaciones Semánticas Predicen la Desambiguación Estructural de las Unidades Terminológicas Poliléxicas con Tres Formantes
For English multiword terms (MWTs) of three or more constituents (e.g., sea level rise), a semantic analysis, based on linguistic and domain knowledge, is necessary to resolve the dependency between components. This structural disambiguation, often known as bracketing, involves grouping the dependent components so that the MWT is reduced to its basic form of modifier+head, as in [sea level] [rise]. Knowledge of these dependencies facilitates the comprehension of an MWT and its accurate translation into other languages. Moreover, resolving MWT bracketing improves the overall accuracy of machine translation systems and sentence parsers. This paper thus presents a pilot study that explored whether the bracketing of a ternary compound, when used as an argument in a sentence, can be predicted from the semantic information encoded in that sentence. It is shown that, with a random forest model, the semantic relation of the MWT to another argument in the same sentence, the lexical domain of the predicate, and the semantic role of the MWT were able to predict the bracketing of the 190 ternary compounds used as arguments in a sample of 188 semantically annotated sentences from a Coastal Engineering corpus (100% F1-score). Furthermore, the semantic relation of an MWT to another argument in the same sentence alone proved highly predictive of ternary compound bracketing with a binary decision-tree model (94.12% F1-score).
This research was carried out as part of projects PID2020-118369GB-I00, "Transversal Integration of Culture in a Terminological Knowledge Base on Environment" (TRANSCULTURE), funded by the Spanish Ministry of Science and Innovation; and A-HUM-600-UGR20, "Culture as Transversal Module in a Terminological Knowledge Base on the Environment" (CULTURAMA), funded by the Andalusian Ministry of Economy, Knowledge, Business, and University.
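As a minimal sketch of the kind of single-feature prediction the study reports (the binary decision tree over the semantic relation alone), the following pure-Python majority classifier can stand in for a depth-1 decision tree. The relation labels and training pairs below are invented for illustration and do not correspond to the study's actual annotation scheme:

```python
from collections import Counter, defaultdict

# Hypothetical training pairs: (semantic relation of the MWT to another
# argument in the sentence, bracketing of the ternary compound).
data = [
    ("affects", "left"),      # e.g. [sea level] rise
    ("affects", "left"),
    ("causes", "left"),
    ("located_at", "right"),
    ("located_at", "right"),
]

# One-feature majority classifier: for each relation, predict the
# bracketing most often seen with it (the decision a depth-1 tree makes).
table = defaultdict(Counter)
for relation, bracketing in data:
    table[relation][bracketing] += 1

def predict(relation, default="left"):
    counts = table.get(relation)
    return counts.most_common(1)[0][0] if counts else default

print(predict("affects"))     # left
print(predict("located_at"))  # right
```

A full random forest over several categorical features generalises this idea, but the sketch shows why a single highly informative feature can already carry most of the predictive power the abstract reports.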
How Would You Name This?: An Empirical Study on English and Spanish N-N Compounds Production
English and Spanish differ in several grammatical aspects. One of these bears upon the formation of N-N compounds, a grammatical structure that is very productive in English but not in Spanish. The present dissertation aims to provide an empirical approach to the study of both the production and the interpretation of N-N compounds, taking into account some of the syntactic and semantic features of these structures. In the experiment, L1 Spanish / L2 English speakers with differing competence in English have to name various pictures using an N-N compound. The analysis of the syntactic and semantic properties of the target N-N compounds sheds light on how these features determine the production of N-N compounds in English and/or their interpretation in Spanish. Our results show that differences in English proficiency, as well as some semantic properties, condition the use of N-N compounds or other structures.
Departamento de Filología Inglesa. Grado en Estudios Ingleses.
AXEL: A framework to deal with ambiguity in three-noun compounds
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University, 6/12/2010.Cognitive Linguistics has been widely used to deal with the ambiguity generated by words in combination. Although this domain offers many solutions to address this challenge, not all of them can be implemented in a computational environment. The Dynamic Construal of Meaning framework is argued to have this ability because it describes an intrinsic degree of association of meanings, which in turn, can be translated into computational programs. A limitation towards a computational approach, however, has been the lack of syntactic parameters. This research argues that this limitation could be overcome with the aid of the Generative Lexicon Theory (GLT). Specifically, this dissertation formulated possible means to marry the GLT and Cognitive Linguistics in a novel rapprochement between the two.
This bond between opposing theories provided the means to design a computational template (the AXEL System) by realising syntax and semantics at the software level. An instance of the AXEL system was created using a Design Research approach, with planned iterations in the development to improve artefact performance. These iterations improved the system's account of the degree of association of meanings in three-noun compounds.
This dissertation delivered three major contributions at what may be a turning point in Computational Linguistics (CL). First, the AXEL system was used to disclose hidden lexical patterns of ambiguity. These patterns are difficult, if not impossible, to identify without automatic techniques. This research claimed that such patterns can help linguists review lexical knowledge from a software-based viewpoint.
Second, the research advocated the adoption of improved resources that decrease the storage space of Sense Enumerative Lexicons (SELs). The AXEL system generated interpretations "at the moment of use", optimising the space needed for lexical storage.
Finally, this research introduced a subsystem of metrics to characterise the degree of association of ambiguous three-noun compounds, enabling ranking methods. These weighting methods delivered mechanisms for classifying meanings towards Word Sense Disambiguation (WSD). Overall, these results attempted to tackle difficulties in studies of Lexical Semantics via software tools.
Designing Statistical Language Learners: Experiments on Noun Compounds
The goal of this thesis is to advance the exploration of the statistical
language learning design space. In pursuit of that goal, the thesis makes two
main theoretical contributions: (i) it identifies a new class of designs by
specifying an architecture for natural language analysis in which probabilities
are given to semantic forms rather than to more superficial linguistic
elements; and (ii) it explores the development of a mathematical theory to
predict the expected accuracy of statistical language learning systems in terms
of the volume of data used to train them.
The theoretical work is illustrated by applying statistical language learning
designs to the analysis of noun compounds. Both syntactic and semantic analysis
of noun compounds are attempted using the proposed architecture. Empirical
comparisons demonstrate that the proposed syntactic model is significantly
better than those previously suggested, approaching the performance of human
judges on the same task, and that the proposed semantic model, the first
statistical approach to this problem, exhibits significantly better accuracy
than the baseline strategy. These results suggest that the new class of designs
identified is a promising one. The experiments also serve to highlight the need
for a widely applicable theory of data requirements.
Comment: PhD thesis (Macquarie University, Sydney; December 1995), LaTeX source, xii+214 pages.
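The syntactic analysis this line of work addresses is noun compound bracketing. Two standard heuristics from this literature, the adjacency and dependency models, compare association scores between word pairs to decide between left and right bracketing. A toy sketch with invented counts (not the thesis's actual probability estimates):

```python
# Toy association counts for a three-noun compound (w1, w2, w3).
# The numbers are invented for illustration.
counts = {
    ("sea", "level"): 40,
    ("level", "rise"): 15,
    ("sea", "rise"): 2,
}

def assoc(a, b):
    return counts.get((a, b), 0)

def adjacency(w1, w2, w3):
    # Left-bracket [w1 w2] w3 if w1-w2 is the stronger adjacent pair.
    return "left" if assoc(w1, w2) >= assoc(w2, w3) else "right"

def dependency(w1, w2, w3):
    # Left-bracket if w1 is more likely to modify w2 than to modify w3.
    return "left" if assoc(w1, w2) >= assoc(w1, w3) else "right"

print(adjacency("sea", "level", "rise"))   # left -> [sea level] rise
print(dependency("sea", "level", "rise"))  # left
```

The two models differ in which pair competes with (w1, w2): the adjacent pair (w2, w3) versus the dependency candidate (w1, w3); the thesis's empirical comparison of such models is what the abstract refers to.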
Schema Normalization for Improving Schema Matching
Schema matching is the problem of finding relationships among concepts across heterogeneous data sources (heterogeneous in format and in structure). Starting from the "hidden meaning" associated with schema labels (i.e., class/attribute names), it is possible to discover relationships among the elements of different schemata. Lexical annotation (i.e., annotation w.r.t. a thesaurus/lexical resource) helps in associating a "meaning" with schema labels. However, the accuracy of semi-automatic lexical annotation methods on real-world schemata suffers from the abundance of non-dictionary words such as compound nouns and word abbreviations. In this work, we address this problem by proposing a schema label normalization method which increases the number of comparable labels. Unlike other solutions, the method semi-automatically expands abbreviations and annotates compound terms with minimal manual effort. We empirically prove that our normalization method helps in identifying similarities among schema elements of different data sources, thus improving schema matching accuracy.
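A minimal sketch of the kind of label normalization described here, assuming a hand-made abbreviation table (the actual method is semi-automatic; the labels and expansions below are hypothetical):

```python
import re

# Hypothetical abbreviation table; a real system would curate or induce one.
ABBREVIATIONS = {"qty": "quantity", "addr": "address", "cust": "customer"}

def split_label(label):
    """Split a schema label on underscores and camelCase boundaries."""
    tokens = []
    for part in re.split(r"_+", label):
        tokens.extend(re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", part))
    return [t.lower() for t in tokens if t]

def normalize(label):
    """Expand abbreviations so labels from different schemata become comparable."""
    return " ".join(ABBREVIATIONS.get(t, t) for t in split_label(label))

print(normalize("custAddr"))   # customer address
print(normalize("order_qty"))  # order quantity
```

After normalization, labels such as `custAddr` and `customer_address` map to the same token sequence, which is what makes them comparable for matching.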
Statistical parsing of noun phrase structure
Noun phrases (NPs) are a crucial part of natural language, exhibiting in many cases an extremely complex structure. However, NP structure is largely ignored by the statistical parsing field, as the most widely-used corpus is not annotated with it. This lack of gold-standard data has restricted all previous efforts to parse NPs, making it impossible to perform the supervised experiments that have achieved high performance in so many Natural Language Processing (NLP) tasks. We comprehensively solve this problem by manually annotating NP structure for the entire Wall Street Journal section of the Penn Treebank. The inter-annotator agreement scores that we attain refute the belief that the task is too difficult, and demonstrate that consistent NP annotation is possible. Our gold-standard NP data is now available and will be useful for all parsers. We present three statistical methods for parsing NP structure. Firstly, we apply the Collins (2003) model, and find that its recovery of NP structure is significantly worse than its overall performance. Through much experimentation, we determine that this is not a result of the special base-NP model used by the parser, but primarily caused by a lack of lexical information. Secondly, we construct a wide-coverage, large-scale NP Bracketing system, applying a supervised model to achieve excellent results. Our Penn Treebank data set, which is orders of magnitude larger than those used previously, makes this possible for the first time. We then implement and experiment with a wide variety of features in order to determine an optimal model. Having achieved this, we use the NP Bracketing system to reanalyse NPs outputted by the Collins (2003) parser. Our post-processor outperforms this state-of-the-art parser. For our third model, we convert the NP data to CCGbank (Hockenmaier and Steedman, 2007), a corpus that uses the Combinatory Categorial Grammar (CCG) formalism. 
We experiment with a CCG parser and again implement features that improve performance. We also evaluate the CCG parser against the Briscoe and Carroll (2006) reannotation of DepBank (King et al., 2003), another corpus that annotates NP structure. This supplies further evidence that parser performance is increased by improving the representation of NP structure. Finally, the error analysis we carry out on the CCG data shows that, again, a lack of lexicalisation causes difficulties for the parser. We find that NPs are particularly reliant on this lexical information, due to their exceptional productivity and the reduced explicitness present in modifier sequences. Our results show that NP parsing is a significantly harder task than parsing in general. This thesis comprehensively analyses the NP parsing task. Our contributions allow wide-coverage, large-scale NP parsers to be constructed for the first time, and motivate further NP parsing research. The results of our work can provide significant benefits for many NLP tasks, as the crucial information contained in NP structure is now available for all downstream systems.
Book Review: Elisabeth O. Selkirk, The syntax of words
The Syntax of Words provides new insight into the study of the structure of words and the system for generating that structure. This treatment is a firm departure from the hitherto traditional notion in morphology that words are part of a language's syntax or grammar and lack a syntax of their own. As the author argues, the categories involved in word structure are distinct from those of syntactic structure, and the two types of structure combine in significant ways. The study thus focuses on word structure rules along with the structures they define. She presents a general theory of word structure which she exemplifies and defends through facts about compounding and affixation. Her analysis centers solely on English.
The translation of complex nominals in the field of air quality treatment
This work focuses on the translation of nominal compound terms, both from the point of view of the human translator and from that of machine translation systems, considering aspects such as structural ambiguity, neology, and denominative variation. A protocol is proposed for the human translation of nominal compounds from English into Spanish.
The Role of Lexical Morphology, In Light of Recent Developments.
In recent years there has been growing interest in psycholinguistic approaches to modelling morphology. Theorists working within this framework claim that the formal theory of lexical stratification is untenable in light of recent discoveries. To address these claims, this paper engages closely with a number of lexical stratification models, with a particular focus on Giegerich's base-driven stratal model, as well as a number of cognitively based approaches. A critical discussion of some "problematic" circumstances, which arise from derivational suffixation and compounding and have been identified in the psycholinguistic and lexicalist literature, reveals some interesting similarities between the stratal model and the cognitive approaches. To investigate these apparent similarities, the paper examines a number of theories that model how words are accessed from the mental lexicon, and their applicability to the stratal model. Finally, key data from a number of neuro-imaging studies is brought to bear on the stratal model. Close engagement with this data shows that the neuro-linguistic findings are not incompatible with the features of stratal models. Building on this data, some ideas regarding a potential synthesis between the two theoretical frameworks are tentatively put forward, and some key issues are highlighted as possible areas for future research.