The role of syntactic dependencies in compositional distributional semantics
This article provides a preliminary semantic framework for Dependency Grammar in which lexical words are semantically defined as contextual distributions (sets of contexts) while syntactic dependencies are compositional operations on word distributions. More precisely, any syntactic dependency uses the contextual distribution of the dependent word to restrict the distribution of the head, and makes use of the contextual distribution of the head to restrict that of the dependent word. The interpretation of composite expressions and sentences, which are analyzed as a tree of binary dependencies, is performed by restricting the contexts of words dependency by dependency in a left-to-right incremental way. Consequently, the meaning of the whole composite expression or sentence is not a single representation, but a list of contextualized senses, namely the restricted distributions of its constituent (lexical) words. We report the results of two large-scale corpus-based experiments on two different natural language processing applications: paraphrasing and compositional translation. This work is funded by Project TELPARES, Ministry of Economy and Competitiveness (FFI2014-51978-C2-1-R), and the program “Ayuda Fundación BBVA a Investigadores y Creadores Culturales 2016”.
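The restriction operation described above can be sketched with sets. This is a minimal toy, not the article's actual model: the words, contexts, and intersection-based restriction operator are illustrative assumptions standing in for corpus-derived distributions.

```python
# Toy contextual distributions: each word maps to a set of contexts.
dist = {
    "bank": {"money", "river", "loan", "water"},
    "loan": {"money", "interest", "bank"},
}

def apply_dependency(head, dep, dist):
    """A dependency restricts the head's distribution using the
    dependent's, and vice versa (here: intersection with the other
    word's contexts plus the word itself)."""
    new_head = dist[head] & (dist[dep] | {dep})
    new_dep = dist[dep] & (dist[head] | {head})
    return {**dist, head: new_head, dep: new_dep}

# Composing "bank" with its dependent "loan" yields contextualized
# senses for both words rather than a single phrase representation,
# matching the "list of contextualized senses" view of meaning.
senses = apply_dependency("bank", "loan", dist)
```

Applied over a full dependency tree, such restrictions would run left to right, one dependency at a time.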
Distributed Representations for Compositional Semantics
The mathematical representation of semantics is a key issue for Natural
Language Processing (NLP). A lot of research has been devoted to finding ways
of representing the semantics of individual words in vector spaces.
Distributional approaches --- meaning distributed representations that exploit
co-occurrence statistics of large corpora --- have proved popular and
successful across a number of tasks. However, natural language usually comes in
structures beyond the word level, with meaning arising not only from the
individual words but also the structure they are contained in at the phrasal or
sentential level. Modelling the compositional process by which the meaning of
an utterance arises from the meaning of its parts is an equally fundamental
task of NLP.
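Two of the simplest composition operators studied in this literature, elementwise addition and multiplication, can be sketched as follows; the vectors are toy values, not learned embeddings, and the operators are classic baselines rather than the models introduced in this thesis.

```python
# Elementwise additive and multiplicative composition of word vectors:
# two standard baselines for building phrase representations.
def add_compose(u, v):
    """Phrase vector as the elementwise sum of the word vectors."""
    return [a + b for a, b in zip(u, v)]

def mult_compose(u, v):
    """Phrase vector as the elementwise product (keeps shared features)."""
    return [a * b for a, b in zip(u, v)]

red = [2.0, 1.0, 0.0]
car = [1.0, 3.0, 2.0]
red_car_add = add_compose(red, car)   # [3.0, 4.0, 2.0]
red_car_mul = mult_compose(red, car)  # [2.0, 3.0, 0.0]
```

Neither operator is sensitive to word order or syntax, which is precisely the gap that learned, structure-aware composition models aim to close.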
This dissertation explores methods for learning distributed semantic
representations and models for composing these into representations for larger
linguistic units. Our underlying hypothesis is that neural models are a
suitable vehicle for learning semantically rich representations and that such
representations in turn are suitable vehicles for solving important tasks in
natural language processing. The contribution of this thesis is a thorough
evaluation of our hypothesis, as part of which we introduce several new
approaches to representation learning and compositional semantics, as well as
multiple state-of-the-art models which apply distributed semantic
representations to various tasks in NLP.
Comment: DPhil Thesis, University of Oxford, Submitted and accepted in 201
Negation and Speculation in NLP: A Survey, Corpora, Methods, and Applications
Negation and speculation are universal linguistic phenomena that affect the performance of Natural Language Processing (NLP) applications, such as those for opinion mining and information retrieval, especially in biomedical data. In this article, we review the corpora annotated with negation and speculation in various natural languages and domains. Furthermore, we discuss the ongoing research into recent rule-based, supervised, and transfer learning techniques for the detection of negating and speculative content. Many English corpora for various domains are now annotated with negation and speculation; moreover, the availability of annotated corpora in other languages has started to increase. However, this growth is insufficient to address these important phenomena in languages with limited resources. The use of cross-lingual models and translation from well-resourced languages are acceptable alternatives. We also highlight the lack of consistent annotation guidelines and the shortcomings of the existing techniques, and suggest alternatives that may speed up progress in this research direction. Adding more syntactic features may alleviate the limitations of the existing techniques, such as cue ambiguity and detecting discontinuous scopes. In some NLP applications, the inclusion of a system that is negation- and speculation-aware improves performance, yet this aspect is still not addressed or is not considered an essential step.
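As a concrete illustration of the rule-based side of this line of work, a toy negation-cue detector with a naive scope heuristic might look as follows. The cue list and the punctuation-based clause boundary are illustrative assumptions, far simpler than the surveyed systems.

```python
# Minimal rule-based negation detection: find a cue token, then take
# everything up to the next clause boundary as its scope.
NEG_CUES = {"not", "no", "never", "without", "n't"}

def detect_negation(tokens):
    """Return (cue_index, scope_tokens) for the first cue, else None."""
    for i, tok in enumerate(tokens):
        if tok.lower() in NEG_CUES:
            scope = []
            for t in tokens[i + 1:]:
                if t in {".", ",", ";"}:  # naive clause boundary
                    break
                scope.append(t)
            return i, scope
    return None

result = detect_negation("The scan showed no sign of infection .".split())
```

Cue ambiguity (e.g. "no" as an interjection) and discontinuous scopes are exactly the cases where such rules fail, motivating the syntactic features discussed above.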
Semantic Tagging for the Urdu Language: Annotated Corpus and Multi-Target Classification Methods
Extracting and analysing meaning-related information from natural language data has attracted the attention of researchers in various fields, such as natural language processing, corpus linguistics, information retrieval, and data science. An important aspect of such automatic information extraction and analysis is the annotation of language data using semantic tagging tools. Different semantic tagging tools have been designed to carry out various levels of semantic analysis, for instance, named entity recognition and disambiguation, sentiment analysis, word sense disambiguation, content analysis, and semantic role labelling. Common to all of these tasks, in the supervised setting, is the requirement for a manually semantically annotated corpus, which acts as a knowledge base from which to train and test potential word and phrase-level sense annotations. Many benchmark corpora have been developed for various semantic tagging tasks, but most are for English and other European languages. There is a dearth of semantically annotated corpora for the Urdu language, which is widely spoken and used around the world. To fill this gap, this study presents a large benchmark corpus and methods for the semantic tagging task for the Urdu language. The proposed corpus contains 8,000 tokens in the following domains or genres: news, social media, Wikipedia, and historical text (each domain having 2,000 tokens). The corpus has been manually annotated with 21 major semantic fields and 232 sub-fields using the USAS (UCREL Semantic Analysis System) semantic taxonomy, which provides a comprehensive set of semantic fields for coarse-grained annotation. Each word in our proposed corpus has been annotated with at least one and up to nine semantic field tags to provide a detailed semantic analysis of the language data, which allowed us to treat the problem of semantic tagging as a supervised multi-target classification task.
To demonstrate how our proposed corpus can be used for the development and evaluation of Urdu semantic tagging methods, we extracted local, topical, and semantic features from the proposed corpus and applied seven different supervised multi-target classifiers to them. Results show an accuracy of 94% on our proposed corpus, which is free and publicly available to download.
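To illustrate the multi-target setup, the sketch below assigns every tag whose predicate fires, so one token can carry several semantic field tags at once. The tags and rules are invented for illustration; the study trains statistical classifiers over USAS fields rather than hand-written predicates.

```python
# Semantic tagging as multi-target classification: each tag is an
# independent yes/no decision, so a token may receive multiple tags.
def tag_token(features, rules):
    """rules: tag -> predicate over the token's feature dict."""
    return sorted(tag for tag, pred in rules.items() if pred(features))

rules = {
    "MONEY": lambda f: f["word"] in {"bank", "loan"},
    "GEO":   lambda f: f["word"] in {"bank", "river"},
}

tags = tag_token({"word": "bank"}, rules)  # ambiguous token gets both tags
```

Treating each tag as its own target is what lets the corpus's one-to-nine tags per word be modelled directly, instead of forcing a single-label decision.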
Extracting Temporal and Causal Relations between Events
Structured information resulting from temporal information processing is
crucial for a variety of natural language processing tasks, for instance to
generate timeline summarization of events from news documents, or to answer
temporal/causal-related questions about some events. In this thesis we present
a framework for an integrated temporal and causal relation extraction system.
We first develop a robust extraction component for each type of relation, i.e.
temporal order and causality. We then combine the two extraction components
into an integrated relation extraction system, CATENA (CAusal and Temporal
relation Extraction from NAtural language texts), by utilizing the
presumption about event precedence in causality: causing events must
happen BEFORE resulting events. Several resources and techniques to improve
our relation extraction systems are also discussed, including word embeddings
and training data expansion. Finally, we report our adaptation efforts of
temporal information processing for languages other than English, namely
Italian and Indonesian.
Comment: PhD Thesis
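The precedence presumption above can be sketched as a simple post-hoc constraint. The event names, label set, and repair direction are illustrative assumptions, not CATENA's actual sieve architecture.

```python
# Sketch: if e1 causes e2, then e1 must happen BEFORE e2, so a causal
# link can overrule a conflicting temporal label.
def enforce_precedence(temporal, causal):
    """temporal: {(e1, e2): label}; causal: set of (cause, effect) pairs."""
    fixed = dict(temporal)
    for cause, effect in causal:
        fixed[(cause, effect)] = "BEFORE"  # causality entails precedence
    return fixed

temporal = {("earthquake", "tsunami"): "AFTER",
            ("tsunami", "rescue"): "BEFORE"}
causal = {("earthquake", "tsunami")}
fixed = enforce_precedence(temporal, causal)
```

Here the causal link corrects the mislabeled temporal pair while leaving unrelated pairs untouched, which is the spirit of combining the two extraction components.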
Combining Representation Learning with Logic for Language Processing
The current state-of-the-art in many natural language processing and
automated knowledge base completion tasks is held by representation learning
methods which learn distributed vector representations of symbols via
gradient-based optimization. They require little or no hand-crafted features,
thus avoiding the need for most preprocessing steps and task-specific
assumptions. However, in many cases representation learning requires a large
amount of annotated training data to generalize well to unseen data. Such
labeled training data is provided by human annotators who often use formal
logic as the language for specifying annotations. This thesis investigates
different combinations of representation learning methods with logic for
reducing the need for annotated training data, and for improving
generalization.
Comment: PhD Thesis, University College London, Submitted and accepted in 201
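One simple instance of combining logic with representation learning, as described above, is to forward-chain a logical rule over a small fact base before training, so the learner sees facts it would otherwise need annotations for. The relations and the rule here are invented for illustration, not taken from the thesis.

```python
# Expand a knowledge base with a one-step implication rule
# body_rel(x, y) => head_rel(x, y) before training an embedding model.
def apply_rule(facts, body_rel, head_rel):
    """facts: set of (relation, subject, object) triples."""
    derived = {(head_rel, x, y) for (r, x, y) in facts if r == body_rel}
    return facts | derived

facts = {("capital_of", "paris", "france"),
         ("born_in", "ada", "london")}
expanded = apply_rule(facts, "capital_of", "located_in")
# derives ("located_in", "paris", "france") from the capital_of fact
```

Rule-derived triples act as extra (weak) supervision, which is one way logic can reduce the amount of hand-annotated training data.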