REflex: Flexible Framework for Relation Extraction in Multiple Domains
Systematic comparison of methods for relation extraction (RE) is difficult
because many experiments in the field are not described precisely enough to be
completely reproducible and many papers fail to report ablation studies that
would highlight the relative contributions of their various combined
techniques. In this work, we build a unifying framework for RE, applying it
to three widely used datasets (from the general, biomedical and clinical
domains) and designed to be extensible to new datasets. By performing a
systematic exploration of modeling, pre-processing and training methodologies,
we find that choices of pre-processing are a large contributor to performance and
that omission of such information can further hinder fair comparison. Other
insights from our exploration allow us to provide recommendations for future
research in this area.
Comment: accepted by BioNLP 2019 at the Association for Computational Linguistics (ACL 2019)
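One of the pre-processing choices whose impact the abstract above highlights is entity masking, where entity mentions are replaced by typed placeholder tokens before the sentence is fed to a relation classifier. The sketch below is an illustration of that general technique only; the function name, tag format, and spans are invented here and are not the REflex API.

```python
# Illustrative sketch of entity masking as an RE pre-processing step.
# Function name, placeholder tags, and spans are assumptions, not REflex code.

def mask_entities(tokens, head, tail, head_type="DRUG", tail_type="DISEASE"):
    """Replace the head and tail entity spans with typed placeholder tokens.

    head, tail: (start, end) token index spans, end exclusive.
    """
    out = []
    i = 0
    while i < len(tokens):
        if i == head[0]:
            out.append(f"[{head_type}]")
            i = head[1]
        elif i == tail[0]:
            out.append(f"[{tail_type}]")
            i = tail[1]
        else:
            out.append(tokens[i])
            i += 1
    return out

sent = "Aspirin reduces the risk of stroke".split()
print(" ".join(mask_entities(sent, head=(0, 1), tail=(5, 6))))
# -> [DRUG] reduces the risk of [DISEASE]
```

Whether masking helps or hurts varies by dataset and model, which is exactly why reporting this choice matters for fair comparison.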
Linguistic Representation and Processing of Copredication
This thesis addresses the lexical and psycholinguistic properties of copredication. In particular, it explores its acceptability, frequency, crosslinguistic and electrophysiological features. It proposes a general parsing bias to account for novel acceptability data, through which Complex-Simple predicate orderings are degraded across distinct nominal types relative to the reverse order. This bias, Incremental Semantic Complexity, states that the parser seeks to process linguistic representations in incremental stages of semantic complexity. English and Italian acceptability data are presented which demonstrate that predicate order preferences are based not on sense dominance but rather sense complexity. Initial evidence is presented indicating that pragmatic factors centred on coherence relations can impact copredication acceptability when such copredications host complex (but not simple) predicates. The real-time processing and electrophysiological properties of copredication are also presented, which serve to replicate and ground the acceptability dynamics presented in the thesis.
Extracting Causal Claims from Information Systems Papers with Natural Language Processing for Theory Ontology Learning
The number of scientific papers published each year is growing exponentially. How can computational tools support scientists to better understand and process this data? This paper presents a software prototype that automatically extracts causes, effects, signs, moderators, mediators, conditions, and interaction signs from propositions and hypotheses of full-text scientific papers. The prototype uses natural language processing methods and a set of linguistic rules for causal information extraction. It is evaluated on a manually annotated corpus of 270 Information Systems papers containing 723 hypotheses and propositions from the AIS basket of eight. F1 results for the detection and extraction of the different causal variables range between 0.71 and 0.90. The presented automatic causal theory extraction allows for the analysis of scientific papers based on a theory ontology and therefore contributes to the creation and comparison of inter-nomological networks.
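The linguistic-rule approach described above can be pictured as cue-pattern matching over hypothesis sentences. The following is a minimal hedged sketch of that idea; the cue list, sign mapping, and output schema are assumptions for illustration, not the paper's actual rule set.

```python
import re

# Minimal sketch of rule-based causal extraction from hypothesis sentences.
# Cues, sign mapping, and field names are illustrative assumptions.

CAUSAL_PATTERNS = [
    # "X increases/decreases/influences/leads to Y" -> cause, cue, effect
    re.compile(
        r"^(?P<cause>.+?)\s+(?P<cue>increases|decreases|influences|leads to)\s+(?P<effect>.+)$",
        re.IGNORECASE,
    ),
]

SIGN = {"increases": "+", "leads to": "+", "decreases": "-", "influences": "?"}

def extract_causal_claim(hypothesis):
    """Return {cause, effect, sign} for the first matching pattern, else None."""
    for pat in CAUSAL_PATTERNS:
        m = pat.match(hypothesis.strip().rstrip("."))
        if m:
            return {
                "cause": m.group("cause"),
                "effect": m.group("effect"),
                "sign": SIGN[m.group("cue").lower()],
            }
    return None

print(extract_causal_claim("Perceived usefulness increases intention to use."))
# -> {'cause': 'Perceived usefulness', 'effect': 'intention to use', 'sign': '+'}
```

A production system would need many more patterns (and handling for moderators, mediators, and conditions), but the pattern-plus-role-labels structure is the core of the approach.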
The Latent Relation Mapping Engine: Algorithm and Experiments
Many AI researchers and cognitive scientists have argued that analogy is the
core of cognition. The most influential work on computational modeling of
analogy-making is Structure Mapping Theory (SMT) and its implementation in the
Structure Mapping Engine (SME). A limitation of SME is the requirement for
complex hand-coded representations. We introduce the Latent Relation Mapping
Engine (LRME), which combines ideas from SME and Latent Relational Analysis
(LRA) in order to remove the requirement for hand-coded representations. LRME
builds analogical mappings between lists of words, using a large corpus of raw
text to automatically discover the semantic relations among the words. We
evaluate LRME on a set of twenty analogical mapping problems, ten based on
scientific analogies and ten based on common metaphors. LRME achieves
human-level performance on the twenty problems. We compare LRME with a variety
of alternative approaches and find that they are not able to reach the same
level of performance.
Comment: related work available at http://purl.org/peter.turney
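At its core, the mapping search in an SME/LRME-style engine scores every bijection between the two word lists by how similar the pairwise relations it aligns are. The toy sketch below illustrates only that search skeleton; the hand-picked relation vectors stand in for the corpus-derived relational features that LRA actually supplies.

```python
from itertools import permutations

# Sketch of an SME/LRME-style mapping search: score each bijection between
# two equal-length word lists by the similarity of the relations it aligns.
# Relation vectors here are hand-picked stand-ins for corpus-derived features.

def relation(a, b, rel_vectors):
    return rel_vectors.get((a, b), (0.0,))

def cosine(u, v):
    num = sum(x * y for x, y in zip(u, v))
    den = (sum(x * x for x in u) ** 0.5) * (sum(y * y for y in v) ** 0.5)
    return num / den if den else 0.0

def best_mapping(source, target, rel_vectors):
    best, best_score = None, float("-inf")
    for perm in permutations(target):
        score = sum(
            cosine(relation(source[i], source[j], rel_vectors),
                   relation(perm[i], perm[j], rel_vectors))
            for i in range(len(source)) for j in range(len(source)) if i != j
        )
        if score > best_score:
            best, best_score = dict(zip(source, perm)), score
    return best

# Toy analogy: solar system -> atom.
rels = {
    ("sun", "planet"): (1.0, 0.0),      # "attracts / is orbited by"
    ("planet", "sun"): (0.0, 1.0),
    ("nucleus", "electron"): (1.0, 0.0),
    ("electron", "nucleus"): (0.0, 1.0),
}
print(best_mapping(["sun", "planet"], ["nucleus", "electron"], rels))
# -> {'sun': 'nucleus', 'planet': 'electron'}
```

The exhaustive permutation search is exponential in list length, which is workable for the 20-problem scale described above but would need pruning for larger mappings.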
Semantic chunking
Long sentences pose a challenge for natural language processing (NLP) applications. They are associated with a complex information structure leading to increased requirements for processing resources. Although the issue is present in many areas of research, there is little uniformity in the solutions used by research communities dedicated to individual NLP applications. Different aspects of the problem are addressed by different tasks, such as sentence simplification or shallow chunking.
The main contribution of this thesis is the introduction of the task of semantic chunking as a general approach to reducing the cost of processing long sentences. The goal of semantic chunking is to find semantically contained fragments of a sentence representation that can be processed independently and recombined without loss of information. We anchor its principles in established concepts of semantic theory, in particular event and situation semantics. Most of the experiments in this thesis focus on semantic chunking defined on complex semantic representations in Dependency Minimal Recursion Semantics (DMRS),
but we also demonstrate that the task can be performed on sentence strings. We present three chunking models: a) a rule-based proof-of-concept DMRS chunking system; b) a semi-supervised sequence labelling neural model for surface semantic chunking; c) a system capable of finding semantic chunk boundaries based on the inherent structure of DMRS graphs, generalisable in the form of descriptive templates. We show how semantic chunking can be applied within a divide-and-conquer processing paradigm, using as an example the task of realisation from DMRS. The application of semantic chunking yields noticeable efficiency gains without decreasing the quality of results.
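Surface semantic chunking, as described above, can be framed as boundary labelling over the token sequence. The toy sketch below uses a crude clause-cue heuristic purely to show the labelling-then-splitting shape of the task; it is an assumption of this illustration, whereas the thesis's neural model learns boundaries from DMRS-derived supervision.

```python
# Toy illustration of surface semantic chunking as boundary labelling:
# "B" opens a new chunk, "I" continues one. The clause-cue heuristic is an
# illustrative stand-in for a learned sequence labeller.

CLAUSE_CUES = {"and", "but", "because", "although", "which", "while"}

def label_boundaries(tokens):
    return ["B" if i == 0 or tok.lower() in CLAUSE_CUES else "I"
            for i, tok in enumerate(tokens)]

def chunks(tokens):
    """Split the token sequence at every B label."""
    out, cur = [], []
    for tok, lab in zip(tokens, label_boundaries(tokens)):
        if lab == "B" and cur:
            out.append(cur)
            cur = []
        cur.append(tok)
    if cur:
        out.append(cur)
    return out

sent = "The committee approved the plan because the budget was balanced".split()
for c in chunks(sent):
    print(" ".join(c))
# -> The committee approved the plan
# -> because the budget was balanced
```

The payoff of the divide-and-conquer paradigm is that each chunk can then be processed (e.g. realised) independently and recombined.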
The semantic transparency of English compound nouns
What is semantic transparency, why is it important, and which factors play a role in its assessment? This work approaches these questions by investigating English compound nouns. The first part of the book gives an overview of semantic transparency in the analysis of compound nouns, discussing its role in models of morphological processing and differentiating it from related notions. After a chapter on the semantic analysis of complex nominals, it closes with a chapter on previous attempts to model semantic transparency. The second part introduces new empirical work on semantic transparency, presenting two different sets of statistical models for compound transparency. In particular, two semantic factors were explored: the semantic relations holding between compound constituents and the role of different readings of the constituents and the whole compound, operationalized in terms of meaning shifts and in terms of the distribution of specific readings across constituent families. All semantic annotations used in the book are freely available.
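One common way to operationalize semantic transparency computationally is the distributional similarity between a compound and each of its constituents. This is a hedged illustration of that general idea, not the book's specific statistical models; the toy vectors below are invented, where real work would use corpus-derived embeddings.

```python
import math

# Transparency as compound-constituent distributional similarity (illustrative).
# The toy embedding table is invented; real models use corpus-derived vectors.

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

vec = {
    "graveyard": (0.9, 0.1, 0.2),
    "grave": (0.8, 0.2, 0.1),
    "yard": (0.1, 0.9, 0.3),
    "deadline": (0.2, 0.1, 0.9),
    "dead": (0.9, 0.1, 0.1),
    "line": (0.1, 0.8, 0.2),
}

def transparency(compound, head, modifier):
    """Per-constituent transparency scores for one compound."""
    return {
        "modifier": cosine(vec[compound], vec[modifier]),
        "head": cosine(vec[compound], vec[head]),
    }

print(transparency("graveyard", head="yard", modifier="grave"))
print(transparency("deadline", head="line", modifier="dead"))
```

On this toy data, "graveyard" comes out more transparent with respect to its modifier than "deadline", matching the intuition that the latter's constituents have shifted meanings.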