Connectionist Inference Models
The performance of symbolic inference tasks has long been a challenge to connectionists. In this paper, we present an extended survey of this area. Existing connectionist inference systems are reviewed, with particular reference to how they perform variable binding and rule-based reasoning, and whether they involve distributed or localist representations. The benefits and disadvantages of different representations and systems are outlined, and conclusions drawn regarding the capabilities of connectionist inference systems when compared with symbolic inference systems or when used for cognitive modeling.
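The variable-binding problem mentioned above has several classic connectionist answers; one is tensor-product binding, which can be sketched in a few lines. This is an illustrative example, not taken from the survey: all vector names are invented, and real systems use more elaborate role/filler schemes.

```python
import numpy as np

# Minimal sketch of tensor-product variable binding (illustrative only).
rng = np.random.default_rng(0)

def rand_vec(n=64):
    v = rng.standard_normal(n)
    return v / np.linalg.norm(v)

role_agent, role_patient = rand_vec(), rand_vec()
filler_john, filler_mary = rand_vec(), rand_vec()

# Bind each role to its filler with an outer product and superimpose the
# results: a fully distributed representation of loves(John, Mary).
binding = np.outer(role_agent, filler_john) + np.outer(role_patient, filler_mary)

# Unbind: probing the binding with a role vector approximately recovers that
# role's filler, because random high-dimensional roles are nearly orthogonal.
recovered = binding.T @ role_agent
print(np.dot(recovered, filler_john), np.dot(recovered, filler_mary))
```

The first dot product comes out close to 1 and the second close to 0, so the agent slot can be read out from the distributed pattern without any symbolic pointer.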
Analysing Errors of Open Information Extraction Systems
We report results on benchmarking Open Information Extraction (OIE) systems
using RelVis, a toolkit for benchmarking such systems.
Our comprehensive benchmark contains three data sets from the news domain and
one data set from Wikipedia, with a total of 4,522 labeled sentences and 11,243
binary or n-ary OIE relations. In our analysis of these data sets we compared
the performance of four popular OIE systems: ClausIE, OpenIE 4.2, Stanford
OpenIE and PredPatt. In addition, we evaluated the impact of five common error
classes on a subset of 749 n-ary tuples. From our in-depth analysis we derive
important research directions for the next generation of OIE systems.
Comment: Accepted at the Building Linguistically Generalizable NLP Systems workshop at EMNLP 2017
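An error analysis like the one described above boils down to tagging each erroneous tuple with an error class and tallying the impact of each class. The sketch below illustrates that bookkeeping; the class names and tuples are invented for the example, not taken from the paper.

```python
from collections import Counter

# Hypothetical error-class tally for an OIE benchmark (data invented).
judgements = [
    # (sentence id, extracted n-ary tuple, assigned error class)
    (1, ("the senate", "passed", "the bill", "on Friday"), "wrong boundaries"),
    (1, ("the bill", "passed"), "uninformative extraction"),
    (2, ("it", "rained", "heavily"), "wrong boundaries"),
]

counts = Counter(error_class for _, _, error_class in judgements)
print(counts.most_common())
# [('wrong boundaries', 2), ('uninformative extraction', 1)]
```

Ranking the classes this way is what lets a benchmark point at the most damaging failure modes rather than just reporting an aggregate score.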
Graphene: Semantically-Linked Propositions in Open Information Extraction
We present an Open Information Extraction (IE) approach that uses a
two-layered transformation stage consisting of a clausal disembedding layer and
a phrasal disembedding layer, together with rhetorical relation identification.
In that way, we convert sentences that present a complex linguistic structure
into simplified, syntactically sound sentences, from which we can extract
propositions that are represented in a two-layered hierarchy in the form of
core relational tuples and accompanying contextual information, which are
semantically linked via rhetorical relations. In a comparative evaluation, we
demonstrate that our reference implementation Graphene outperforms
state-of-the-art Open IE systems in the construction of correct n-ary
predicate-argument structures. Moreover, we show that existing Open IE
approaches can benefit from the transformation process of our framework.
Comment: 27th International Conference on Computational Linguistics (COLING 2018)
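The two-layered hierarchy described above (core relational tuples plus contextual tuples attached via rhetorical relations) can be pictured as a small data model. The classes, relation labels, and example sentence below are illustrative guesses, not Graphene's actual API.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class RelTuple:
    arg1: str
    relation: str
    arg2: str

@dataclass
class Proposition:
    core: RelTuple                                          # core relational tuple
    contexts: List[Tuple[str, RelTuple]] = field(default_factory=list)
    # each context = (rhetorical relation label, contextual tuple)

# Hypothetical analysis of:
# "Although the mission was risky, the probe landed, which delighted NASA."
prop = Proposition(
    core=RelTuple("the probe", "landed", ""),
    contexts=[
        ("CONTRAST", RelTuple("the mission", "was", "risky")),
        ("RESULT", RelTuple("this", "delighted", "NASA")),
    ],
)
print([rel for rel, _ in prop.contexts])  # ['CONTRAST', 'RESULT']
```

Keeping the contexts attached, rather than emitting three disconnected tuples, is what preserves the semantic relationship between the extractions.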
Treebank-based acquisition of wide-coverage, probabilistic LFG resources: project overview, results and evaluation
This paper presents an overview of a project to acquire wide-coverage, probabilistic Lexical-Functional Grammar
(LFG) resources from treebanks. Our approach is based on an automatic annotation algorithm that annotates "raw" treebank trees with LFG f-structure information approximating to basic predicate-argument/dependency structure. From the f-structure-annotated treebank
we extract probabilistic unification grammar resources. We present the annotation algorithm, the extraction of
lexical information and the acquisition of wide-coverage and robust PCFG-based LFG approximations including
long-distance dependency resolution.
We show how the methodology can be applied to multilingual, treebank-based unification grammar acquisition. Finally
we show how simple (quasi-)logical forms can be derived automatically from the f-structures generated for the treebank trees.
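The core idea of annotating treebank trees with f-structure information can be illustrated with a toy recursive walk that turns functional tags into attribute-value pairs. The head rules, tags, and tree encoding below are drastically simplified inventions, not the project's actual annotation algorithm.

```python
# Toy f-structure annotation sketch (illustrative, not the real algorithm).
tree = ("S", [("NP-SBJ", [("NN", "John")]),
              ("VP", [("VB", "sees"),
                      ("NP-OBJ", [("NN", "Mary")])])])

def head_word(node):
    label, children = node
    if isinstance(children, str):        # leaf: (POS, word)
        return children
    return head_word(children[-1])       # naive right-most head rule

def annotate(node, fs=None):
    fs = {} if fs is None else fs
    label, children = node
    if isinstance(children, str):
        return fs
    for child in children:
        tag = child[0]
        if tag == "NP-SBJ":
            fs["SUBJ"] = {"PRED": head_word(child)}
        elif tag == "NP-OBJ":
            fs["OBJ"] = {"PRED": head_word(child)}
        elif tag.startswith("VB"):
            fs["PRED"] = head_word(child)
        else:
            annotate(child, fs)          # descend through VP etc.
    return fs

print(annotate(tree))
# {'SUBJ': {'PRED': 'John'}, 'PRED': 'sees', 'OBJ': {'PRED': 'Mary'}}
```

The real algorithm handles coordination, long-distance dependencies, and unification constraints, but the mapping from configurational positions to grammatical functions is the same in spirit.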
Extraction of Transcript Diversity from Scientific Literature
Transcript diversity generated by alternative splicing and associated mechanisms contributes heavily to the functional complexity of biological systems. The numerous examples of the mechanisms and functional implications of these events are scattered throughout the scientific literature. Thus, it is crucial to have a tool that can automatically extract the relevant facts and collect them in a knowledge base that can aid the interpretation of data from high-throughput methods. We have developed and applied a composite text-mining method for extracting information on transcript diversity from the entire MEDLINE database in order to create a database of genes with alternative transcripts. It contains information on tissue specificity, number of isoforms, causative mechanisms, functional implications, and experimental methods used for detection. We have mined this resource to identify 959 instances of tissue-specific splicing. Our results in combination with those from EST-based methods suggest that alternative splicing is the preferred mechanism for generating transcript diversity in the nervous system. We provide new annotations for 1,860 genes with the potential for generating transcript diversity. We assign the MeSH term "alternative splicing" to 1,536 additional abstracts in the MEDLINE database and suggest new MeSH terms for other events. We have successfully extracted information about transcript diversity and semiautomatically generated a database, LSAT, that can provide a quantitative understanding of the mechanisms behind tissue-specific gene expression. LSAT (Literature Support for Alternative Transcripts) is publicly available at http://www.bork.embl.de/LSAT/
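At its simplest, pattern-based extraction of the kind described above scans abstracts for lexical patterns asserting tissue-specific splicing of a gene. The regex, gene-name heuristic, and example sentences below are invented for the sketch; the actual LSAT pipeline is a far more elaborate composite method.

```python
import re

# Hypothetical mini-extractor for tissue-specific splicing statements.
PATTERN = re.compile(
    r"(?P<gene>[A-Z][A-Za-z0-9-]+) (?:is|undergoes) alternative(?:ly)? "
    r"splic\w+.*?\bin (?P<tissue>[a-z]+)"
)

abstracts = [
    "Tau undergoes alternative splicing predominantly in brain tissue.",
    "No splicing events were reported for this locus.",
]

hits = [(m["gene"], m["tissue"]) for text in abstracts
        if (m := PATTERN.search(text))]
print(hits)  # [('Tau', 'brain')]
```

Applied across all of MEDLINE, tallying such (gene, tissue) hits is what yields quantitative statements like the 959 instances of tissue-specific splicing reported above.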
Graphene: A Context-Preserving Open Information Extraction System
We introduce Graphene, an Open IE system whose goal is to generate accurate,
meaningful and complete propositions that may facilitate a variety of
downstream semantic applications. For this purpose, we transform syntactically
complex input sentences into clean, compact structures in the form of core
facts and accompanying contexts, while identifying the rhetorical relations
that hold between them in order to maintain their semantic relationship. In
that way, we preserve the context of the relational tuples extracted from a
source sentence, generating a novel lightweight semantic representation for
Open IE that enhances the expressiveness of the extracted propositions.
Comment: 27th International Conference on Computational Linguistics (COLING 2018)
Treebank-based acquisition of a Chinese lexical-functional grammar
Scaling wide-coverage, constraint-based grammars such as Lexical-Functional Grammars (LFG) (Kaplan and Bresnan, 1982; Bresnan, 2001) or Head-Driven Phrase Structure Grammars (HPSG) (Pollard and Sag, 1994) from fragments to naturally occurring unrestricted text is knowledge-intensive, time-consuming and (often prohibitively) expensive. A number of researchers have recently presented methods to automatically acquire wide-coverage, probabilistic constraint-based grammatical resources from treebanks (Cahill et al., 2002; Cahill et al., 2003; Cahill et al., 2004; Miyao et al., 2003; Miyao et al., 2004; Hockenmaier and Steedman, 2002; Hockenmaier, 2003), addressing the knowledge acquisition bottleneck in constraint-based grammar development. Research to date has concentrated on English and German. In this paper we report on an experiment to induce wide-coverage, probabilistic LFG grammatical and lexical resources for Chinese from the Penn Chinese Treebank (CTB) (Xue et al., 2002) based on an automatic f-structure annotation algorithm. Currently 96.751% of the CTB trees receive a single, covering and connected f-structure, 0.112% do not receive an f-structure due to feature clashes, while 3.137% are associated with multiple f-structure fragments. From the f-structure-annotated CTB we extract a total of 12,975 lexical entries with 20 distinct subcategorisation frame types. Of these, 3,436 are verbal entries with a total of 11 different frame types. We extract a number of PCFG-based LFG approximations.
Currently, our best automatically induced grammars achieve an f-score of 81.57% against the trees in unseen articles 301-325; 86.06% f-score (all grammatical functions) and 73.98% (preds-only) against the dependencies derived from the f-structures automatically generated for the original trees in 301-325; and 82.79% (all grammatical functions) and 67.74% (preds-only) against the dependencies derived from the manually annotated gold-standard f-structures for 50 trees randomly selected from articles 301-325.
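The dependency f-scores quoted above compare generated f-structure dependencies against gold dependencies as sets of triples. A worked toy example makes the metric concrete; the triples themselves are invented for illustration.

```python
# Worked example of a dependency-triple f-score (data invented).
def f_score(gold, test):
    tp = len(gold & test)                 # triples found in both sets
    precision = tp / len(test)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

gold = {("subj", "build", "China"), ("obj", "build", "factory"),
        ("adjunct", "build", "rapidly")}
test = {("subj", "build", "China"), ("obj", "build", "factory")}

print(round(f_score(gold, test), 2))  # 0.8
```

Here precision is 2/2 and recall is 2/3, giving f = 0.8; the preds-only variant simply restricts both sets to triples headed by PRED values before scoring.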