End-to-end neural relation extraction using deep biaffine attention
We propose a neural network model for joint extraction of named entities and
relations between them, without any hand-crafted features. The key contribution
of our model is to extend a BiLSTM-CRF-based entity recognition model with a
deep biaffine attention layer to model second-order interactions between latent
features for relation classification, specifically attending to the role of an
entity in a directional relationship. On the benchmark "relation and entity
recognition" dataset CoNLL04, experimental results show that our model
outperforms previous models, producing new state-of-the-art performance.
Comment: Proceedings of the 41st European Conference on Information Retrieval
(ECIR 2019), to appear.
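The deep biaffine scoring described above can be sketched in a few lines. The shapes, parameter names, and the einsum formulation below are illustrative stand-ins, not the paper's actual implementation:

```python
import numpy as np

def biaffine_score(h_head, h_tail, U, W, b):
    """Score one directed (head, tail) entity pair against r relation labels.

    Hypothetical shapes: h_head, h_tail are (d,) latent feature vectors,
    U is (r, d, d) for the bilinear (second-order) term, W is (r, 2*d)
    for the linear term, and b is (r,) biases.
    """
    bilinear = np.einsum("i,rij,j->r", h_head, U, h_tail)  # second-order interactions
    linear = W @ np.concatenate([h_head, h_tail])          # first-order term
    return bilinear + linear + b

rng = np.random.default_rng(0)
d, r = 4, 3
scores = biaffine_score(rng.normal(size=d), rng.normal(size=d),
                        rng.normal(size=(r, d, d)),
                        rng.normal(size=(r, 2 * d)),
                        np.zeros(r))
print(scores.shape)  # one score per relation label: (3,)
```

Because U is not symmetric, swapping head and tail generally changes the score, which is how the scorer attends to an entity's role in a directional relationship.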
A Relational Triple Extraction Method Based on Feature Reasoning for Technological Patents
Relation triple extraction methods based on table filling can address the
issues of relation overlap and error propagation. However, most of them build
a separate table of features for each relation, ignoring the implicit
connections between different entity pairs and between different relation
features. We therefore propose a feature-reasoning relational triple
extraction method based on table filling for technological patents, which
integrates entity recognition and relation extraction to extract
entity-relation triples from multi-source scientific and technological patent
data. Compared with previous methods, our approach has two advantages: 1) the
table-filling scheme uses less memory, improving the speed and efficiency of
the model; 2) it reasons over the features of existing token pairs and table
relations to infer implicit relation features, improving the accuracy of
triple extraction. We evaluated the proposed model on five benchmark datasets;
the results suggest that it is effective and competitive, performing well on
most of them.
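The table-filling formulation above can be illustrated with a toy decode: a token-pair table whose cell (i, j) links the entity headed at token i to the one at j. The labels and the decoding routine here are illustrative, not the paper's actual tag set or algorithm:

```python
import numpy as np

# Toy (n_tokens x n_tokens) table; 0 means "no relation".
tokens = ["Apple", "acquired", "Shazam", "in", "London"]
NONE, ACQUIRED, LOCATED_IN = 0, 1, 2
table = np.zeros((5, 5), dtype=int)
table[0, 2] = ACQUIRED      # (Apple, acquired, Shazam)
table[2, 4] = LOCATED_IN    # (Shazam, located_in, London)

def decode_triples(table, tokens, label_names):
    """Read every non-empty cell back out as a (head, relation, tail) triple."""
    triples = []
    for i, j in zip(*np.nonzero(table)):
        triples.append((tokens[i], label_names[table[i, j]], tokens[j]))
    return triples

triples = decode_triples(table, tokens, {ACQUIRED: "acquired", LOCATED_IN: "located_in"})
print(triples)
# [('Apple', 'acquired', 'Shazam'), ('Shazam', 'located_in', 'London')]
```

Because each relation occupies its own cell, two triples sharing an entity (as "Shazam" does here) coexist in one table, which is how table filling sidesteps the relation-overlap problem.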
A Boundary Determined Neural Model for Relation Extraction
Existing models extract entity relations only after two entity spans have been precisely extracted, which limits the performance of relation extraction. Compared with recognizing full entity spans, a boundary has smaller granularity and less ambiguity, so it can be detected precisely and incorporated to learn a better representation. Motivated by these strengths, we propose a boundary-determined neural (BDN) model, which leverages boundaries as task-related cues to predict relation labels. Our model predicts high-quality relation instances via pairs of boundaries, which relieves the error-propagation problem. Moreover, it fuses boundary-relevant information into the encoded distributed representation, improving its ability to capture semantic and dependency information and increasing the discriminability of the neural network. Experiments show that our model achieves state-of-the-art performance on the ACE05 corpus.
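One way to read the boundary-driven candidate generation above is as pairing detected start/end boundaries into spans, then pairing spans into relation candidates. This sketch is our reading of the abstract, not the paper's exact algorithm; the boundary indices are made up:

```python
from itertools import product

# Illustrative boundary detections: token indices where entities may
# start or end (hypothetical values).
starts, ends = [0, 3], [1, 4]

# Pair boundaries into candidate entity spans (start must not follow end).
spans = [(s, e) for s, e in product(starts, ends) if s <= e]

# Pair distinct spans into ordered relation candidates for the classifier.
pairs = [(a, b) for a, b in product(spans, spans) if a != b]

print(spans)       # [(0, 1), (0, 4), (3, 4)]
print(len(pairs))  # 6
```

Since every candidate is built directly from boundary predictions, a single mis-typed span label no longer rules a relation instance out, which is the error-propagation relief the abstract describes.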
Neural architectures for open-type relation argument extraction
In this work, we focus on the task of open-type relation argument extraction (ORAE): given a corpus, a query entity Q, and a knowledge base relation (e.g., “Q authored notable work with title X”), the model has to extract an argument of non-standard entity type (an entity that cannot be extracted by a standard named entity tagger, for example, X: the title of a book or a work of art) from the corpus. We develop and compare a wide range of neural models for this task, yielding large improvements over a strong baseline obtained with a neural question answering system. The impact of different sentence encoding architectures and answer extraction methods is systematically compared. An encoder based on gated recurrent units combined with a conditional random fields tagger yields the best results. We release a dataset to train and evaluate ORAE, based on Wikidata and obtained by distant supervision.
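The CRF tagging stage of such a GRU+CRF extractor is decoded with the Viterbi algorithm. The sketch below shows that decoding step only; the emission and transition scores are hand-picked stand-ins for the learned parameters, and the BIO tag set is illustrative:

```python
import numpy as np

def viterbi(emissions, transitions):
    """Viterbi decoding for a linear-chain CRF tagger.

    emissions: (T, K) per-token tag scores from the encoder (e.g. a GRU);
    transitions: (K, K) tag-to-tag transition scores.
    Returns the highest-scoring tag sequence as a list of tag ids.
    """
    T, K = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # cand[i, j] = best score ending in tag i, then moving to tag j.
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Tags: 0=O, 1=B-ARG, 2=I-ARG; emissions favour tokens 1-2 as the argument span.
em = np.array([[2., 0., 0.], [0., 2., 0.], [0., 0., 2.], [2., 0., 0.]])
tr = np.zeros((3, 3))
tr[0, 2] = -5.0  # penalise I-ARG directly after O (invalid BIO transition)
print(viterbi(em, tr))  # [0, 1, 2, 0]
```

The transition penalty is what the CRF adds over per-token argmax: it keeps the decoded argument span well-formed even when individual token scores are noisy.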