Introduction to the CoNLL-2002 Shared Task: Language-Independent Named Entity Recognition
We describe the CoNLL-2002 shared task: language-independent named entity
recognition. We give background information on the data sets and the evaluation
method, present a general overview of the systems that have taken part in the
task and discuss their performance.
Comment: 4 pages
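The evaluation method used in the shared task scores systems on exact entity matches rather than per-token tag accuracy. A minimal sketch of that entity-level F1 over BIO tag sequences might look like the following (the official metric is computed by the conlleval script; the helper names here are ours):

```python
def extract_entities(tags):
    """Collect (start, end, type) spans from a BIO tag sequence.
    An I- tag without a preceding B- starts a new entity, roughly
    mirroring conlleval's lenient behaviour."""
    entities, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel flushes the last entity
        if start is not None and (tag == "O" or tag.startswith("B-")
                                  or tag[2:] != etype):
            entities.append((start, i, etype))
            start, etype = None, None
        if tag.startswith("B-") or (tag.startswith("I-") and start is None):
            start, etype = i, tag[2:]
    return set(entities)

def entity_f1(gold, pred):
    """F1 over exact entity matches: a predicted entity counts only if
    its span and type both agree with a gold entity."""
    g, p = extract_entities(gold), extract_entities(pred)
    tp = len(g & p)
    prec = tp / len(p) if p else 0.0
    rec = tp / len(g) if g else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```

Note that a system can have high token-level accuracy yet low entity F1, since a single boundary error invalidates the whole entity.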
An attentive neural architecture for joint segmentation and parsing and its application to real estate ads
In processing human-produced text using natural language processing (NLP)
techniques, two fundamental subtasks that arise are (i) segmentation of the
plain text into meaningful subunits (e.g., entities), and (ii) dependency
parsing, to establish relations between subunits. In this paper, we develop a
relatively simple and effective neural joint model that performs both
segmentation and dependency parsing together, instead of one after the other as
in most state-of-the-art works. We will focus in particular on the real estate
ad setting, aiming to convert an ad to a structured description, which we name
property tree, comprising the tasks of (1) identifying important entities of a
property (e.g., rooms) from classifieds and (2) structuring them into a tree
format. In this work, we propose a new joint model that is able to tackle the
two tasks simultaneously and construct the property tree by (i) avoiding the
error propagation that would arise from performing the subtasks one after the other in a
pipelined fashion, and (ii) exploiting the interactions between the subtasks.
For this purpose, we perform an extensive comparative study of the pipeline
methods and the new proposed joint model, reporting an improvement of over
three percentage points in the overall edge F1 score of the property tree.
Also, we propose attention methods, to encourage our model to focus on salient
tokens during the construction of the property tree. Thus we experimentally
demonstrate the usefulness of attentive neural architectures for the proposed
joint model, showcasing a further improvement of two percentage points in edge
F1 score for our application.
Comment: Preprint - Accepted for publication in Expert Systems with Applications
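The attention mechanism credited above with a further two-point edge F1 gain can be sketched generically as an additive scorer over token encodings; the parameter shapes and names below are illustrative assumptions, not the authors' architecture:

```python
import numpy as np

def additive_attention(H, q, W1, W2, v):
    """Additive attention: score each token encoding h_i against a query q
    as v . tanh(W1 h_i + W2 q), softmax the scores, and return the weights
    together with the weighted context vector.
    H: (n_tokens, d) token encodings; q: (d,) query;
    W1, W2: (k, d) projections; v: (k,) scoring vector."""
    scores = np.tanh(H @ W1.T + W2 @ q) @ v   # (n_tokens,)
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    context = weights @ H                     # (d,) salience-weighted summary
    return weights, context
```

The weights make explicit which tokens the model treats as salient when attaching an entity into the tree, which is the behaviour the abstract aims to encourage.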
Applying Stacking and Corpus Transformation to a Chunking Task
In this paper we present an application of the stacking technique
to a chunking task: named entity recognition. Stacking consists of
applying machine learning techniques for combining the results of different
models. Instead of using several corpora or several tagger generators
to obtain the models needed in stacking, we have applied three transformations
to a single training corpus and then we have used the four versions
of the corpus to train a single tagger generator. Taking as baseline
the results obtained with the original corpus (Fβ=1 value of 81.84), our
experiments show that the three transformations improve this baseline
(the best one reaches 84.51), and that applying stacking also improves
this baseline, reaching an Fβ=1 measure of 88.43.
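The stacking step above, which learns how to merge the four taggers' outputs, can be sketched with a simple lookup-table meta-learner: for each combination of base-model tags seen in training, memorise the most frequent gold tag. This is an illustrative combiner, not the paper's exact learner:

```python
from collections import Counter, defaultdict

def train_stacker(base_preds, gold):
    """Learn a combiner over base-model outputs.
    base_preds: list of tag sequences, one per base model, token-aligned.
    gold: the gold tag sequence for the same tokens.
    Returns a table mapping each tuple of base tags to its majority gold tag."""
    table = defaultdict(Counter)
    for i, g in enumerate(gold):
        key = tuple(p[i] for p in base_preds)
        table[key][g] += 1
    return {k: c.most_common(1)[0][0] for k, c in table.items()}

def apply_stacker(table, base_preds, fallback="O"):
    """Tag each token by looking up the tuple of base-model predictions;
    unseen combinations fall back to the outside tag."""
    n = len(base_preds[0])
    return [table.get(tuple(p[i] for p in base_preds), fallback)
            for i in range(n)]
```

The meta-learner can thus recover the correct tag even when some base models disagree, which is where the gain over any single transformed corpus comes from.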