Auxiliary Objectives for Neural Error Detection Models
We investigate the utility of different auxiliary objectives and training strategies within a neural sequence-labelling approach to error detection in learner writing. Auxiliary costs provide the model with additional linguistic information, allowing it to learn general-purpose compositional features that can then be exploited for other objectives. Our experiments show that a joint learning approach trained with parallel labels on in-domain data improves performance over the previous best error detection system. While the resulting model has the same number of parameters, the additional objectives allow it to be optimised more efficiently and achieve better performance.
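
As a rough sketch of the joint learning setup described above: a shared encoder feeds one output head per objective, and the losses are combined during training. The PyTorch framing below is an assumption (the abstract does not name a framework), and the POS-tagging auxiliary task, the layer sizes, and the 0.1 loss weight are illustrative choices, not values from the paper.

    # Minimal multi-task sequence-labelling sketch (illustrative, not the authors' code).
    import torch
    import torch.nn as nn

    class MultiTaskTagger(nn.Module):
        def __init__(self, vocab_size, embed_dim=100, hidden_dim=100,
                     n_error_labels=2, n_aux_labels=17):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            # Shared BiLSTM encoder used by both objectives.
            self.encoder = nn.LSTM(embed_dim, hidden_dim,
                                   batch_first=True, bidirectional=True)
            # Main objective: per-token correct/incorrect labels.
            self.error_head = nn.Linear(2 * hidden_dim, n_error_labels)
            # Auxiliary objective (assumed here to be POS tagging).
            self.aux_head = nn.Linear(2 * hidden_dim, n_aux_labels)

        def forward(self, tokens):
            states, _ = self.encoder(self.embed(tokens))
            return self.error_head(states), self.aux_head(states)

    model = MultiTaskTagger(vocab_size=10000)
    loss_fn = nn.CrossEntropyLoss()
    tokens = torch.randint(0, 10000, (8, 20))   # dummy batch of token ids
    error_gold = torch.randint(0, 2, (8, 20))   # dummy error labels
    aux_gold = torch.randint(0, 17, (8, 20))    # dummy auxiliary labels

    error_logits, aux_logits = model(tokens)
    # Joint loss: auxiliary cost added with an illustrative 0.1 weight.
    loss = (loss_fn(error_logits.view(-1, 2), error_gold.view(-1))
            + 0.1 * loss_fn(aux_logits.view(-1, 17), aux_gold.view(-1)))
    loss.backward()

Because both heads read the same encoder states, the auxiliary labels shape the shared representations without adding parameters to the main error-detection path, which matches the abstract's point that the jointly trained model keeps the same parameter count.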
Neural Sequence-Labelling Models for Grammatical Error Correction
We propose an approach to N-best list reranking using neural sequence-labelling models. We train a compositional model for error detection that calculates the probability of each token in a sentence being correct or incorrect, utilising the full sentence as context. Using the error detection model, we then re-rank the N best hypotheses generated by statistical machine translation systems. Our approach achieves state-of-the-art results on error correction for three different datasets, and it has the additional advantage of only using a small set of easily computed features that require no linguistic input.