Energy-based Self-attentive Learning of Abstractive Communities for Spoken Language Understanding
Abstractive community detection is an important spoken language understanding
task, whose goal is to group utterances in a conversation according to whether
they can be jointly summarized by a common abstractive sentence. This paper
presents a novel approach to this task. We first introduce a neural contextual
utterance encoder featuring three types of self-attention mechanisms. We then
train it using the Siamese and triplet energy-based meta-architectures.
Experiments on the AMI corpus show that our system outperforms multiple
state-of-the-art energy-based and non-energy-based baselines. Code and data
are publicly available.
Comment: Update baseline
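A minimal sketch of what such energy-based triplet training could look like, in PyTorch; the encoder, dimensions, and data here are illustrative assumptions, not the paper's actual architecture:

    import torch
    import torch.nn as nn

    # Hypothetical self-attentive pooling encoder (illustrative only).
    class SelfAttentiveEncoder(nn.Module):
        def __init__(self, embed_dim=128):
            super().__init__()
            self.attn = nn.Linear(embed_dim, 1)

        def forward(self, tokens):  # tokens: (batch, seq_len, embed_dim)
            weights = torch.softmax(self.attn(tokens), dim=1)
            return (weights * tokens).sum(dim=1)  # (batch, embed_dim)

    encoder = SelfAttentiveEncoder()
    triplet_loss = nn.TripletMarginLoss(margin=1.0)

    # Anchor/positive would be utterances from the same abstractive community,
    # negative from a different one; random tensors stand in for real data.
    anchor, positive, negative = (torch.randn(4, 20, 128) for _ in range(3))
    loss = triplet_loss(encoder(anchor), encoder(positive), encoder(negative))
    loss.backward()

The triplet margin objective pulls same-community utterance embeddings together and pushes different-community embeddings apart, which is the general idea behind energy-based meta-architectures of this kind.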
Named Entity Recognition in Electronic Health Records Using Transfer Learning Bootstrapped Neural Networks
Neural networks (NNs) have become the state of the art in many machine
learning applications, especially in image and sound processing [1]. The same,
although to a lesser extent [2,3], could be said of natural language processing
(NLP) tasks, such as named entity recognition. However, the success of NNs
remains dependent on the availability of large labelled datasets, which is a
significant hurdle in many important applications. One such case is electronic
health records (EHRs), which are arguably the largest source of medical data,
most of which lies hidden in natural text [4,5]. Data access is difficult due
to data privacy concerns, and therefore annotated datasets are scarce. With
scarce data, NNs will likely not be able to extract this hidden information
with practical accuracy. In our study, we develop an approach that solves these
problems for named entity recognition, obtaining a 94.6 F1 score in the I2B2
2009 Medical Extraction Challenge [6], 4.3 points above the architecture that
won the competition. Beyond the official I2B2 challenge, we further achieve
82.4 F1 on extracting relationships between medical terms. To reach this
state-of-the-art accuracy, our approach applies transfer learning to leverage
datasets annotated for other I2B2 tasks, and designs and trains embeddings
that particularly benefit from such transfer.
Comment: 11 pages, 4 figures, 8 tables
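As a rough illustration of the transfer-learning idea (pretrain on a related annotated task, then reuse the learned embeddings and encoder on the low-resource target task), here is a PyTorch sketch; the BiLSTM tagger, layer names, and sizes are assumptions for illustration, not the paper's model:

    import torch
    import torch.nn as nn

    # Hypothetical BiLSTM tagger; architecture and sizes are illustrative.
    class Tagger(nn.Module):
        def __init__(self, vocab_size=5000, embed_dim=100, hidden=128, n_tags=9):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True,
                                bidirectional=True)
            self.out = nn.Linear(2 * hidden, n_tags)

        def forward(self, x):  # x: (batch, seq_len) of token ids
            h, _ = self.lstm(self.embed(x))
            return self.out(h)  # per-token tag logits

    # 1) Pretrain on an auxiliary annotated task (e.g. another I2B2 task).
    source = Tagger(n_tags=13)
    # ... source-task training loop omitted ...

    # 2) Transfer: copy the embeddings and encoder into the target-task model,
    #    then fine-tune everything on the smaller target dataset.
    target = Tagger(n_tags=9)
    target.embed.load_state_dict(source.embed.state_dict())
    target.lstm.load_state_dict(source.lstm.state_dict())
    logits = target(torch.randint(0, 5000, (2, 12)))  # (2, 12, 9)

Only the task-specific output layer is reinitialized here; the shared layers carry over whatever the auxiliary annotations taught them.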
Multimodal Grounding for Sequence-to-Sequence Speech Recognition
Humans are capable of processing speech by making use of multiple sensory
modalities. For example, the environment where a conversation takes place
generally provides semantic and/or acoustic context that helps us to resolve
ambiguities or to recall named entities. Motivated by this, there have been
many works studying the integration of visual information into the speech
recognition pipeline. Specifically, in our previous work, we proposed a
multistep visual adaptive training approach that improves the accuracy of an
audio-based Automatic Speech Recognition (ASR) system. This approach, however,
is not end-to-end as it requires fine-tuning the whole model with an adaptation
layer. In this paper, we propose novel end-to-end multimodal ASR systems and
compare them to the adaptive approach by using a range of visual
representations obtained from state-of-the-art convolutional neural networks.
We show that adaptive training is effective for sequence-to-sequence (S2S)
models, leading to an absolute improvement of 1.4% in word error rate. As for
the end-to-end systems, although they perform better than the baseline, the
improvements are slightly smaller than with adaptive training: a 0.8% absolute
WER reduction in single-best models. Using ensemble decoding, end-to-end models
reach a WER of 15%, which is the lowest score among all systems.
Comment: ICASSP 2019
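To make the multimodal fusion idea concrete, here is a small PyTorch sketch of one plausible end-to-end integration point: projecting a CNN image feature and adding it to every acoustic encoder state before attention and decoding. The module, dimensions, and fusion point are assumptions, not the systems evaluated in the paper:

    import torch
    import torch.nn as nn

    # Hypothetical early-fusion module: project a visual feature vector and
    # add it to each acoustic encoder state (one of several possible designs).
    class VisualFusion(nn.Module):
        def __init__(self, audio_dim=256, visual_dim=2048):
            super().__init__()
            self.proj = nn.Linear(visual_dim, audio_dim)

        def forward(self, enc_states, visual_feat):
            # enc_states: (batch, time, audio_dim)
            # visual_feat: (batch, visual_dim)
            return enc_states + self.proj(visual_feat).unsqueeze(1)

    fusion = VisualFusion()
    enc_states = torch.randn(8, 100, 256)    # acoustic encoder outputs
    visual_feat = torch.randn(8, 2048)       # e.g. pooled CNN image features
    fused = fusion(enc_states, visual_feat)  # passed on to attention/decoder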