Learning to Compose Task-Specific Tree Structures
For years, recursive neural networks (RvNNs) have been shown to be suitable
for representing text as fixed-length vectors and have achieved good
performance on several natural language processing tasks. However, the main
drawback of RvNNs is that they require structured input, which makes data
preparation and model implementation hard. In this paper, we propose Gumbel
Tree-LSTM, a novel tree-structured long short-term memory architecture that
efficiently learns how to compose task-specific tree structures from plain
text data alone. Our model uses the Straight-Through Gumbel-Softmax estimator
to dynamically decide the parent node among candidates and to compute
gradients of the discrete decision. We evaluate the proposed model on natural
language inference and sentiment analysis, and show that our model outperforms
or is at least comparable to previous models. We also find that our model
converges significantly faster than other models.
Comment: AAAI 2018
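The abstract hinges on the Straight-Through Gumbel-Softmax estimator, so here is a minimal PyTorch sketch of that trick: a hard one-hot choice in the forward pass with gradients flowing through the soft sample in the backward pass. The function name, tensor shapes, and toy objective are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def st_gumbel_softmax(logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Straight-Through Gumbel-Softmax: one-hot forward, soft gradients backward."""
    # Sample Gumbel(0, 1) noise and perturb the logits.
    gumbels = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    y_soft = F.softmax((logits + gumbels) / tau, dim=-1)
    # Hard one-hot of the argmax (the discrete decision).
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(logits).scatter_(-1, index, 1.0)
    # Straight-through: value equals y_hard, gradient comes from y_soft.
    return y_hard + (y_soft - y_soft.detach())

# Hypothetical usage: score 4 candidate parent nodes and pick one differentiably.
scores = torch.randn(2, 4, requires_grad=True)   # toy composition scores
choice = st_gumbel_softmax(scores, tau=0.5)      # one-hot in the forward pass
loss = (choice * torch.randn(4)).sum()           # toy downstream objective
loss.backward()                                  # gradients reach `scores` via the soft sample
```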
Data Augmentation for Spoken Language Understanding via Joint Variational Generation
Data scarcity is one of the main obstacles to domain adaptation in spoken
language understanding (SLU), owing to the high cost of creating manually
tagged SLU datasets. Recent work on neural text generative models,
particularly latent variable models such as the variational autoencoder (VAE),
has shown promising results in generating plausible and natural sentences. In
this paper, we propose a novel generative architecture that leverages the
generative power of latent variable models to jointly synthesize fully
annotated utterances. Our experiments show that existing SLU models trained
with the additional synthetic examples achieve performance gains. Our approach
not only helps alleviate the data scarcity issue in the SLU task for many
datasets but also consistently improves language understanding performance
across various SLU models, as supported by extensive experiments and rigorous
statistical testing.
Comment: 8 pages, 3 figures, 4 tables; accepted at AAAI 2019
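The key ingredient here is a latent variable model whose decoded samples serve as synthetic training data. Below is a minimal PyTorch VAE sketch showing the reparameterization trick and prior sampling that such augmentation relies on; TinyVAE, its layer sizes, and the toy inputs are hypothetical stand-ins, not the paper's joint generation architecture.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, in_dim: int = 64, z_dim: int = 16):
        super().__init__()
        self.enc = nn.Linear(in_dim, 2 * z_dim)  # outputs mean and log-variance
        self.dec = nn.Linear(z_dim, in_dim)

    def forward(self, x: torch.Tensor):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        # Reparameterization trick: sample z differentiably from N(mu, sigma^2).
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Squared-error reconstruction plus KL divergence to the standard normal prior.
    recon_term = ((recon - x) ** 2).sum(dim=-1).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
    return recon_term + kl

model = TinyVAE()
x = torch.randn(8, 64)                       # stand-in for encoded utterances
recon, mu, logvar = model(x)
vae_loss(x, recon, mu, logvar).backward()
# After training, synthetic examples come from decoding prior samples:
synthetic = model.dec(torch.randn(8, 16))
```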
