Adversarial Variational Embedding for Robust Semi-supervised Learning
Semi-supervised learning seeks to leverage unlabelled data when labelled data is difficult or expensive to acquire. Deep generative models (e.g., the Variational Autoencoder (VAE)) and semi-supervised Generative Adversarial Networks (GANs) have recently shown promising performance in semi-supervised classification owing to their strong discriminative representation ability. However, the latent code learned by a traditional VAE is not exclusive (repeatable) for a specific input sample, which prevents it from achieving strong classification performance. In particular, the learned latent representation depends on a non-exclusive component that is stochastically sampled from the prior distribution. Moreover, semi-supervised GAN models generate data from a pre-defined distribution (e.g., Gaussian noise) that is independent of the input data distribution, which may hinder convergence and makes the distribution of the generated data difficult to control. To address these issues, we propose a novel Adversarial Variational Embedding (AVAE) framework for robust and effective semi-supervised learning that combines the advantages of the GAN as a high-quality generative model and the VAE as a posterior distribution learner. The proposed approach first produces an exclusive latent code with a model we call VAE++, which at the same time provides a meaningful prior distribution for the GAN generator. The approach is evaluated on four different real-world applications, and we show that our method outperforms state-of-the-art models, confirming that the combination of VAE++ and GAN yields significant improvements in semi-supervised classification.
Comment: 9 pages, Accepted by Research Track in KDD 201
TACAM: Topic And Context Aware Argument Mining
In this work we address the problem of argument search. The purpose of
argument search is the distillation of pro and contra arguments for requested
topics from large text corpora. In previous work, the usual approach is to use a standard search engine to extract text passages that are relevant to the given topic and then apply an argument recognition algorithm to select arguments from them. The main challenge in the argument recognition task, which
is also known as argument mining, is that often sentences containing arguments
are structurally similar to purely informative sentences without any stance
about the topic. In fact, they only differ semantically. Most approaches use
topic or search term information only for the first search step and therefore
assume that arguments can be classified independently of a topic. We argue that
topic information is crucial for argument mining, since the topic defines the
semantic context of an argument. Specifically, we propose different models for the classification of arguments that take information about the topic of an argument into account. Moreover, to enrich the context of a topic and to help the models better understand the context of a potential argument, we integrate information from different external sources such as knowledge graphs or pre-trained NLP models. Our evaluation shows that considering topic information, especially in combination with external information, provides a significant performance boost for the argument mining task.
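To make the topic-aware classification idea concrete, here is a hedged sketch (assumptions, not the paper's model): the candidate sentence and the topic are encoded separately, their representations are concatenated, and a small head predicts pro / contra / no-argument. The paper describes richer topic context from pre-trained NLP models or knowledge graphs; the toy bag-of-embedding encoders, vocabulary size, and dimensions below are placeholders.

```python
# Illustrative topic-aware argument classifier; encoders and sizes are placeholders.
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Averages token embeddings into a fixed-size vector (stand-in for a pre-trained encoder)."""
    def __init__(self, vocab_size=10000, dim=64):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim, mode="mean")

    def forward(self, token_ids):
        return self.emb(token_ids)

class TopicAwareArgumentClassifier(nn.Module):
    def __init__(self, dim=64, n_classes=3):
        super().__init__()
        self.sent_enc = TextEncoder(dim=dim)    # encodes the candidate argument sentence
        self.topic_enc = TextEncoder(dim=dim)   # encodes the topic / search query
        self.head = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(),
                                  nn.Linear(64, n_classes))

    def forward(self, sentence_ids, topic_ids):
        joint = torch.cat([self.sent_enc(sentence_ids),
                           self.topic_enc(topic_ids)], dim=-1)
        return self.head(joint)                 # logits over {pro, contra, no-argument}

model = TopicAwareArgumentClassifier()
sentence = torch.randint(0, 10000, (2, 12))     # toy token ids, batch of 2 sentences
topic = torch.randint(0, 10000, (2, 3))         # toy token ids for the topics
print(model(sentence, topic).shape)             # -> torch.Size([2, 3])
```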
- …