Counterfactual Learning from Bandit Feedback under Deterministic Logging: A Case Study in Statistical Machine Translation
The goal of counterfactual learning for statistical machine translation (SMT)
is to optimize a target SMT system from logged data that consist of user
feedback to translations that were predicted by another, historic SMT system. A
challenge arises from the fact that risk-averse commercial SMT systems
deterministically log the most probable translation. The lack of sufficient
exploration of the SMT output space seemingly contradicts the theoretical
requirements for counterfactual learning. We show that counterfactual learning
from deterministic bandit logs is possible nevertheless by smoothing out
deterministic components in learning. This can be achieved by additive and
multiplicative control variates that avoid degenerate behavior in empirical
risk minimization. Our simulation experiments show improvements of up to 2 BLEU
points by counterfactual learning from deterministic bandit feedback.
Comment: Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017, Copenhagen, Denmark.
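As an illustration of the smoothing the abstract describes, the following is a minimal, hypothetical PyTorch-style sketch of a counterfactual objective over a deterministic log, combining a reward baseline (an additive control variate) with self-normalized reweighting (a multiplicative control variate). The function and variable names (counterfactual_loss, log_probs, rewards) are illustrative and not taken from the paper.

    import torch

    def counterfactual_loss(log_probs, rewards):
        """Sketch of an empirical-risk objective for deterministically logged data.

        log_probs: model log-probabilities pi_w(y_t | x_t) of the logged translations
        rewards:   logged user feedback delta_t for those translations
        Both are 1-D tensors of length n (the log size).
        """
        probs = log_probs.exp()
        # Additive control variate: subtract a baseline (here the mean reward)
        # to reduce variance.
        centered = rewards - rewards.mean()
        # Multiplicative control variate: self-normalize the weights so the
        # objective cannot be trivially improved by inflating all probabilities,
        # which is the degenerate behavior mentioned in the abstract.
        weights = probs / probs.sum()
        # Negative expected reward under the reweighted empirical distribution.
        return -(weights * centered).sum()

In practice the log-probabilities would come from the target SMT/sequence model, and this loss would be minimized with stochastic gradient descent over mini-batches of the log.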
State-Regularized Recurrent Neural Networks to Extract Automata and Explain Predictions
Recurrent neural networks are a widely used class of neural architectures.
They have, however, two shortcomings. First, they are often treated as
black-box models and as such it is difficult to understand what exactly they
learn as well as how they arrive at a particular prediction. Second, they tend
to work poorly on sequences requiring long-term memorization, despite having
this capacity in principle. We aim to address both shortcomings with a class of
recurrent networks that use a stochastic state transition mechanism between
cell applications. This mechanism, which we term state-regularization, makes
RNNs transition between a finite set of learnable states. We evaluate
state-regularized RNNs on (1) regular languages for the purpose of automata
extraction; (2) non-regular languages such as balanced parentheses and
palindromes where external memory is required; and (3) real-world sequence
learning tasks for sentiment analysis, visual object recognition and text
categorisation. We show that state-regularization (a) simplifies the extraction
of finite state automata that display an RNN's state transition dynamic; (b)
forces RNNs to operate more like automata with external memory and less like
finite state machines, which potentially leads to a more structured memory;
(c) leads to better interpretability and explainability of RNNs by leveraging
the probabilistic finite state transition mechanism over time steps.
Comment: To appear in IEEE Transactions on Pattern Analysis and Machine Intelligence. The extended version of State-Regularized Recurrent Neural Networks [arXiv:1901.08817].
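One way to picture the stochastic transition between a finite set of learnable states is the following PyTorch-style sketch. The class name, the use of a GRU cell, and the temperature-controlled softmax over learnable centroid states are assumptions made for illustration rather than the paper's exact formulation.

    import torch
    import torch.nn as nn

    class StateRegularizedCell(nn.Module):
        """Illustrative sketch: an RNN cell whose hidden state is pulled toward
        a finite set of k learnable states after every step."""

        def __init__(self, input_size, hidden_size, num_states, temperature=1.0):
            super().__init__()
            self.cell = nn.GRUCell(input_size, hidden_size)
            # The k learnable "centroid" states the RNN transitions between.
            self.states = nn.Parameter(torch.randn(num_states, hidden_size))
            self.temperature = temperature

        def forward(self, x, h):
            u = self.cell(x, h)                       # ordinary recurrent update
            scores = u @ self.states.t()              # similarity to each learnable state
            alpha = torch.softmax(scores / self.temperature, dim=-1)
            # Probabilistic transition: the new hidden state is a mixture of the
            # learnable states; a low temperature approaches a hard automaton,
            # which is what makes automata extraction straightforward.
            return alpha @ self.states

Reading off the argmax of alpha at every step yields a discrete state sequence, from which a finite state automaton can be extracted.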
Explaining Neural Matrix Factorization with Gradient Rollback
Explaining the predictions of neural black-box models is an important
problem, especially when such models are used in applications where user trust
is crucial. Estimating the influence of training examples on a learned neural
model's behavior allows us to identify training examples most responsible for a
given prediction and, therefore, to faithfully explain the output of a
black-box model. The most generally applicable existing method is based on
influence functions, which scale poorly for larger sample sizes and models.
We propose gradient rollback, a general approach for influence estimation,
applicable to neural models where each parameter update step during gradient
descent touches a smaller number of parameters, even if the overall number of
parameters is large. Neural matrix factorization models trained with gradient
descent are part of this model class. These models are popular and have found a
wide range of applications in industry. Especially knowledge graph embedding
methods, which belong to this class, are used extensively. We show that
gradient rollback is highly efficient at both training and test time. Moreover,
we show theoretically that the difference between gradient rollback's influence
approximation and the true influence on a model's behavior is smaller than
known bounds on the stability of stochastic gradient descent. This establishes
that gradient rollback is robustly estimating example influence. We also
conduct experiments which show that gradient rollback provides faithful
explanations for knowledge base completion and recommender datasets.
Comment: 35th AAAI Conference on Artificial Intelligence, 2021. Includes Appendix.
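A minimal sketch of the influence-estimation idea, assuming a model in which each example's update touches only a few parameters (e.g. the embeddings involved in a matrix factorization triple). The helpers record_update and influence and the dictionary-based bookkeeping are hypothetical and not the authors' implementation.

    import torch
    from collections import defaultdict

    # Accumulated per-example parameter changes recorded during training.
    updates = defaultdict(dict)  # example_id -> {param_name: summed update tensor}

    def record_update(example_id, param_name, delta):
        """Store the change a single training example caused to one parameter
        (e.g. delta = -learning_rate * gradient for that SGD step)."""
        prev = updates[example_id].get(param_name)
        updates[example_id][param_name] = delta if prev is None else prev + delta

    def influence(model, example_id, score_fn):
        """Estimate an example's influence on a prediction by rolling back the
        updates it contributed and measuring how the prediction's score changes."""
        with torch.no_grad():
            original = score_fn(model)
            params = dict(model.named_parameters())
            for name, delta in updates[example_id].items():
                params[name].sub_(delta)          # undo this example's updates
            rolled_back = score_fn(model)
            for name, delta in updates[example_id].items():
                params[name].add_(delta)          # restore the trained model
        return original - rolled_back

Because only the few parameters an example actually touched need to be stored and restored, both the recording overhead during training and the rollback at test time stay small.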
Attending to Future Tokens For Bidirectional Sequence Generation
Neural sequence generation is typically performed token-by-token and
left-to-right. Whenever a token is generated, only previously produced tokens
are taken into consideration. In contrast, for problems such as sequence
classification, bidirectional attention, which takes both past and future
tokens into consideration, has been shown to perform much better. We propose to
make the sequence generation process bidirectional by employing special
placeholder tokens. Treated as a node in a fully connected graph, a placeholder
token can take past and future tokens into consideration when generating the
actual output token. We verify the effectiveness of our approach experimentally
on two conversational tasks where the proposed bidirectional model outperforms
competitive baselines by a large margin.
Comment: Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019, Hong Kong, China.
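One way to picture the placeholder mechanism: the target side is pre-filled with placeholder tokens, full (non-causal) attention lets every position see both past and future positions, and the model then predicts actual tokens for placeholder positions. The sketch below, including PLACEHOLDER_ID, fill_placeholders, and the confidence-based fill order, is an illustrative approximation and not the paper's exact decoding procedure.

    import torch

    PLACEHOLDER_ID = 0  # hypothetical vocabulary id reserved for the placeholder token

    def fill_placeholders(model, src_ids, target_len, steps):
        """Iteratively replace placeholder tokens with generated tokens.

        model(src_ids, tgt_ids) is assumed to return per-position logits over the
        vocabulary, computed with full bidirectional attention on the target side.
        """
        tgt = torch.full((target_len,), PLACEHOLDER_ID, dtype=torch.long)
        for _ in range(steps):
            mask = tgt.eq(PLACEHOLDER_ID)
            if not mask.any():
                break
            logits = model(src_ids, tgt)               # [target_len, vocab]
            probs = torch.softmax(logits, dim=-1)
            conf, tokens = probs.max(dim=-1)
            # Fill the most confident still-empty position; that token is then
            # visible to all remaining placeholders in the next pass.
            pos = (conf * mask).argmax()
            tgt[pos] = tokens[pos]
        return tgt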
Response-Based and Counterfactual Learning for Sequence-to-Sequence Tasks in NLP
Many applications, such as a rising number of virtual personal assistants, nowadays rely on
statistical machine-learnt models. To train statistical models, typically large amounts
of labelled data are required, which are expensive and difficult to obtain. In this thesis, we
investigate two approaches that alleviate the need for labelled data by leveraging feedback to model outputs instead. Both scenarios are applied to two sequence-to-sequence
tasks for Natural Language Processing (NLP): machine translation and semantic parsing
for question-answering. Additionally, we define a new question-answering task based on
the geographical database OpenStreetMap (OSM) and collect a corpus, NLmaps v2, with
28,609 question-parse pairs. With the corpus, we build semantic parsers for subsequent experiments. Furthermore, we are the first to design a natural language interface to OSM, for
which we specifically tailor a parser.
The first approach to learn from feedback given to model outputs considers a scenario
where weak supervision is available by grounding the model in a downstream task for
which labelled data has been collected. Feedback obtained from the downstream task is
used to improve the model in a response-based on-policy learning setup. We apply this
approach to improve a machine translation system, which is grounded in a multilingual
semantic parsing task, by employing ramp loss objectives. Next, we improve a neural semantic parser where only gold answers, but not gold parses, are available, by lifting ramp
loss objectives to non-linear neural networks. In the second approach to learn from feedback, instead of collecting expensive labelled data, a model is deployed and user-model
interactions are recorded in a log. This log is used to improve a model in a counterfactual
off-policy learning setup. We first exemplify this approach on a domain adaptation task for
machine translation. Here, we show that counterfactual learning can be applied to tasks
with large output spaces and, in contrast to prevalent theory, deterministic logs can successfully be used on sequence-to-sequence tasks for NLP. Next, we demonstrate on a semantic parsing task that counterfactual learning can also be applied when the underlying
model is a neural network and feedback is collected from human users. Applying both approaches to the same semantic parsing task allows us to draw a direct comparison between
them. Response-based on-policy learning outperforms counterfactual off-policy learning,
but requires expensive labelled data for the downstream task, whereas interaction logs for
counterfactual learning can be easier to obtain in various scenarios.
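The ramp loss objectives mentioned above contrast a "hope" output (high reward and high model score) with a "fear" output (high model score but low reward). A minimal sketch, assuming hypothetical helpers model.score and reward, and not the thesis's exact formulation:

    import torch

    def ramp_loss(model, x, candidates, reward):
        """Sketch of a ramp loss over a list of candidate outputs for input x.

        reward(y) returns the task feedback for candidate y (e.g. answer accuracy);
        model.score(x, y) returns a differentiable model score.
        """
        scores = torch.stack([model.score(x, y) for y in candidates])
        rewards = torch.tensor([float(reward(y)) for y in candidates])
        hope = (scores + rewards).argmax()   # high model score and high reward
        fear = (scores - rewards).argmax()   # high model score but low reward
        # Push the hope output up and the fear output down.
        return scores[fear] - scores[hope]

Minimizing this loss with candidates drawn from the model's own output space is what allows learning from downstream-task feedback without gold-standard parses or translations.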