LEVER: Learning to Verify Language-to-Code Generation with Execution
The advent of pre-trained code language models (CodeLMs) has led to
significant progress in language-to-code generation. State-of-the-art
approaches in this area combine CodeLM decoding with sample pruning and
reranking using test cases or heuristics based on the execution results.
However, it is challenging to obtain test cases for many real-world
language-to-code applications, and heuristics cannot adequately capture semantic
features of the execution results, such as data type and value range, which
often indicate the correctness of the program. In this work, we propose LEVER,
a simple approach to improve language-to-code generation by learning to verify
the generated programs with their execution results. Specifically, we train
verifiers to determine whether a program sampled from the CodeLM is correct or
not based on the natural language input, the program itself and its execution
results. The sampled programs are reranked by combining the verification score
with the CodeLM generation probability, and marginalizing over programs with
the same execution results. On four datasets across the domains of table QA,
math QA and basic Python programming, LEVER consistently improves over the base
CodeLMs (4.6% to 10.9% with code-davinci-002) and achieves new state-of-the-art
results on all of them.
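
The reranking step described above lends itself to a short sketch. Below is a minimal, illustrative Python version of that scoring scheme; the `execute` and `verifier_prob` callables are assumed placeholders for program execution and the trained verifier, not LEVER's actual interfaces.

```python
import math
from collections import defaultdict

def rerank_with_verifier(nl_input, programs, lm_log_probs, execute, verifier_prob):
    """Rerank sampled programs by combining the CodeLM generation probability
    with a learned verification score, marginalizing over programs that yield
    the same execution result."""
    scores = defaultdict(float)   # aggregate score per execution result
    best = {}                     # highest-scoring program per execution result
    for prog, log_p in zip(programs, lm_log_probs):
        result = execute(prog)                              # run the candidate program
        p_correct = verifier_prob(nl_input, prog, result)   # verifier's P(correct)
        joint = math.exp(log_p) * p_correct
        key = repr(result)                                  # group identical results
        scores[key] += joint                                # marginalize within the group
        if key not in best or joint > best[key][1]:
            best[key] = (prog, joint)
    top = max(scores, key=scores.get)                       # best-scoring execution result
    return best[top][0]                                     # return one program from that group
```
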
Improving In-Context Few-Shot Learning via Self-Supervised Training
Self-supervised pretraining has made few-shot learning possible for many NLP
tasks. But the pretraining objectives are not typically adapted specifically
for in-context few-shot learning. In this paper, we propose to use
self-supervision in an intermediate training stage between pretraining and
downstream few-shot usage with the goal to teach the model to perform
in-context few-shot learning. We propose and evaluate four self-supervised
objectives on two benchmarks. We find that the intermediate self-supervision
stage produces models that outperform strong baselines. An ablation study shows
that several factors affect the downstream performance, such as the amount of
training data and the diversity of the self-supervised objectives.
Human-annotated cross-task supervision and self-supervision are complementary.
Qualitative analysis suggests that models trained with the self-supervised
objectives are better at following task requirements.
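
To make the intermediate stage concrete, here is a minimal, hypothetical illustration of how self-supervised (input, target) pairs could be packed into an in-context few-shot training sequence; the `make_pair` objective and the prompt format are assumptions for illustration, not the paper's actual objectives.

```python
def build_in_context_example(segments, make_pair, num_demos=3, sep="\n\n"):
    """Pack self-supervised (input, target) pairs into one training sequence:
    a few demonstrations followed by a query whose target serves as the label."""
    pairs = [make_pair(seg) for seg in segments[: num_demos + 1]]
    demos = sep.join(f"Input: {inp}\nOutput: {out}" for inp, out in pairs[:-1])
    query_input, query_target = pairs[-1]
    prompt = f"{demos}{sep}Input: {query_input}\nOutput:"
    return prompt, query_target   # (model input, training target)
```

For example, `make_pair` might split a raw text segment into its opening sentences and final sentence, so the model learns to complete the query after seeing a few in-context demonstrations of the same pattern.
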
ToKen: Task Decomposition and Knowledge Infusion for Few-Shot Hate Speech Detection
Hate speech detection is complex; it relies on commonsense reasoning,
knowledge of stereotypes, and an understanding of social nuance that differs
from one culture to the next. It is also difficult to collect a large-scale
hate speech annotated dataset. In this work, we frame this problem as a
few-shot learning task, and show significant gains from decomposing the task
into its "constituent" parts. In addition, we find that infusing knowledge
from reasoning datasets (e.g., Atomic2020) improves performance even further.
Moreover, we observe that the trained models generalize to out-of-distribution
datasets, showing the superiority of task decomposition and knowledge infusion
compared to previously used methods. Concretely, our method outperforms the
baseline by an absolute 17.83% in the 16-shot case.
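
As a rough illustration of what task decomposition could look like in code, the sketch below routes sub-questions to a few-shot classifier and combines their answers; the specific sub-questions and the `few_shot_classify` helper are placeholders, not the paper's actual constituent parts.

```python
def detect_hate_speech(text, few_shot_classify):
    """Decompose hate speech detection into sub-tasks, answer each with a
    few-shot classifier, and combine the answers into a final label."""
    sub_tasks = {
        "targets_group": "Does the text refer to a group of people?",
        "uses_stereotype": "Does the text invoke a harmful stereotype?",
        "is_derogatory": "Is the language derogatory or dehumanizing?",
    }
    # Each sub-question is answered independently in a few-shot setting.
    answers = {name: few_shot_classify(question, text)
               for name, question in sub_tasks.items()}
    # Simple combination rule, for illustration only.
    return answers["targets_group"] and (
        answers["uses_stereotype"] or answers["is_derogatory"]
    )
```
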