Using BibTeX to Automatically Generate Labeled Data for Citation Field Extraction
Accurate parsing of citation reference strings is crucial to automatically
construct scholarly databases such as Google Scholar or Semantic Scholar.
Citation field extraction (CFE) is precisely this task: given a reference
string, label which tokens refer to the authors, venue, title, editor,
journal, pages, etc. Most methods for CFE are supervised and rely on labeled
training
datasets that are quite small compared to the great variety of reference
formats. BibTeX, the widely used reference management tool, provides a natural
method to automatically generate and label training data for CFE. In this
paper, we describe a technique for using BibTeX to automatically generate a
large-scale labeled dataset (41M labeled strings) that is four orders of
magnitude larger than the current largest CFE dataset, the UMass
Citation Field Extraction dataset [Anzaroot and McCallum, 2013]. We
experimentally demonstrate how our dataset can be used to improve the
performance of the UMass CFE using a RoBERTa-based [Liu et al., 2019] model. In
comparison to the previous SoTA, we achieve a 24.48% relative error
reduction and a span-level F1 score of 96.3%.
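The core idea of using BibTeX to generate labeled data can be sketched as follows: render a BibTeX entry's fields through a citation template, and record which field produced each token. This is a minimal illustration, not the authors' pipeline; the entry, template, and field names are assumptions for the example.

```python
# A minimal sketch of generating token-labeled CFE training data from a
# BibTeX-style entry. The entry and the rendering template are illustrative.

entry = {
    "author": "Anzaroot, Sam and McCallum, Andrew",
    "title": "A New Dataset for Fine-Grained Citation Field Extraction",
    "year": "2013",
}

def render_labeled(entry):
    """Render the entry with a fixed template, tagging each token
    with the BibTeX field it came from."""
    template = [
        ("author", entry["author"] + "."),
        ("title", entry["title"] + "."),
        ("year", entry["year"] + "."),
    ]
    tokens, labels = [], []
    for field, text in template:
        for tok in text.split():
            tokens.append(tok)   # surface token of the citation string
            labels.append(field) # BibTeX field acting as the CFE label
    return tokens, labels

tokens, labels = render_labeled(entry)
print(list(zip(tokens, labels))[:3])
```

Varying the template (field order, punctuation, abbreviations) over many entries is what yields the diversity of reference formats that small hand-labeled datasets lack.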
Sample-efficient Linguistic Generalizations through Program Synthesis: Experiments with Phonology Problems
Neural models excel at extracting statistical patterns from large amounts of
data, but struggle to learn patterns or reason about language from only a few
examples. In this paper, we ask: Can we learn explicit rules that generalize
well from only a few examples? We explore this question using program
synthesis. We develop a synthesis model to learn phonology rules as programs in
a domain-specific language. We test the ability of our models to generalize
from few training examples using our new dataset of problems from the
Linguistics Olympiad, a challenging set of tasks that require strong linguistic
reasoning ability. In addition to being highly sample-efficient, our approach
generates human-readable programs, and allows control over the generalizability
of the learnt programs.
Comment: SIGMORPHON 202
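To make the idea of a phonology rule as an explicit, human-readable program concrete, here is a hedged sketch (not the paper's DSL) of one classic rule, word-final obstruent devoicing, written as a small rewrite function:

```python
# Illustrative sketch: a phonology rule expressed as an explicit program.
# Rule: a voiced obstruent becomes voiceless word-finally.
# The segment inventory below is an assumption for the example.

DEVOICE = {"b": "p", "d": "t", "g": "k", "z": "s", "v": "f"}

def final_devoicing(word):
    """Apply word-final devoicing; leave other words unchanged."""
    if word and word[-1] in DEVOICE:
        return word[:-1] + DEVOICE[word[-1]]
    return word

print(final_devoicing("hund"))  # -> "hunt"
```

Unlike a neural model's weights, such a program can be read, checked against a handful of Olympiad examples, and generalizes by construction to any word matching the rule's condition.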
Programming by Rewards
We formalize and study ``programming by rewards'' (PBR), a new approach for
specifying and synthesizing subroutines for optimizing some quantitative metric
such as performance, resource utilization, or correctness over a benchmark. A
PBR specification consists of (1) input features x, and (2) a reward function
r, modeled as a black-box component (which we can only run), that assigns a
reward to each execution. The goal of the synthesizer is to synthesize a
"decision function" f which transforms the features x to a decision value
f(x) for the black-box component, so as to maximize the expected reward of
executing decisions for various values of x. We consider a
space of decision functions in a DSL of loop-free if-then-else programs, which
can branch on linear functions of the input features in a tree-structure and
compute a linear function of the inputs in the leaves of the tree. We find that
this DSL captures decision functions that are manually written in practice by
programmers. Our technical contribution is the use of continuous-optimization
techniques to perform synthesis of such decision functions as if-then-else
programs. We also show that the framework is theoretically founded: in cases
where the rewards satisfy nice properties, the synthesized code is optimal in
a precise sense.
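A member of the described DSL can be sketched directly: an if-then-else tree whose internal nodes branch on linear predicates over the input features and whose leaves compute linear functions of those features. The coefficients below are arbitrary placeholders, not values from the PROSE codebase.

```python
# Illustrative instance of the DSL: a loop-free if-then-else tree that
# branches on a linear function of the features and computes a linear
# function of the inputs at each leaf. All coefficients are assumptions.

def decision_function(x1, x2):
    # Internal node: branch on a linear predicate over the features.
    if 2.0 * x1 - x2 > 1.0:
        # Leaf: a linear function of the inputs.
        return 0.5 * x1 + 0.1 * x2
    else:
        return -0.3 * x1 + 1.2 * x2

print(decision_function(3.0, 1.0))  # first branch: 2*3 - 1 = 5 > 1
```

Because every node is linear in the features, continuous-optimization techniques can search over the branch thresholds and leaf coefficients, which is the technical contribution described above.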
We have leveraged PBR to synthesize non-trivial decision functions related to
search and ranking heuristics in the PROSE codebase (an industrial-strength
program synthesis framework), achieving results competitive with manually
written procedures tuned over multiple man-years. We present an empirical
evaluation against other baseline techniques on real-world case studies
(including PROSE) as well as on simple synthetic benchmarks.