Parsing as Pretraining
[Abstract] Recent analyses suggest that encoders pretrained for language
modeling capture certain morpho-syntactic structure.
However, probing frameworks for word vectors still do not report
results on standard setups such as constituent and dependency
parsing. This paper addresses this problem and does
full parsing (on English) relying only on pretraining architectures
– and no decoding. We first cast constituent and dependency
parsing as sequence tagging. We then use a single
feed-forward layer to directly map word vectors to labels that
encode a linearized tree. This is used to: (i) see how far we can
reach on syntax modelling with just pretrained encoders, and
(ii) shed some light about the syntax-sensitivity of different
word vectors (by freezing the weights of the pretraining network
during training). For evaluation, we use bracketing F1-score and LAS, and analyze in-depth differences across representations for span lengths and dependency displacements. The overall results surpass existing sequence tagging parsers on the PTB (93.5%) and end-to-end EN-EWT UD (78.8%).We thank Mark Anderson and Daniel Hershcovich for their comments. DV, MS and CGR are funded by the ERC under the European Union’s Horizon 2020 research and innovation programme (FASTPARSE, grant No 714150), by the ANSWER-ASAP project (TIN2017-85160-C2-1-R) from MINECO, and by Xunta de Galicia (ED431B 2017/01). AS is funded by a Google Focused Research AwardXunta de Galicia; ED431B 2017/0
Neural Word Segmentation with Rich Pretraining
Neural word segmentation research has benefited from large-scale raw texts by
leveraging them for pretraining character and word embeddings. On the other
hand, statistical segmentation research has exploited richer sources of
external information, such as punctuation, automatic segmentation and POS. We
investigate the effectiveness of a range of external training sources for
neural word segmentation by building a modular segmentation model, pretraining
the most important submodule using rich external sources. Results show that
such pretraining significantly improves the model, leading to accuracies
competitive with the best methods on six benchmarks.
Comment: Accepted by ACL 2017
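A minimal sketch of the general recipe, with all module sizes and the auxiliary label source as placeholder assumptions rather than details from the paper: pretrain a shared character encoder on an external signal, then reuse it inside the segmenter.

import torch
import torch.nn as nn

class CharEncoder(nn.Module):
    """Shared submodule: encodes a character sequence into contextual vectors."""
    def __init__(self, vocab_size, emb_dim=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, chars):                          # (batch, seq_len)
        out, _ = self.rnn(self.emb(chars))
        return out                                     # (batch, seq_len, 2*hidden)

def pretrain(encoder, aux_head, aux_loader, epochs=1):
    """Pretrain the shared encoder on an external source of labels."""
    opt = torch.optim.Adam(list(encoder.parameters()) + list(aux_head.parameters()))
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for chars, aux_labels in aux_loader:           # external supervision
            loss = loss_fn(aux_head(encoder(chars)).flatten(0, 1),
                           aux_labels.flatten())
            opt.zero_grad(); loss.backward(); opt.step()

encoder = CharEncoder(vocab_size=5000)
aux_head = nn.Linear(256, 4)    # e.g. 4 punctuation-context classes (placeholder)
# pretrain(encoder, aux_head, aux_loader); afterwards the same encoder is
# plugged into the word segmenter and fine-tuned on gold-segmented data.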
Learning with Latent Language
The named concepts and compositional operators present in natural language
provide a rich source of information about the kinds of abstractions humans use
to navigate the world. Can this linguistic background knowledge improve the
generality and efficiency of learned classifiers and control policies? This
paper aims to show that using the space of natural language strings as a
parameter space is an effective way to capture natural task structure. In a
pretraining phase, we learn a language interpretation model that transforms
inputs (e.g. images) into outputs (e.g. labels) given natural language
descriptions. To learn a new concept (e.g. a classifier), we search directly in
the space of descriptions to minimize the interpreter's loss on training
examples. Crucially, our models do not require language data to learn these
concepts: language is used only in pretraining to impose structure on
subsequent learning. Results on image classification, text editing, and
reinforcement learning show that, in all settings, models with a linguistic
parameterization outperform those without.
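The core loop can be sketched as follows; the function and variable names are hypothetical, and the candidate descriptions are assumed to come from some proposal model or fixed pool.

def fit_concept(interpreter, candidate_descriptions, support_set, loss_fn):
    """interpreter(x, description) -> prediction, learned in the pretraining phase.
    Returns the natural-language description with the lowest loss on the few
    training examples, i.e. the search happens directly in string space."""
    best_desc, best_loss = None, float("inf")
    for desc in candidate_descriptions:       # e.g. sampled from a proposal model
        loss = sum(loss_fn(interpreter(x, desc), y) for x, y in support_set)
        if loss < best_loss:
            best_desc, best_loss = desc, loss
    return best_desc

# Predictions for new inputs then reuse the frozen interpreter:
# y_new = interpreter(x_new, best_desc)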
Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding
Visually-situated language is ubiquitous -- sources range from textbooks with
diagrams to web pages with images and tables, to mobile apps with buttons and
forms. Perhaps due to this diversity, previous work has typically relied on
domain-specific recipes with limited sharing of the underlying data, model
architectures, and objectives. We present Pix2Struct, a pretrained
image-to-text model for purely visual language understanding, which can be
finetuned on tasks containing visually-situated language. Pix2Struct is
pretrained by learning to parse masked screenshots of web pages into simplified
HTML. The web, with its richness of visual elements cleanly reflected in the
HTML structure, provides a large source of pretraining data well suited to the
diversity of downstream tasks. Intuitively, this objective subsumes common
pretraining signals such as OCR, language modeling, and image captioning. In
addition to the novel pretraining strategy, we introduce a variable-resolution
input representation and a more flexible integration of language and vision
inputs, where language prompts such as questions are rendered directly on top
of the input image. For the first time, we show that a single pretrained model
can achieve state-of-the-art results in six out of nine tasks across four
domains: documents, illustrations, user interfaces, and natural images.
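A minimal usage sketch, assuming the Hugging Face transformers integration of Pix2Struct and a fine-tuned checkpoint name such as "google/pix2struct-docvqa-base" (neither is stated in the abstract):

from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

name = "google/pix2struct-docvqa-base"          # assumed checkpoint name
processor = Pix2StructProcessor.from_pretrained(name)
model = Pix2StructForConditionalGeneration.from_pretrained(name)

image = Image.open("invoice.png")               # placeholder input image
# For question-answering checkpoints the processor renders the question
# directly onto the image, mirroring the prompt-on-image input described above.
inputs = processor(images=image, text="What is the invoice total?",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(outputs[0], skip_special_tokens=True))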
Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT
Pretrained contextual representation models (Peters et al., 2018; Devlin et
al., 2018) have pushed forward the state-of-the-art on many NLP tasks. A new
release of BERT (Devlin, 2018) includes a model simultaneously pretrained on
104 languages with impressive performance for zero-shot cross-lingual transfer
on a natural language inference task. This paper explores the broader
cross-lingual potential of mBERT (multilingual) as a zero-shot language
transfer model on 5 NLP tasks covering a total of 39 languages from various
language families: NLI, document classification, NER, POS tagging, and
dependency parsing. We compare mBERT with the best published methods for
zero-shot cross-lingual transfer and find mBERT competitive on each task.
Additionally, we investigate the most effective strategy for utilizing mBERT in
this manner, determine to what extent mBERT generalizes away from language
specific features, and measure factors that influence cross-lingual transfer.
Comment: EMNLP 2019 Camera Ready
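The zero-shot transfer recipe itself is simple to sketch: fine-tune multilingual BERT on English task data only and evaluate directly on other languages. The classification task, label count, and examples below are placeholders, not the paper's datasets.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def encode(texts):
    return tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# 1) Fine-tune on English task data only (one illustrative step shown).
en_batch = encode(["great movie", "terrible plot"])
loss = model(**en_batch, labels=torch.tensor([1, 0])).loss
loss.backward(); optimizer.step(); optimizer.zero_grad()

# 2) Evaluate zero-shot on a target language, with no target-language training.
with torch.no_grad():
    pred = model(**encode(["una película estupenda"])).logits.argmax(-1)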