StrAE: Autoencoding for Pre-Trained Embeddings using Explicit Structure
This work presents StrAE: a Structured Autoencoder framework that, through
strict adherence to explicit structure and the use of a novel contrastive
objective over tree-structured representations, enables effective learning of
multi-level representations. Through comparison across different forms of
structure, we verify that our results are directly attributable to the
informativeness of the structure provided as input, and show that this is not
the case for existing tree models. We then extend StrAE to allow the
model to define its own compositions using a simple localised-merge algorithm.
This variant, called Self-StrAE, outperforms baselines that don't involve
explicit hierarchical compositions, and is comparable to models given
informative structure (e.g. constituency parses). Our experiments are conducted
in a data-constrained (circa 10M tokens) setting to help tease apart the
contribution of the inductive bias to effective learning. However, we find that
this framework can be robust to scale, and when extended to a much larger
dataset (circa 100M tokens), our 430-parameter model performs comparably to a
6-layer RoBERTa many orders of magnitude larger in size. Our findings support
the utility of incorporating explicit composition as an inductive bias for
effective representation learning.
Comment: EMNLP 2023 Main Conference
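The Self-StrAE variant described above lets the model define its own compositions via a localised-merge algorithm. As a rough illustration only, not the authors' implementation, the sketch below assumes that merges greedily pick the most cosine-similar adjacent pair of node embeddings and compose them with a placeholder function; the names localised_merge and compose are hypothetical.

```python
# Hypothetical sketch of a localised-merge composition step (not the authors' code).
# Assumes leaf embeddings per token and a greedy rule that repeatedly merges the
# most similar adjacent pair until a single root embedding remains.
import numpy as np


def compose(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Placeholder composition: average of the children (StrAE learns this instead)."""
    return (left + right) / 2.0


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))


def localised_merge(leaves: list[np.ndarray]) -> np.ndarray:
    """Greedily merge the most similar adjacent pair of nodes until one root remains."""
    nodes = list(leaves)
    while len(nodes) > 1:
        # Score each adjacent pair, then merge the highest-scoring one.
        scores = [cosine(nodes[i], nodes[i + 1]) for i in range(len(nodes) - 1)]
        i = int(np.argmax(scores))
        parent = compose(nodes[i], nodes[i + 1])
        nodes[i:i + 2] = [parent]  # replace the pair with its parent node
    return nodes[0]


# Usage with random "token" embeddings:
rng = np.random.default_rng(0)
root = localised_merge([rng.normal(size=8) for _ in range(5)])
print(root.shape)  # (8,)
```

The greedy adjacent-pair rule is one plausible reading of a "simple localised-merge"; the induced tree then provides the explicit hierarchical compositions over which the autoencoding and contrastive objectives would operate.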
Starting Small, After All? Curriculum Learning with Child-Directed Speech
The idea of curriculum learning, whereby a model is first exposed to simpler examples before an increase in complexity, has long fascinated the AI community. Unfortunately, the experimental successes of curriculum learning have been mixed, particularly when applied to natural language, where a vast body of literature appears to evidence its failures. However, recent work has shown that language models trained on transcribed child-directed speech (CDS) learn more grammar than those trained on Wikipedia. To a lesser extent, the same trend has been observed when training on transcribed speech and simple text data. Motivated by these findings, we revisit the idea of curriculum learning, starting from CDS, moving to simple data, and finishing with complex long-form text. Unfortunately, through experimentation with an array of models and training step counts, only in the smallest models trained for the fewest steps does curriculum learning show any advantage over random sampling.
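To make the comparison concrete, the following sketch gives one plausible reading of the curriculum described above (CDS, then simple data, then complex long-form text) against a random-sampling baseline. The corpora, batch size, and function names are illustrative assumptions, not the paper's actual setup.

```python
# Hypothetical sketch of curriculum-ordered vs. randomly sampled training data.
# Assumes three corpora of documents: CDS, simple text, and complex long-form text.
import random


def curriculum_batches(cds, simple, complex_text, batch_size=32, seed=0):
    """Yield batches in curriculum order: CDS first, then simple, then complex text."""
    rng = random.Random(seed)
    for corpus in (cds, simple, complex_text):
        docs = list(corpus)
        rng.shuffle(docs)  # shuffle within a stage, but keep the stages ordered
        for i in range(0, len(docs), batch_size):
            yield docs[i:i + batch_size]


def random_batches(cds, simple, complex_text, batch_size=32, seed=0):
    """Baseline: sample uniformly from the pooled corpora, ignoring difficulty."""
    rng = random.Random(seed)
    docs = list(cds) + list(simple) + list(complex_text)
    rng.shuffle(docs)
    for i in range(0, len(docs), batch_size):
        yield docs[i:i + batch_size]
```

Shuffling within each stage while keeping the stages ordered is one common way to implement such a curriculum; the paper's exact scheduling and data proportions may differ.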