75 research outputs found
Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning
This paper presents a systematic overview and comparison of
parameter-efficient fine-tuning methods covering over 40 papers published
between February 2019 and February 2023. These methods aim to make fine-tuning
large language models feasible and practical by training only a small set of
parameters. We provide a taxonomy that covers a broad range of methods and
present a detailed method comparison with a specific focus on real-life
efficiency and on fine-tuning multibillion-scale language models.
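One representative parameter-efficient approach covered by such surveys keeps the pretrained weights frozen and trains only a small low-rank update. A minimal numpy sketch of that idea follows; the shapes, initialization, and names here are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pretrained weight: it is never modified during fine-tuning.
d_out, d_in, r = 64, 64, 4
W = rng.standard_normal((d_out, d_in))

# Trainable low-rank factors: only these would receive gradient updates.
# A is zero-initialized so the learned delta starts at exactly zero.
A = np.zeros((d_out, r))
B = rng.standard_normal((r, d_in)) * 0.01

def forward(x):
    # Effective weight is W + A @ B; the frozen W is left untouched.
    return (W + A @ B) @ x

trainable = A.size + B.size            # 512 parameters
total = W.size + trainable             # 4608 parameters
print(f"trainable fraction: {trainable / total:.3f}")  # 0.111
```

The point of the sketch is the parameter count: the trainable factors hold roughly a ninth of the parameters of the frozen matrix, and the gap widens as `d_out` and `d_in` grow while `r` stays small.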
Mapping the History of Knowledge: Text-Based Tools and Algorithms for Tracking the Development of Concepts
We propose to map the history of European thought over the last three centuries, using as a proxy the history of changes across 15 editions of the Encyclopedia Britannica. The editors of each new edition had to build a new consensus on what to include and what to exclude, how much volume a subject deserved, and how subjects related to one another. These decisions can be captured and analyzed with methods from natural language processing, network analysis, and information visualization, providing tools for identifying and analyzing historical trends within and across domains of knowledge, such as the discussion of theories and ideas, the evolution of concepts, and the growth of reputations.
Larger Probes Tell a Different Story: Extending Psycholinguistic Datasets Via In-Context Learning
Language model probing is often used to test specific capabilities of these
models. However, conclusions from such studies may be limited when the probing
benchmarks are small and lack statistical power. In this work, we introduce
new, larger datasets for negation (NEG-1500-SIMP) and role reversal (ROLE-1500)
inspired by psycholinguistic studies. We dramatically extend the existing
NEG-136 and ROLE-88 benchmarks using GPT-3, increasing their size from 18 and
44 sentence pairs to 750 each. We also create a template-generated version of
the extended negation dataset (NEG-1500-SIMP-TEMP), consisting of 770 sentence
pairs. We evaluate 22 models on the extended datasets and observe model
performance dropping by 20-57% compared to the original smaller benchmarks. We
observe high levels of negation sensitivity in models like BERT and ALBERT,
demonstrating that previous findings might have been skewed by the smaller
test sets. Finally, we observe that although GPT-3 generated all the examples
in ROLE-1500, it is able to solve only 24.6% of them during probing.
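Template-based generation of the kind used for NEG-1500-SIMP-TEMP can be sketched in a few lines of Python. The templates and vocabulary below are hypothetical stand-ins, not the paper's actual materials:

```python
# Illustrative negation templates: each affirmative sentence is paired with
# its negated counterpart, following the spirit of NEG-1500-SIMP-TEMP.
TEMPLATE_AFF = "A {subject} is a {category}."
TEMPLATE_NEG = "A {subject} is not a {category}."

# Hypothetical (subject, category) vocabulary; a real dataset would draw
# these from curated psycholinguistic norms.
VOCAB = [("robin", "bird"), ("hammer", "tool"), ("oak", "tree")]

pairs = [
    (TEMPLATE_AFF.format(subject=s, category=c),
     TEMPLATE_NEG.format(subject=s, category=c))
    for s, c in VOCAB
]

for aff, neg in pairs:
    print(aff, "|", neg)
```

Because generation is deterministic given the templates and vocabulary, this route trades some lexical diversity for exact control over sentence structure, which is the usual contrast with model-generated extensions such as the GPT-3-produced ROLE-1500.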
Revealing the Dark Secrets of BERT
BERT-based architectures currently give state-of-the-art performance on many
NLP tasks, but little is known about the exact mechanisms that contribute to
their success. In the current work, we focus on the interpretation of
self-attention, which is one of the fundamental underlying components of BERT.
Using a subset of GLUE tasks and a set of handcrafted features of interest, we
propose a methodology and carry out a qualitative and quantitative analysis of
the information encoded by individual BERT heads. Our findings suggest
that there is a limited set of attention patterns that are repeated across
different heads, indicating overall model overparameterization. While
different heads consistently use the same attention patterns, they have varying
impact on performance across different tasks. We show that manually disabling
attention in certain heads leads to a performance improvement over the regular
fine-tuned BERT models.

Comment: Accepted to EMNLP 2019
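The head-ablation experiment described above amounts to zeroing out the output of selected attention heads before the heads are recombined. A minimal numpy sketch of that mechanism follows; the shapes and the choice of which head to disable are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

seq, n_heads, d_head = 5, 4, 8
# Random stand-ins for per-head attention outputs
# (shape: heads x sequence length x head dimension).
head_outputs = rng.standard_normal((n_heads, seq, d_head))

# Binary head mask: 0.0 disables a head, 1.0 keeps it.
# Disabling head 2 here mimics manually ablating a single head.
head_mask = np.array([1.0, 1.0, 0.0, 1.0])

# Broadcasting zeroes out every value produced by the disabled head.
masked = head_outputs * head_mask[:, None, None]

# Concatenate heads along the feature axis, as in standard
# multi-head attention, before the output projection.
combined = masked.transpose(1, 0, 2).reshape(seq, n_heads * d_head)
```

Because the mask multiplies entire head outputs, the disabled head contributes exactly zero to the concatenated representation while the remaining heads pass through unchanged, which is what makes per-head ablation studies cheap to run on a fine-tuned model.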