How to Fine-Tune BERT for Text Classification?
Language model pre-training has proven useful for learning universal language representations. As a state-of-the-art pre-trained language model, BERT (Bidirectional Encoder Representations from Transformers) has achieved impressive results on many language understanding tasks. In this paper, we conduct exhaustive experiments to investigate different fine-tuning methods for BERT on text classification tasks and provide a general solution for BERT fine-tuning. Finally, the proposed solution obtains new state-of-the-art results on eight widely studied text classification datasets.
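For illustration, a minimal sketch of fine-tuning a pretrained BERT encoder for text classification with the Hugging Face transformers library; the checkpoint name, toy data, learning rate, and epoch count are illustrative assumptions, not the paper's exact recipe:

    import torch
    from torch.optim import AdamW
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Typical choices for checkpoint and hyperparameters, not necessarily those studied in the paper.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2
    )

    texts = ["the movie was great", "the plot made no sense"]  # toy training data
    labels = torch.tensor([1, 0])
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

    optimizer = AdamW(model.parameters(), lr=2e-5)  # small learning rate, as is typical for BERT

    model.train()
    for _ in range(3):  # a few passes over the toy batch
        out = model(**batch, labels=labels)
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
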
Universal Language Model Fine-tuning for Text Classification
Inductive transfer learning has greatly impacted computer vision, but
existing approaches in NLP still require task-specific modifications and
training from scratch. We propose Universal Language Model Fine-tuning
(ULMFiT), an effective transfer learning method that can be applied to any task
in NLP, and introduce techniques that are key for fine-tuning a language model.
Our method significantly outperforms the state-of-the-art on six text
classification tasks, reducing the error by 18-24% on the majority of datasets.
Furthermore, with only 100 labeled examples, it matches the performance of
training from scratch on 100x more data. We open-source our pretrained models
and code.
Comment: ACL 2018; fixed denominator in Equation 3.
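As an illustration of the kind of fine-tuning technique involved, the sketch below applies discriminative (per-layer) learning rates, one of the methods commonly associated with ULMFiT; the toy model, layer grouping, and decay factor are illustrative assumptions rather than the paper's exact configuration:

    import torch
    import torch.nn as nn

    model = nn.Sequential(                  # stand-in for a pretrained language model
        nn.Embedding(1000, 64),             # lower layer: captures the most general features
        nn.LSTM(64, 64, batch_first=True),  # middle layer
        nn.Linear(64, 2),                   # task-specific classifier head
    )

    base_lr, decay = 1e-3, 2.6              # earlier layers get smaller learning rates
    param_groups = [
        {"params": model[2].parameters(), "lr": base_lr},
        {"params": model[1].parameters(), "lr": base_lr / decay},
        {"params": model[0].parameters(), "lr": base_lr / decay ** 2},
    ]
    optimizer = torch.optim.AdamW(param_groups)

    # Gradual unfreezing can be layered on top: start with only the head
    # trainable and enable one more parameter group at each epoch.
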
Computation-Performance Optimization of Convolutional Neural Networks with Redundant Kernel Removal
Deep Convolutional Neural Networks (CNNs) are widely employed in modern
computer vision algorithms, where the input image is convolved iteratively by
many kernels to extract the knowledge behind it. However, as convolutional layers have grown deeper in recent years, the enormous computational complexity makes such networks difficult to deploy on embedded systems with limited hardware resources. In this paper, we propose two computation-performance optimization methods that reduce the redundant convolution kernels of a CNN under performance and architecture constraints, and apply them to a network for super-resolution (SR). Using the PSNR drop relative to the original network as the performance criterion, our method obtains the optimal PSNR under a given computation budget; conversely, it can also minimize the computation required for a given PSNR drop.
Comment: Accepted by the 2018 International Symposium on Circuits and Systems (ISCAS).
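To make the idea concrete, the sketch below removes convolution kernels from a single layer under a fixed budget, ranking them by an L1-norm importance score; the criterion, layer, and budget are illustrative assumptions, since the abstract does not state the paper's exact selection rule:

    import torch
    import torch.nn as nn

    conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1)

    keep = 32                                                    # computation budget: keep half the kernels
    importance = conv.weight.detach().abs().sum(dim=(1, 2, 3))   # L1 norm of each output kernel
    top = torch.topk(importance, keep).indices                   # indices of the most "important" kernels

    pruned = nn.Conv2d(3, keep, kernel_size=3, padding=1)
    with torch.no_grad():
        pruned.weight.copy_(conv.weight[top])
        pruned.bias.copy_(conv.bias[top])

    # In a super-resolution network, the PSNR drop of the pruned model relative
    # to the original would then be checked against the performance constraint.
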
Supervised and Unsupervised Transfer Learning for Question Answering
Although transfer learning has been shown to be successful for tasks like
object and speech recognition, its applicability to question answering (QA) has
yet to be well-studied. In this paper, we conduct extensive experiments to
investigate the transferability of knowledge learned from a source QA dataset
to a target dataset using two QA models. The performance of both models on a
TOEFL listening comprehension test (Tseng et al., 2016) and MCTest (Richardson
et al., 2013) is significantly improved via a simple transfer learning
technique from MovieQA (Tapaswi et al., 2016). In particular, one of the models
achieves the state-of-the-art on all target datasets; for the TOEFL listening
comprehension test, it outperforms the previous best model by 7%. Finally, we
show that transfer learning is helpful even in unsupervised scenarios when
correct answers for target QA dataset examples are not available.
Comment: To appear in NAACL HLT 2018 (long paper).
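A minimal sketch of the simple transfer-learning recipe described above, using a span-extraction QA model for brevity; the checkpoint, toy example, answer-span positions, and learning rate are assumptions, and the paper's actual models and datasets differ:

    import torch
    from torch.optim import AdamW
    from transformers import AutoTokenizer, AutoModelForQuestionAnswering

    # Checkpoint assumed to already be fine-tuned on a large source QA dataset.
    ckpt = "distilbert-base-uncased-distilled-squad"
    tokenizer = AutoTokenizer.from_pretrained(ckpt)
    model = AutoModelForQuestionAnswering.from_pretrained(ckpt)

    # One toy target-domain example; the answer-span token positions are assumed.
    enc = tokenizer("Who created MCTest?",
                    "MCTest was created by Richardson et al. in 2013.",
                    return_tensors="pt")
    start, end = torch.tensor([6]), torch.tensor([9])

    optimizer = AdamW(model.parameters(), lr=3e-5)
    model.train()
    out = model(**enc, start_positions=start, end_positions=end)  # continue fine-tuning on target data
    out.loss.backward()
    optimizer.step()
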