Revisiting Pre-Trained Models for Chinese Natural Language Processing
Bidirectional Encoder Representations from Transformers (BERT) has brought remarkable improvements across various NLP tasks, and successive variants have been proposed to further improve the performance of pre-trained language models. In this paper, we revisit Chinese pre-trained language models to examine their effectiveness in a non-English language and release a series of Chinese pre-trained language models to the community. We also propose a simple but effective model called MacBERT, which improves upon RoBERTa in several ways, especially its masking strategy, which adopts MLM as correction (Mac). We carried out extensive experiments on eight Chinese NLP tasks to revisit the existing pre-trained language models as well as the proposed MacBERT. Experimental results show that MacBERT achieves state-of-the-art performance on many NLP tasks, and we also ablate details with several findings that may help future research. Resources available: https://github.com/ymcui/MacBERT
Comment: 12 pages, to appear at Findings of EMNLP 2020
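The "MLM as correction" idea replaces the artificial [MASK] token with a similar word, so pre-training inputs resemble naturally erroneous text that the model must correct. A minimal sketch, assuming a caller-supplied `similar_word` lookup (e.g. word2vec nearest neighbour); the full MacBERT recipe additionally uses whole-word and n-gram masking and mixes similar-word, random, and unchanged replacements:

```python
import random

def mac_masking(tokens, similar_word, mask_ratio=0.15):
    """Sketch of MLM-as-correction masking: selected tokens are replaced
    with similar words rather than a [MASK] symbol, so the pre-training
    input distribution better matches fine-tuning inputs.
    `similar_word` is an assumed lookup returning a synonym or None."""
    corrupted, targets = list(tokens), [None] * len(tokens)
    n_to_mask = max(1, int(len(tokens) * mask_ratio))
    for i in random.sample(range(len(tokens)), n_to_mask):
        targets[i] = tokens[i]                   # model must recover the original
        replacement = similar_word(tokens[i])
        if replacement is None:                  # no synonym available:
            replacement = random.choice(tokens)  # fall back to a random token
        corrupted[i] = replacement
    return corrupted, targets
```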
Efficiently Robustify Pre-trained Models
A recent trend in deep learning has been towards training large-scale models with high parameter counts on big datasets. However, the robustness of such large-scale models in real-world settings remains a less-explored topic. In this work, we first benchmark the performance of these models under different perturbations and datasets representing real-world shifts, and highlight their degrading performance under these shifts. We then discuss how existing robustification schemes based on complete model fine-tuning may not be a scalable option for very large-scale networks and can also cause them to forget some of their desired characteristics. Finally, we propose a simple and cost-effective method to solve this problem, inspired by the knowledge transfer literature. It involves robustifying smaller models at a lower computational cost and then using them as teachers to tune a fraction of these large-scale networks, reducing the overall computational overhead. We evaluate our proposed method under various vision perturbations, including the ImageNet-C, R, S, and A datasets, and also in transfer learning and zero-shot evaluation setups on different datasets. Benchmark results show that our method induces robustness in these large-scale models efficiently, requiring significantly less time, and also preserves the transfer learning and zero-shot properties of the original model, which none of the existing methods achieve.
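A sketch of what one such teacher-student step could look like, under my reading of the abstract (the distillation loss, temperature, and which parameters are tuned are all assumptions, not the authors' exact recipe):

```python
import torch
import torch.nn.functional as F

def robustify_step(student, teacher, perturbed_x, optimizer, temperature=2.0):
    """One distillation step. `optimizer` is assumed to be built over
    only a small fraction of the student's parameters (e.g. the last
    block), with everything else frozen, which is how the abstract
    keeps the transfer-learning / zero-shot behaviour intact."""
    with torch.no_grad():
        teacher_logits = teacher(perturbed_x)   # small, already-robust model
    student_logits = student(perturbed_x)       # large pre-trained model
    # Soft-label KL distillation at a softened temperature.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```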
Pre-trained Models for Sonar Images
Machine learning and neural networks are now ubiquitous in sonar perception, but the field lags behind computer vision due to the lack of data and pre-trained models specifically for sonar images. In this paper we present the Marine Debris Turntable dataset and release pre-trained neural networks trained on this dataset, meant to fill the gap of missing pre-trained models for sonar images. We train ResNet-20, MobileNets, DenseNet121, SqueezeNet, MiniXception, and an Autoencoder over several input image sizes, from 32 x 32 to 96 x 96, on the Marine Debris Turntable dataset. We evaluate these models using transfer learning for low-shot classification on the Marine Debris Watertank dataset and on another dataset captured with a Gemini 720i sonar. Our results show that on both datasets the pre-trained models produce good features that allow good classification accuracy with few samples (10-30 samples per class). The Gemini dataset confirms that the features transfer to other kinds of sonar sensors. We expect the community to benefit from the public release of our pre-trained models and the turntable dataset.
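The low-shot protocol described here can be pictured as frozen feature extraction plus a light classifier. A sketch, where `backbone` stands in for one of the released pre-trained networks and is assumed to map one image to a 1-D feature vector:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def low_shot_eval(backbone, train_imgs, train_labels, test_imgs, test_labels):
    """Fit a light classifier on 10-30 labelled samples per class using
    frozen features from a sonar pre-trained backbone, then report
    classification accuracy on the held-out set."""
    train_feats = np.stack([backbone(x) for x in train_imgs])
    test_feats = np.stack([backbone(x) for x in test_imgs])
    clf = LogisticRegression(max_iter=1000).fit(train_feats, train_labels)
    return clf.score(test_feats, test_labels)  # low-shot accuracy
```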
Efficient Speech Translation with Pre-trained Models
When building state-of-the-art speech translation models, the need for large computational resources is a significant obstacle due to the large training data size and complex models. The availability of pre-trained models is a promising opportunity to build strong speech translation systems efficiently. As a first step, we investigate efficient strategies to build cascaded and end-to-end speech translation systems based on pre-trained models. With this strategy, we can train and apply the models on a single GPU. While the end-to-end models show superior translation performance to cascaded ones, applying this technology is limited by the need for additional end-to-end training data. As a second step, we propose an additional similarity loss to encourage the model to generate similar hidden representations for speech and transcript. Using this technique, we can increase data efficiency and improve translation quality by 6 BLEU points in scenarios with limited end-to-end training data.
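The similarity loss can be sketched as an auxiliary term pulling the speech encoder's representation towards the text encoder's representation of the transcript; mean pooling over time and the weight `alpha` are assumptions here, not the paper's exact formulation:

```python
import torch.nn.functional as F

def st_loss(translation_loss, speech_hidden, text_hidden, alpha=1.0):
    """Total loss = translation loss + similarity between the speech
    encoder's states and the (pre-trained) text encoder's states for
    the transcript, so the MT decoder sees speech inputs that resemble
    the text inputs it was trained on.
    speech_hidden, text_hidden: (batch, time, dim) tensors."""
    sim = F.mse_loss(speech_hidden.mean(dim=1), text_hidden.mean(dim=1))
    return translation_loss + alpha * sim
```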
A Systematic Survey of Chemical Pre-trained Models
Deep learning has achieved remarkable success in learning representations for molecules, which is crucial for various biochemical applications ranging from property prediction to drug design. However, training Deep Neural Networks (DNNs) from scratch often requires abundant labeled molecules, which are expensive to acquire in the real world. To alleviate this issue, tremendous efforts have been devoted to Chemical Pre-trained Models (CPMs), where DNNs are pre-trained using large-scale unlabeled molecular databases and then fine-tuned on specific downstream tasks. Despite this rapid progress, the field lacks a systematic review. In this paper, we present the first survey that summarizes the current progress of CPMs. We first highlight the limitations of training molecular representation models from scratch to motivate CPM studies. Next, we systematically review recent advances on this topic from several key perspectives, including molecular descriptors, encoder architectures, pre-training strategies, and applications. We also highlight the challenges and promising avenues for future research, providing a useful resource for both the machine learning and scientific communities.
Comment: IJCAI 2023, Survey Track
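As a toy illustration of the pre-train-then-fine-tune paradigm the survey covers, consider masked-token pre-training on SMILES strings (one of several strategies it reviews; the character vocabulary and masking ratio below are assumptions):

```python
import random

SMILES_VOCAB = list("CNOSPFIclnos()=#123456789[]+-@H")  # assumed toy vocabulary
MASK_ID = len(SMILES_VOCAB)                             # extra id for [MASK]

def masked_smiles_example(smiles: str, ratio: float = 0.15):
    """Build one masked-token pre-training example from an unlabeled
    SMILES string: the encoder sees `inputs` and must recover `targets`
    at the masked positions (None elsewhere)."""
    ids = [SMILES_VOCAB.index(ch) for ch in smiles if ch in SMILES_VOCAB]
    inputs, targets = list(ids), [None] * len(ids)
    for i in random.sample(range(len(ids)), max(1, int(len(ids) * ratio))):
        targets[i] = ids[i]
        inputs[i] = MASK_ID
    return inputs, targets

# e.g. masked_smiles_example("CCO")  # ethanol, one character masked
```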
Modularized Zero-shot VQA with Pre-trained Models
Large-scale pre-trained models (PTMs) show great zero-shot capabilities. In
this paper, we study how to leverage them for zero-shot visual question
answering (VQA). Our approach is motivated by a few observations. First, VQA
questions often require multiple steps of reasoning, which is still a
capability that most PTMs lack. Second, different steps in VQA reasoning chains
require different skills such as object detection and relational reasoning, but
a single PTM may not possess all these skills. Third, recent work on zero-shot
VQA does not explicitly consider multi-step reasoning chains, which makes such
methods less interpretable than a decomposition-based approach. We propose a
modularized zero-shot network that explicitly decomposes questions into
sub-reasoning steps and is highly interpretable. We convert the sub-reasoning
tasks into objectives that PTMs can handle and assign each task to a suitable PTM without any
adaptation. Our experiments on two VQA benchmarks under the zero-shot setting
demonstrate the effectiveness of our method and better interpretability
compared with several baselines.
Comment: accepted to Findings of ACL 2023
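A minimal sketch of the decomposition idea (the skill names, the fixed three-step layout, and the module interface are illustrative assumptions, not the paper's actual design):

```python
from typing import Callable, Dict, List, Tuple

def modular_vqa(image, modules: Dict[str, Callable]) -> str:
    """Answer 'What colour is the object left of the chair?' by routing
    each sub-reasoning step to a frozen PTM that already has that skill
    (e.g. an object detector, a spatial-relation rule, a zero-shot
    vision-language model), with no adaptation of any module."""
    steps: List[Tuple[str, str]] = [
        ("detect", "chair"),    # object-detection PTM finds the anchor
        ("relate", "left of"),  # spatial step selects the related region
        ("query", "colour"),    # vision-language PTM answers on the region
    ]
    state = image
    for skill, argument in steps:
        state = modules[skill](state, argument)
    return state
```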