Recently, the topic of table pre-training has attracted considerable research
interest. However, how to employ table pre-training to boost performance on tabular prediction tasks remains an open challenge. In this paper, we propose TapTap, the first attempt to leverage table pre-training to empower models for
tabular prediction. After pre-training on a large corpus of real-world tabular
data, TapTap can generate high-quality synthetic tables to support various
applications on tabular data, including privacy protection, low-resource regimes, missing value imputation, and imbalanced classification. Extensive
experiments on 12 datasets demonstrate that TapTap outperforms a total of 16
baselines across different scenarios. Meanwhile, it can be easily combined with various backbone models, including LightGBM, Multilayer Perceptron (MLP), and
Transformer. Moreover, with the aid of table pre-training, models trained using
synthetic data generated by TapTap can even compete with models trained on the original data on half of the experimental datasets, marking a milestone in
the development of synthetic tabular data generation. The code is available at https://github.com/ZhangTP1996/TapTap.
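
To make the intended workflow concrete, the sketch below shows how a pre-trained TapTap-style generator might be paired with a LightGBM backbone: synthetic rows are sampled from the generator, a classifier is trained on them alone, and quality is measured on real held-out data. This is a minimal sketch under stated assumptions, not the repository's actual interface: the taptap module, the Taptap class, and its from_pretrained/fit/sample methods are hypothetical placeholders, while the pandas, LightGBM, and scikit-learn calls are standard.

# A minimal sketch of the workflow described above, assuming a table with
# numeric feature columns and a binary "label" column. The Taptap class,
# its from_pretrained/fit/sample methods, and the "taptap" module are
# hypothetical placeholders, not the actual API of the repository.
import pandas as pd
from lightgbm import LGBMClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

from taptap import Taptap  # hypothetical import

df = pd.read_csv("train.csv")  # assumed dataset path and schema
train_df, test_df = train_test_split(df, test_size=0.2, random_state=0)

# Fine-tune the pre-trained generator on the real training split, then
# sample an equally sized synthetic table (hypothetical API).
generator = Taptap.from_pretrained("taptap-base")
generator.fit(train_df, target_column="label")
synth_df = generator.sample(n=len(train_df))

# Train the backbone model on synthetic rows only ...
clf = LGBMClassifier(n_estimators=200)
clf.fit(synth_df.drop(columns=["label"]), synth_df["label"])

# ... and evaluate on real held-out rows; a score close to that of a model
# trained on the real split indicates the synthetic table preserves the
# predictive signal of the original data.
probs = clf.predict_proba(test_df.drop(columns=["label"]))[:, 1]
print("AUC on real test data:", roc_auc_score(test_df["label"], probs))

The same pattern applies to the other backbones mentioned above (MLP and Transformer): only the classifier trained on the synthetic table changes, while the generation step stays the same.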