Scalable Data Augmentation for Deep Learning
Scalable Data Augmentation (SDA) provides a framework for training deep
learning models using auxiliary hidden layers. Scalable MCMC is available for
network training and inference. SDA offers several computational advantages
over traditional algorithms: it avoids backtracking and local modes, and it
supports optimization with stochastic gradient descent (SGD) in TensorFlow.
Standard deep neural networks with logit, ReLU and SVM activation functions
are straightforward to implement. To illustrate our architectures and
methodology, we use Pólya-Gamma logit data augmentation for a number of
standard datasets. Finally, we conclude with directions for future research.
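To make the Pólya-Gamma logit augmentation concrete, the sketch below shows a minimal Gibbs sampler for Bayesian logistic regression. It is not the paper's TensorFlow implementation: it is a numpy-only illustration that samples the auxiliary PG(1, c) variables via a truncated version of the distribution's infinite-sum representation (the truncation level `trunc` and the Gaussian prior variance `prior_var` are illustrative assumptions).

```python
import numpy as np

def sample_pg(b, c, trunc=200, rng=None):
    """Approximate draw from PolyaGamma(b, c) via the truncated
    infinite-sum representation:
        omega = (1 / (2*pi^2)) * sum_k g_k / ((k - 1/2)^2 + c^2/(4*pi^2)),
    with g_k ~ Gamma(b, 1). `trunc` terms are kept (an assumption)."""
    rng = rng or np.random.default_rng()
    c = np.atleast_1d(np.asarray(c, dtype=float))
    k = np.arange(1, trunc + 1)
    g = rng.gamma(b, 1.0, size=(c.size, trunc))
    denom = (k - 0.5) ** 2 + (c.reshape(-1, 1) ** 2) / (4 * np.pi ** 2)
    return (g / denom).sum(axis=1) / (2 * np.pi ** 2)

def gibbs_logit(X, y, n_iter=300, prior_var=10.0, rng=None):
    """Gibbs sampler for logistic regression with a N(0, prior_var * I)
    prior on beta, using Polya-Gamma data augmentation:
      omega_i | beta ~ PG(1, x_i' beta)
      beta | omega   ~ N(m, V),  V = (X' Omega X + B^{-1})^{-1},
                                 m = V X' kappa,  kappa_i = y_i - 1/2."""
    rng = rng or np.random.default_rng(0)
    n, p = X.shape
    beta = np.zeros(p)
    B_inv = np.eye(p) / prior_var
    kappa = y - 0.5
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        omega = sample_pg(1.0, X @ beta, rng=rng)      # augment
        V = np.linalg.inv((X.T * omega) @ X + B_inv)   # posterior covariance
        m = V @ (X.T @ kappa)                          # posterior mean
        beta = rng.multivariate_normal(m, V)           # sample coefficients
        draws[t] = beta
    return draws
```

With the augmented variables in hand, each conditional is a standard draw (Gaussian for `beta`), which is what makes the scheme amenable to scalable MCMC; replacing the exact Gaussian draw with a stochastic-gradient step recovers the SGD connection mentioned above.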