Scheduled Denoising Autoencoders

Abstract

We present a representation learning method that learns features at multiple different levels of scale. Working within the unsupervised framework of denoising autoencoders, we observe that when the input is heavily corrupted during training, the network tends to learn coarse-grained features, whereas when the input is only slightly corrupted during training, the network tends to learn fine-grained features. This motivates the scheduled denoising autoencoder, which starts with a high level of input noise that lowers as training progresses. We find that the resulting representation yields a significant boost on a later supervised task compared to the original input, or to a standard denoising autoencoder trained at a single noise level.
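To make the schedule concrete, the following is a minimal PyTorch sketch of the idea described above, not the authors' implementation: a single-hidden-layer denoising autoencoder whose masking-noise probability decays linearly from a high to a low value over training. The architecture, schedule endpoints, loss, and synthetic data are illustrative assumptions; the abstract states only that the noise level starts high and decreases as training progresses.

```python
import torch
import torch.nn as nn

# Hypothetical sizes and schedule endpoints (not specified in the abstract).
input_dim, hidden_dim = 784, 500
noise_start, noise_end = 0.7, 0.1   # corruption probability: high -> low
num_epochs = 50

autoencoder = nn.Sequential(
    nn.Linear(input_dim, hidden_dim), nn.Sigmoid(),   # encoder
    nn.Linear(hidden_dim, input_dim), nn.Sigmoid(),   # decoder
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder data; substitute your own DataLoader.
loader = [torch.rand(64, input_dim) for _ in range(10)]

for epoch in range(num_epochs):
    # Linear schedule: interpolate the noise level from start to end.
    t = epoch / max(num_epochs - 1, 1)
    noise_level = noise_start + t * (noise_end - noise_start)
    for x in loader:
        # Masking noise: zero each input unit with probability noise_level.
        mask = (torch.rand_like(x) > noise_level).float()
        recon = autoencoder(x * mask)
        loss = loss_fn(recon, x)        # reconstruct the *clean* input
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Under this sketch, early epochs force the network to reconstruct inputs from very little information, encouraging coarse-grained features, while later epochs with light corruption let it refine fine-grained ones.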
