Scheduled Denoising Autoencoders
We present a representation learning method that learns features at multiple different levels of scale. Working within the unsupervised framework of denoising autoencoders, we observe that when the input is heavily corrupted during training, the network tends to learn coarse-grained features, whereas when the input is only slightly corrupted during training, the network tends to learn fine-grained features. This motivates the scheduled denoising autoencoder, which starts with a high level of input noise that lowers as training progresses. We find that the resulting representation yields a significant boost on a later supervised task compared to the original input, or to a standard denoising autoencoder trained at a single noise level.
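The core idea above — a corruption level that decreases as training progresses — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear schedule, the start/end noise levels, and the use of masking noise (zeroing inputs) are all assumptions.

```python
import numpy as np

def noise_schedule(epoch, n_epochs, sigma_start=0.7, sigma_end=0.1):
    """Linearly anneal the corruption level over training.
    The linear form and the endpoint values are illustrative assumptions."""
    t = epoch / max(n_epochs - 1, 1)
    return sigma_start + t * (sigma_end - sigma_start)

def corrupt(x, sigma, rng):
    """Masking-noise corruption: zero each input unit with probability sigma."""
    mask = rng.random(x.shape) >= sigma
    return x * mask

# Usage: early epochs see heavily corrupted inputs (coarse features),
# later epochs see lightly corrupted inputs (fine features).
rng = np.random.default_rng(0)
x = np.ones((4, 8))
for epoch in range(10):
    sigma = noise_schedule(epoch, 10)
    x_tilde = corrupt(x, sigma, rng)  # train the autoencoder on (x_tilde, x)
```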
Recommended from our members
Deep Learning for Predicting Non-attendance in Hospital Outpatient Appointments
Hospital outpatient non-attendance imposes a huge financial burden on hospitals every year. The non-attendance problem is rooted in multiple, diverse causes, which makes the problem space particularly complicated and under-explored. The aim of this research is to build an advanced predictive model for non-attendance that considers the whole spectrum of factors, and their complexities, present in big hospital data. We propose a novel non-attendance prediction model based on deep neural networks. The proposed method is based on sparse stacked denoising autoencoders (SSDAEs). Unlike existing deep learning applications to hospital data, which keep the data reconstruction and prediction phases separate, our model integrates both phases, aiming for higher performance than divided-classification models on prediction tasks from electronic patient records (EPR). The proposed method is compared with several well-known machine learning classifiers and with representative prior work on non-attendance prediction. The evaluation results show that the proposed deep approach substantially outperforms the other methods in practice.
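Integrating the reconstruction and prediction phases, as described above, amounts to optimizing a single objective that combines both terms. A minimal sketch of such a joint loss is below; the mean-squared reconstruction term, cross-entropy prediction term, and the weighting factor `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def joint_loss(x, x_hat, y, y_hat, lam=0.5):
    """Joint objective combining autoencoder reconstruction and
    classifier prediction in one loss (lam weighting is an assumption)."""
    recon = np.mean((x - x_hat) ** 2)            # reconstruction term
    eps = 1e-12                                  # numerical stability
    pred = -np.mean(y * np.log(y_hat + eps))     # cross-entropy term
    return recon + lam * pred

# Usage: both terms are minimized together, so the learned features
# are shaped by the prediction task as well as by reconstruction.
x = np.zeros((2, 2))
y = np.eye(2)
loss = joint_loss(x, x, y, y)  # perfect reconstruction and prediction
```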
LoMAE: Low-level Vision Masked Autoencoders for Low-dose CT Denoising
Low-dose computed tomography (LDCT) offers reduced X-ray radiation exposure
but at the cost of compromised image quality, characterized by increased noise
and artifacts. Recently, transformer models emerged as a promising avenue to
enhance LDCT image quality. However, the success of such models relies on a
large amount of paired noisy and clean images, which are often scarce in
clinical settings. In the fields of computer vision and natural language
processing, masked autoencoders (MAE) have been recognized as an effective
label-free self-pretraining method for transformers, due to their exceptional
feature representation ability. However, the original pretraining and
fine-tuning design fails to work in low-level vision tasks like denoising. In
response to this challenge, we redesign the classical encoder-decoder learning
model and facilitate a simple yet effective low-level vision MAE, referred to
as LoMAE, tailored to address the LDCT denoising problem. Moreover, we
introduce an MAE-GradCAM method to shed light on the latent learning mechanisms
of the MAE/LoMAE. Additionally, we explore LoMAE's robustness and
generalizability across a variety of noise levels. Experimental results show that
the proposed LoMAE can enhance the transformer's denoising performance and
greatly relieve the dependence on the ground truth clean data. It also
demonstrates remarkable robustness and generalizability over a spectrum of
noise levels.
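The label-free pretraining step that the abstract builds on hinges on MAE-style random patch masking. The sketch below shows that masking operation on a single-channel image; the patch size and mask ratio are illustrative assumptions (MAE pretraining commonly uses a high ratio such as 0.75), and the paper's LoMAE redesign is not reproduced here.

```python
import numpy as np

def mask_patches(img, patch=4, mask_ratio=0.75, rng=None):
    """Zero out a random fraction of non-overlapping patches
    (MAE-style masking; patch size and ratio are assumptions)."""
    rng = rng or np.random.default_rng(0)
    h, w = img.shape
    ph, pw = h // patch, w // patch        # patch grid dimensions
    n = ph * pw
    n_mask = int(round(mask_ratio * n))    # how many patches to hide
    idx = rng.choice(n, size=n_mask, replace=False)
    out = img.copy()
    for i in idx:
        r, c = divmod(i, pw)
        out[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0.0
    return out

# Usage: the encoder sees only visible patches; the decoder is trained
# to reconstruct the hidden ones, requiring no clean labels.
img = np.ones((8, 8))
masked = mask_patches(img, patch=4, mask_ratio=0.75)
```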