The autoencoder is an unsupervised learning paradigm that aims to create a
compact latent representation of data by minimizing the reconstruction loss.
However, it tends to overlook the fact that most data (e.g., images) lie on or
near a lower-dimensional manifold, a structure that is crucial for effective
data representation.
To address this limitation, we propose Low-Rank Autoencoder (LoRAE), a simple
autoencoder extension that incorporates a low-rank regularizer to adaptively
learn a low-dimensional latent space while preserving the basic autoencoder
objective. Embedding the data in this low-rank latent space retains the
important information while discarding redundant dimensions. Theoretically, we
establish a tighter error bound for our model. Empirically, our model
outperforms baselines on tasks such as image generation and downstream
classification. Both theoretical and empirical results underscore the
importance of acquiring low-dimensional embeddings.

Comment: Accepted @ IEEE/CVF WACV 202
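The abstract does not spell out the regularizer's exact form, but a standard convex surrogate for a low-rank penalty is the nuclear norm (sum of singular values) of the batch of latent codes, added to the usual reconstruction loss. The sketch below illustrates that combined objective; the function name `lorae_loss` and the weight `lam` are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def lorae_loss(x, x_hat, z, lam=0.1):
    """Sketch of a low-rank autoencoder objective:
    reconstruction error plus a nuclear-norm penalty on latent codes.
    x, x_hat: (batch, input_dim) inputs and reconstructions
    z:        (batch, latent_dim) latent codes
    lam:      weight of the low-rank regularizer (assumed hyperparameter)
    """
    recon = np.mean((x - x_hat) ** 2)  # standard autoencoder term
    # nuclear norm = sum of singular values, a convex surrogate for rank(z)
    nuclear = np.linalg.svd(z, compute_uv=False).sum()
    return recon + lam * nuclear

# toy batch: 4 samples, 8-dim inputs, 3-dim latent codes
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
x_hat = x + 0.01 * rng.normal(size=(4, 8))
z = rng.normal(size=(4, 3))
loss = lorae_loss(x, x_hat, z)
```

In a training loop, this scalar would be minimized by gradient descent over the encoder and decoder parameters; the penalty drives the singular values of the latent batch toward zero, encouraging an effectively lower-dimensional latent space.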