Learning generative probabilistic models is a core problem in machine
learning that presents significant challenges due to the curse of
dimensionality. This paper proposes a joint dimensionality reduction and
non-parametric density estimation framework, using a novel estimator that
explicitly captures the underlying distribution of appropriate
reduced-dimension representations of the input data. The idea is to jointly
design a nonlinear dimensionality-reducing auto-encoder to model the
training data in terms of a
parsimonious set of latent random variables, and learn a canonical low-rank
tensor model of the joint distribution of the latent variables in the Fourier
domain. The proposed latent density model is non-parametric and universal, in
contrast to the predefined prior assumed in variational auto-encoders.
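To make the latent density model concrete, the sketch below gives one possible reading of a truncated Fourier-series density whose coefficient tensor is constrained to a low-rank canonical polyadic (CPD) form, so the density factors into a sum of products of one-dimensional terms. The class name, the assumption that latents lie in [0, 1]^d, the initialization scale, and the clamping used to keep the log-likelihood finite are our illustrative choices, not the paper's exact construction.

```python
import torch


class LowRankFourierDensity(torch.nn.Module):
    """Sketch: latent density on [0, 1]^d whose truncated Fourier
    coefficient tensor has a rank-R canonical polyadic (CPD) structure."""

    def __init__(self, dim: int, rank: int, num_freqs: int):
        super().__init__()
        # Frequencies k = -K, ..., K of the truncated Fourier series.
        self.register_buffer("freqs",
                             torch.arange(-num_freqs, num_freqs + 1).float())
        # One complex factor matrix per latent dimension, shape (2K+1, R);
        # column r holds the 1-D Fourier coefficients of the r-th factor.
        self.factors = torch.nn.ParameterList(
            [torch.nn.Parameter(0.01 * torch.randn(2 * num_freqs + 1, rank,
                                                   dtype=torch.cfloat))
             for _ in range(dim)]
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        """z: (batch, dim) with entries in [0, 1]; returns density values."""
        prod = torch.ones(z.shape[0], self.factors[0].shape[1],
                          dtype=torch.cfloat, device=z.device)
        for j, A in enumerate(self.factors):
            # Complex Fourier basis e^{i 2 pi k z_j}, shape (batch, 2K+1).
            basis = torch.exp(2j * torch.pi * z[:, j:j + 1] * self.freqs)
            prod = prod * (basis @ A)  # per-rank 1-D factor values
        # Real part as the density value; clamping keeps the negative
        # log-likelihood finite (the paper's positivity handling may differ).
        return prod.sum(dim=1).real.clamp_min(1e-8)
```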
Joint optimization of the auto-encoder and the latent density estimator is
pursued via a formulation that learns both by minimizing a combination of the
negative log-likelihood in the latent domain and the auto-encoder
reconstruction loss. We demonstrate that the proposed model achieves very
promising results on regression, sampling, and anomaly detection tasks across
toy, tabular, and image datasets.
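As a minimal sketch of the stated joint objective, the function below combines the auto-encoder reconstruction error with the negative log-likelihood of the latent codes under a latent density model such as the one above. The MSE reconstruction term and the trade-off weight `beta` are our assumptions rather than the paper's exact choices, and the encoder is assumed to output codes inside the density model's support (e.g., via a sigmoid output layer).

```python
import torch
import torch.nn.functional as F


def joint_loss(x, encoder, decoder, density, beta=1.0):
    """Hedged reading of the joint objective: reconstruction loss plus the
    latent-domain negative log-likelihood; `beta` balances the two terms."""
    z = encoder(x)                       # reduced-dimension latent codes
    recon = F.mse_loss(decoder(z), x)    # auto-encoder reconstruction loss
    nll = -torch.log(density(z)).mean()  # latent negative log-likelihood
    return recon + beta * nll
```

Minimizing this single objective with a stochastic gradient optimizer updates the encoder, decoder, and latent density parameters together, which is what the joint formulation in the abstract calls for.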