In recent years, images have come to be regarded as samples from a high-dimensional distribution,
and deep learning has become almost synonymous with image generation. However,
is a deep learning network truly necessary for image generation? In this paper,
we investigate the possibility of image generation without a deep learning
network, motivated by the goal of validating the assumption that images follow a
high-dimensional distribution. Since images are assumed to be samples from such
a distribution, we utilize the Gaussian Mixture Model (GMM) to describe it. In
particular, we employ a recent distribution learning technique called
Monte-Carlo Marginalization to estimate the parameters of the GMM from image
samples. We also use Singular Value Decomposition (SVD) for
dimensionality reduction to reduce computational complexity. In our
evaluation experiments, we first model the distribution of image
samples directly to verify the assumption that images truly follow a
distribution. We then apply SVD for dimensionality reduction, and the principal
components, rather than raw image data, are used for distribution learning.
Compared to methods relying on deep learning networks, our approach is more
explainable, and its performance is promising. Experiments show that images
generated by our approach achieve a lower FID than those generated by
variational auto-encoders (VAEs), demonstrating the feasibility of image generation without deep
learning networks.

Comment: This paper has been rejected. I am planning to combine this paper with
another paper of mine to make one strong paper.
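
As an illustration of the pipeline described in the abstract, the following is a minimal sketch: SVD-based dimensionality reduction followed by a GMM fitted on the principal components and sampled to synthesize new images. It is not the paper's implementation; scikit-learn's EM-fitted GaussianMixture stands in for the paper's Monte-Carlo Marginalization, and the dataset, component counts, and image size are illustrative assumptions.

    # Sketch only: SVD + GMM image generation (EM used in place of
    # Monte-Carlo Marginalization; data and sizes are placeholders).
    import numpy as np
    from sklearn.decomposition import TruncatedSVD
    from sklearn.mixture import GaussianMixture

    # Assume X holds flattened grayscale images, one per row (e.g. 28x28 -> 784 dims).
    rng = np.random.default_rng(0)
    X = rng.random((1000, 784))  # placeholder for real image samples

    # SVD for dimensionality reduction: keep a small number of principal components.
    svd = TruncatedSVD(n_components=50, random_state=0)
    Z = svd.fit_transform(X)  # low-dimensional representation of the images

    # Describe the distribution of the principal components with a Gaussian mixture.
    gmm = GaussianMixture(n_components=10, covariance_type="full", random_state=0)
    gmm.fit(Z)

    # Generate new images: sample from the GMM, then map back to pixel space.
    z_new, _ = gmm.sample(n_samples=16)
    X_new = svd.inverse_transform(z_new)
    images = X_new.reshape(-1, 28, 28).clip(0.0, 1.0)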