This paper begins with a description of methods for estimating probability
density functions for images that reflect the observation that such data is
usually constrained to lie in restricted regions of the high-dimensional image
space - not every pattern of pixels is an image. It is common to say that
images lie on a lower-dimensional manifold in the high-dimensional space.
However, although images may lie on such lower-dimensional manifolds, it is not
the case that all points on the manifold have an equal probability of being
images. Images are unevenly distributed on the manifold, and our task is to
devise ways to model this distribution as a probability distribution. In
pursuing this goal, we consider generative models that are popular in the AI
and computer vision communities. For our purposes, generative/probabilistic models
should have two properties: 1) sample generation: it should be possible to
draw samples from this distribution according to the modelled density function, and
2) probability computation: given a previously unseen sample from the dataset
of interest, one should be able to compute the probability of the sample, at
least up to a normalising constant. To this end, we investigate the use of
methods such as normalising flows and diffusion models. We then show that such
probabilistic descriptions can be used to construct defences against
adversarial attacks. In addition to describing the manifold in terms of
density, we also consider how semantic interpretations can be used to describe
points on the manifold. To this end, we consider an emergent language framework
which makes use of variational encoders to produce a disentangled
representation of points that reside on a given manifold. Trajectories between
points on a manifold can then be described in terms of evolving semantic
descriptions.

Comment: 23 pages, 17 figures, 1 table
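To make the two required properties concrete, the following is a minimal sketch (ours, not a model from the paper) of a one-dimensional affine normalising flow. The parameters `mu` and `sigma` are illustrative; exact density evaluation follows from the change-of-variables formula.

```python
import numpy as np

# Minimal illustration of the two properties a generative model should have:
# 1) sample generation and 2) probability computation.
# A single affine normalising flow: x = mu + sigma * z, with z ~ N(0, 1).

rng = np.random.default_rng(0)
mu, sigma = 2.0, 0.5  # hypothetical flow parameters, chosen for illustration

def sample(n):
    """Property 1: draw samples by pushing base noise through the flow."""
    z = rng.standard_normal(n)
    return mu + sigma * z

def log_prob(x):
    """Property 2: exact log-density via change of variables:
    log p(x) = log N(z; 0, 1) - log|sigma|, where z = (x - mu) / sigma."""
    z = (x - mu) / sigma
    base_log_density = -0.5 * (z ** 2 + np.log(2.0 * np.pi))
    return base_log_density - np.log(abs(sigma))

xs = sample(100_000)
# The density at the mode mu should exceed the density one sigma away.
assert log_prob(mu) > log_prob(mu + sigma)
```

Real normalising flows compose many such invertible maps (with learned, input-dependent parameters) so that both sampling and exact log-density remain tractable in high dimensions; diffusion models, by contrast, typically provide only a bound or estimate of the density.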