Probabilistic models based on continuous latent spaces, such as variational
autoencoders, can be understood as uncountable mixture models where components
depend continuously on the latent code. They have proven to be expressive tools
for generative and probabilistic modelling, but are at odds with tractable
probabilistic inference, that is, computing marginals and conditionals of the
represented probability distribution. Meanwhile, tractable probabilistic models
such as probabilistic circuits (PCs) can be understood as hierarchical discrete
mixture models, which allows them to perform exact inference, but they often
show subpar performance compared to continuous latent-space models. In
this paper, we investigate a hybrid approach, namely continuous mixtures of
tractable models with a small latent dimension. While these models are
analytically intractable, they are readily amenable to numerical integration
schemes based on a finite set of integration points. With a sufficiently large
number of integration points, the approximation becomes de facto exact.
Moreover, using
a finite set of integration points, the approximation method can be compiled
into a PC performing `exact inference in an approximate model'. In experiments,
we show that this simple scheme proves remarkably effective, as PCs learned
this way set a new state of the art for tractable models on many standard
density estimation benchmarks.
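
As a minimal sketch of the idea (the notation below is ours and is introduced
only for illustration): write $p(z)$ for a prior over a low-dimensional latent
code $z$ and $p_\theta(x \mid z)$ for a tractable component model. The
continuous mixture and its numerical-integration approximation then read
\[
  p_\theta(x) \;=\; \int p(z)\, p_\theta(x \mid z)\, \mathrm{d}z
  \;\approx\; \sum_{i=1}^{N} w_i\, p_\theta(x \mid z_i),
  \qquad w_i \ge 0,\quad \sum_{i=1}^{N} w_i = 1,
\]
where $\{(z_i, w_i)\}_{i=1}^{N}$ is a finite set of integration points with
associated weights, e.g., obtained from a quadrature rule. The right-hand side
is a finite mixture of tractable components, that is, a PC with a single sum
node over $N$ component circuits, in which marginals and conditionals can be
computed exactly.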