Sum-product networks (SPNs) are graphical models capable of handling large amounts of multi-dimensional data. Unlike many other graphical models, SPNs are tractable if certain structural requirements are fulfilled; a model is called tractable if probabilistic inference can be performed in polynomial time with respect to the size of the model. The learning of SPNs can be separated into two modes: parameter learning and structure learning. Many earlier approaches to SPN learning have treated the two modes as separate, but it has been found that alternating between them can yield good results. One example of such an algorithm was presented by Trapp et al. in the article Bayesian Learning of Sum-Product Networks (NeurIPS, 2019).
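As a minimal illustration of why inference in an SPN is tractable, the following sketch (not from the thesis; the class names and the choice of Gaussian leaves are assumptions made here for the example) evaluates the model density with a single bottom-up pass: sum nodes take weighted sums of their children's values and product nodes take products, so the cost is linear in the number of edges.

```python
import math


class Leaf:
    """Univariate Gaussian leaf over one dimension of the input."""
    def __init__(self, dim, mean, std):
        self.dim, self.mean, self.std = dim, mean, std

    def value(self, x):
        z = (x[self.dim] - self.mean) / self.std
        return math.exp(-0.5 * z * z) / (self.std * math.sqrt(2 * math.pi))


class Product:
    def __init__(self, children):
        self.children = children

    def value(self, x):
        result = 1.0
        for child in self.children:
            result *= child.value(x)
        return result


class Sum:
    def __init__(self, weights, children):
        self.weights, self.children = weights, children

    def value(self, x):
        return sum(w * c.value(x) for w, c in zip(self.weights, self.children))


# Example: a two-dimensional mixture expressed as a small SPN.
spn = Sum(
    [0.4, 0.6],
    [
        Product([Leaf(0, -1.0, 1.0), Leaf(1, -1.0, 1.0)]),
        Product([Leaf(0, 2.0, 0.5), Leaf(1, 2.0, 0.5)]),
    ],
)
print(spn.value([0.0, 0.0]))  # density of the model at (0, 0)
```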
This thesis discusses SPNs and a Bayesian learning algorithm developed on the basis of the aforementioned algorithm, differing in some of the methods used. The algorithm by Trapp et al. uses Gibbs sampling in the parameter learning phase, whereas here Metropolis-Hastings MCMC is used. The algorithm developed for this thesis was applied in two experiments: one with a small, simple SPN and one with a larger, more complex SPN. The effects of data set size and data complexity were also explored. The results were compared to those obtained by running the original algorithm by Trapp et al.
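To illustrate the kind of sampling step involved, here is a minimal random-walk Metropolis-Hastings sketch. It is not the thesis implementation: it assumes, purely for the example, a single Gaussian leaf with known standard deviation and a standard-normal prior on its mean.

```python
import math
import random


def log_posterior(mean, data, std=1.0):
    # Standard-normal prior on the mean plus the Gaussian log-likelihood.
    log_prior = -0.5 * mean * mean
    log_lik = sum(-0.5 * ((x - mean) / std) ** 2 for x in data)
    return log_prior + log_lik


def metropolis_hastings(data, n_samples=5000, step=0.5):
    samples, mean = [], 0.0
    current = log_posterior(mean, data)
    for _ in range(n_samples):
        proposal = mean + random.gauss(0.0, step)  # symmetric proposal
        candidate = log_posterior(proposal, data)
        # Accept with probability min(1, posterior ratio).
        if math.log(random.random()) < candidate - current:
            mean, current = proposal, candidate
        samples.append(mean)
    return samples


data = [random.gauss(2.0, 1.0) for _ in range(100)]
samples = metropolis_hastings(data)
print(sum(samples[1000:]) / len(samples[1000:]))  # posterior mean estimate
```

Because the random-walk proposal is symmetric, the acceptance probability reduces to the posterior ratio, which is the form used in the sketch above.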
The results show that having more data in the learning phase makes the results more accurate, as it is easier for the model to detect patterns in a larger data set. It was also shown that the model was able to learn the parameters in the experiments if the data were simple enough, in other words, if each dimension of the data contained only one distribution. In the case of more complex data, where there were multiple distributions per dimension, the difficulty of the computation was evident from the results.