Deep neural networks (DNNs) lack the precise semantics and definitive
probabilistic interpretation of probabilistic graphical models (PGMs). In this
paper, we address this gap by constructing infinite tree-structured PGMs that
correspond exactly to neural networks. We show that forward propagation in a
DNN performs an approximation of PGM inference that becomes exact in this
alternative structure. Not only does
our research complement existing studies that describe neural networks as
kernel machines or infinite-sized Gaussian processes, it also elucidates a more
direct approximation that DNNs make to exact inference in PGMs. Potential
benefits include improved pedagogy and interpretation of DNNs, and algorithms
that combine the strengths of PGMs and DNNs.