Bayesian brain theory suggests that the brain employs generative models to
understand the external world. The sampling-based perspective posits that the brain approximates the posterior distribution with samples, realized as stochastic neuronal responses. Additionally, the brain continually updates its generative model to approach the true distribution of the external world. In this study, we
introduce the Hierarchical Exponential-family Energy-based (HEE) model, which
captures the dynamics of inference and learning. In the HEE model, we decompose the partition function into layer-wise components and leverage a group of neurons with shorter time constants to sample the gradient of each decomposed normalization term. This allows our model to estimate the partition function and perform inference simultaneously, circumventing the negative phase encountered in conventional energy-based models (EBMs).
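One way to make this concrete, a minimal sketch under assumptions of ours rather than statements from the paper: suppose the hierarchy factorizes into exponential-family conditionals over layers $z_0 = x, z_1, \dots, z_L$,
\begin{align}
  p_\theta(z_l \mid z_{l-1}) &= \exp\!\bigl(-E_l(z_{l-1}, z_l) - A_l(z_{l-1})\bigr), \\
  A_l(z_{l-1}) &= \log \int \exp\bigl(-E_l(z_{l-1}, z_l)\bigr)\, \mathrm{d}z_l,
\end{align}
so the global partition function is replaced by per-layer log-normalizers $A_l$. Their gradients,
\begin{equation}
  \nabla_\theta A_l(z_{l-1}) = -\,\mathbb{E}_{p_\theta(z_l \mid z_{l-1})}\bigl[\nabla_\theta E_l(z_{l-1}, z_l)\bigr],
\end{equation}
are expectations under purely local conditionals, which samples from the fast neurons can estimate without a model-wide negative phase.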
As a result, the learning process is localized in both time and space, and the model converges easily. To match the brain's rapid computation, we demonstrate that neural adaptation can serve as a momentum term, significantly accelerating inference.
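A schematic reading of this claim, with symbols of our own choosing rather than the paper's: if inference is gradient ascent of the neural state $z_t$ on the log-posterior, an adaptation variable $a_t$ with decay $\gamma \in (0, 1)$ plays the role of a heavy-ball momentum term,
\begin{align}
  a_{t+1} &= \gamma\, a_t + \eta\, \nabla_{z} \log p_\theta(z_t \mid x), \\
  z_{t+1} &= z_t + a_{t+1},
\end{align}
which reduces to plain gradient ascent at $\gamma = 0$ and accelerates convergence for $\gamma > 0$.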
On natural image datasets, our model exhibits representations akin to those observed in the biological visual system. Furthermore, of interest to the machine learning community, our model can generate observations through either joint or marginal generation. We show that marginal generation outperforms joint generation and achieves performance on par with
other EBMs.
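The abstract does not define the two generation modes; one natural reading, offered here only as our assumption, is that joint generation runs the sampler on the full energy over observations and latents together, while marginal generation first draws the latent layers and then samples observations from the learned conditional:
\begin{align}
  \text{joint:} &\quad (x, z_{1:L}) \sim p_\theta(x, z_{1:L}), \\
  \text{marginal:} &\quad z_{1:L} \sim p_\theta(z_{1:L}), \qquad x \sim p_\theta(x \mid z_1).
\end{align}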