Bayesian nonparametric hierarchical priors provide flexible models for
sharing of information within and across groups. We focus on latent feature
allocation models, where the data structures correspond to multisets or
unbounded sparse matrices. The fundamental development in this regard is the
Hierarchical Indian Buffet process (HIBP), devised by Thibaux and Jordan
(2007). However, little is known in terms of explicit tractable descriptions of
the joint, marginal, posterior and predictive distributions of the HIBP. We
provide explicit novel descriptions of these quantities, in the Bernoulli HIBP
and general spike and slab HIBP settings, which allow for exact sampling and
simpler practical implementation. We then extend these results to the more
complex setting of hierarchies of general HIBP (HHIBP). The generality of our
framework allows one to recognize important structure that may otherwise be
masked in the Bernoulli setting, and involves characterizations via dynamic
mixed Poisson random count matrices. Our analysis shows that the standard
choice of hierarchical Beta processes for modeling across group sharing is not
ideal in the classic Bernoulli HIBP setting proposed by Thibaux and Jordan
(2007), or in other spike and slab HIBP settings, and we thus indicate tractable
alternative priors.

Comment: This is an extensive re-write and extension of arXiv:2103.11407, where
variations of the results for the HIBP (but not the HHIBP) were established.