Functional Mixed Membership Models
Mixed membership models, or partial membership models, are a flexible
unsupervised learning method that allows each observation to belong to multiple
clusters. In this paper, we propose a Bayesian mixed membership model for
functional data. By using the multivariate Karhunen-Lo\`eve theorem, we are
able to derive a scalable representation of Gaussian processes that maintains
data-driven learning of the covariance structure. Within this framework, we
establish conditional posterior consistency given a known feature allocation
matrix. Compared to previous work on mixed membership models, our proposal
allows for increased modeling flexibility, with the benefit of a directly
interpretable mean and covariance structure. Our work is motivated by studies
in functional brain imaging through electroencephalography (EEG) of children
with autism spectrum disorder (ASD). In this context, our work formalizes the
clinical notion of "spectrum" in terms of feature membership proportions.
Comment: 77 pages, 16 figures
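The scalable representation mentioned in this abstract rests on the Karhunen-Loève idea of expanding a Gaussian process in the eigenpairs of its covariance. As a minimal sketch (not the paper's multivariate construction; the RBF kernel, grid, and truncation level K are illustrative assumptions), a GP path on a grid can be approximated by the top-K eigenpairs of the covariance matrix:

```python
import numpy as np

def kl_basis(grid, kernel, K):
    """Top-K eigenpairs of the covariance matrix evaluated on `grid`."""
    C = kernel(grid[:, None], grid[None, :])
    vals, vecs = np.linalg.eigh(C)           # eigh returns ascending order
    idx = np.argsort(vals)[::-1][:K]         # keep the K largest eigenvalues
    return vals[idx], vecs[:, idx]

def sample_gp(grid, kernel, K, rng):
    """One approximate GP draw: sum_k sqrt(lambda_k) * z_k * phi_k(t)."""
    vals, vecs = kl_basis(grid, kernel, K)
    z = rng.standard_normal(K)               # independent standard normal scores
    return vecs @ (np.sqrt(np.clip(vals, 0.0, None)) * z)

# Illustrative squared-exponential kernel with lengthscale-like parameter 0.1
rbf = lambda s, t: np.exp(-0.5 * (s - t) ** 2 / 0.1)
grid = np.linspace(0.0, 1.0, 200)
path = sample_gp(grid, rbf, K=20, rng=np.random.default_rng(0))
```

Truncating at K terms is what makes the representation scale: inference operates on K basis scores per curve rather than on the full grid of function values.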
A Tensor Approach to Learning Mixed Membership Community Models
Community detection is the task of detecting hidden communities from observed
interactions. Guaranteed community detection has so far been mostly limited to
models with non-overlapping communities such as the stochastic block model. In
this paper, we remove this restriction, and provide guaranteed community
detection for a family of probabilistic network models with overlapping
communities, termed as the mixed membership Dirichlet model, first introduced
by Airoldi et al. This model allows for nodes to have fractional memberships in
multiple communities and assumes that the community memberships are drawn from
a Dirichlet distribution. Moreover, it contains the stochastic block model as a
special case. We propose a unified approach to learning these models via a
tensor spectral decomposition method. Our estimator is based on low-order
moment tensor of the observed network, consisting of 3-star counts. Our
learning method is fast and is based on simple linear algebraic operations,
e.g. singular value decomposition and tensor power iterations. We provide
guaranteed recovery of community memberships and model parameters and present a
careful finite sample analysis of our learning method. As an important special
case, our results match the best known scaling requirements for the
(homogeneous) stochastic block model.
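The tensor power iterations mentioned above refer to a standard routine for extracting components of a symmetric tensor with an orthogonal decomposition. A minimal sketch (an illustrative toy, not the paper's estimator built from 3-star counts): repeatedly apply the contraction T(I, u, u) and normalize, which converges to one of the orthonormal components.

```python
import numpy as np

def outer3(v):
    """Symmetric rank-one third-order tensor v (x) v (x) v."""
    return np.einsum('i,j,k->ijk', v, v, v)

def tensor_apply(T, u):
    """Contract a third-order tensor along its last two modes: T(I, u, u)."""
    return np.einsum('ijk,j,k->i', T, u, u)

def power_iteration(T, iters=100, rng=None):
    """Tensor power method: u <- T(I, u, u) / ||T(I, u, u)||."""
    rng = rng or np.random.default_rng(0)
    u = rng.standard_normal(T.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(iters):
        v = tensor_apply(T, u)
        u = v / np.linalg.norm(v)
    lam = tensor_apply(T, u) @ u             # recovered eigenvalue
    return lam, u

# Toy tensor with an exact orthogonal decomposition: weights 2 and 1 on e1, e2
w = np.array([2.0, 1.0])
V = np.eye(2)
T = sum(wi * outer3(vi) for wi, vi in zip(w, V))
lam, u = power_iteration(T)                  # converges to the e1 component
```

In the full method, such iterations are run with random restarts and deflation on a (whitened) moment tensor, which is where the guaranteed recovery of memberships and parameters comes from.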
Mixed membership stochastic blockmodels
Observations consisting of measurements on relationships for pairs of objects
arise in many settings, such as protein interaction and gene regulatory
networks, collections of author-recipient email, and social networks. Analyzing
such data with probabilistic models can be delicate because the simple
exchangeability assumptions underlying many boilerplate models no longer hold.
In this paper, we describe a latent variable model of such data called the
mixed membership stochastic blockmodel. This model extends blockmodels for
relational data to ones which capture mixed membership latent relational
structure, thus providing an object-specific low-dimensional representation. We
develop a general variational inference algorithm for fast approximate
posterior inference. We explore applications to social and protein interaction
networks.
Comment: 46 pages, 14 figures, 3 tables
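The generative process behind the mixed membership stochastic blockmodel described above can be sketched directly: each node draws a membership vector from a Dirichlet, and for each directed pair the two nodes draw per-interaction roles whose blockmatrix entry gives the edge probability. A minimal simulation (the Dirichlet parameter and blockmatrix values here are illustrative assumptions, not from the paper):

```python
import numpy as np

def sample_mmsb(n, alpha, B, rng):
    """Simulate a directed adjacency matrix from the MMSB generative process."""
    K = len(alpha)
    pi = rng.dirichlet(alpha, size=n)        # per-node mixed memberships
    Y = np.zeros((n, n), dtype=int)
    for p in range(n):
        for q in range(n):
            if p == q:
                continue                     # no self-edges
            zp = rng.choice(K, p=pi[p])      # sender's role for this pair
            zq = rng.choice(K, p=pi[q])      # receiver's role for this pair
            Y[p, q] = rng.binomial(1, B[zp, zq])
    return pi, Y

rng = np.random.default_rng(1)
B = np.array([[0.9, 0.05],                   # dense within blocks,
              [0.05, 0.8]])                  # sparse across blocks
pi, Y = sample_mmsb(20, alpha=np.array([0.5, 0.5]), B=B, rng=rng)
```

Because roles are redrawn for every pair, a node can act as a member of different communities in different interactions, which is exactly the mixed membership latent structure the abstract contrasts with single-membership blockmodels.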