Maximum Entropy Discrimination Factor Analyzers
Devising generative models that allow for inferring low-dimensional latent feature representations of high-dimensional observations is a significant problem in statistical machine learning. Factor analysis (FA) is a well-established linear latent variable scheme that addresses this problem by modeling the covariances between the elements of multivariate observations under a set of linear assumptions. FA is closely related to principal components analysis (PCA), and may be considered a generalization of both PCA and its probabilistic version, PPCA. Recently, the invention of Gaussian process latent variable models (GP-LVMs) has given rise to a whole new family of latent variable modeling schemes that generalize FA under a nonparametric Bayesian inference framework. In this work, we examine the generalization of FA models under a different Bayesian inference perspective. Specifically, we propose a large-margin formulation of FA under the maximum entropy discrimination (MED) framework. The MED framework integrates the large-margin principle with Bayesian posterior inference in an elegant and computationally efficient fashion, allowing us to leverage existing high-performance solvers for convex optimization problems. We devise efficient mean-field inference algorithms for our model, and demonstrate its advantages by evaluating it in a number of diverse application scenarios dealing with high-dimensional data classification and reconstruction.
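
As background for the abstract, the standard FA generative model it builds on can be sketched as follows. This is a minimal illustration of plain FA only, not the paper's MED formulation; all dimensions, variable names, and the random seed are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
D, K, N = 20, 3, 2000  # observation dim, latent dim, number of samples

# FA generative model: x = W z + mu + eps, with
# z ~ N(0, I_K) and eps ~ N(0, Psi), Psi diagonal.
W = rng.normal(size=(D, K))          # factor loading matrix
mu = rng.normal(size=D)              # mean of the observations
Psi = np.diag(rng.uniform(0.1, 0.5, size=D))  # diagonal noise covariance

# Draw low-dimensional latents, then map them to high-dimensional observations.
Z = rng.normal(size=(N, K))
X = Z @ W.T + mu + rng.multivariate_normal(np.zeros(D), Psi, size=N)

# FA models the covariance between elements of x as W W^T + Psi;
# for large N the empirical covariance of X approaches this matrix.
model_cov = W @ W.T + Psi
sample_cov = np.cov(X, rowvar=False)
max_abs_err = np.abs(model_cov - sample_cov).max()
```

The low-rank term `W W^T` captures covariance shared through the `K` latent factors, while the diagonal `Psi` absorbs per-dimension noise; PPCA is recovered as the special case `Psi = sigma^2 I`.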