An Empirical Study of Stochastic Variational Algorithms for the Beta Bernoulli Process
Stochastic variational inference (SVI) is emerging as the most promising
candidate for scaling inference in Bayesian probabilistic models to large
datasets. However, the performance of these methods has been assessed primarily
in the context of Bayesian topic models, particularly latent Dirichlet
allocation (LDA). Deriving several new algorithms, and using synthetic, image
and genomic datasets, we investigate whether the understanding gleaned from LDA
applies in the setting of sparse latent factor models, specifically beta
process factor analysis (BPFA). We demonstrate that the big picture is
consistent: using Gibbs sampling within SVI to maintain certain posterior
dependencies is extremely effective. However, we find that different posterior
dependencies are important in BPFA relative to LDA. In particular,
approximations able to model intra-local variable dependence perform best.
Comment: ICML, 12 pages. Volume 37: Proceedings of The 32nd International Conference on Machine Learning, 2015.
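As a concrete illustration of the stochastic variational updates discussed in this abstract, here is a minimal sketch of SVI for a finite beta-Bernoulli model, a toy stand-in for BPFA. The model, step-size schedule, and all names below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Toy finite beta-Bernoulli model: pi_k ~ Beta(a/K, 1), z_nk ~ Bernoulli(pi_k).
# SVI maintains a variational Beta(alpha_k, beta_k) over each pi_k and blends
# in minibatch-based "intermediate" parameters with a Robbins-Monro step size.
rng = np.random.default_rng(0)
N, K, a = 10_000, 5, 2.0
true_pi = rng.beta(a / K, 1.0, size=K)
Z = (rng.random((N, K)) < true_pi).astype(float)  # latent binary features

alpha = np.ones(K)                     # variational Beta(alpha, beta)
beta = np.ones(K)

for t in range(200):
    rho = (t + 10.0) ** -0.7           # decaying step size (illustrative)
    batch = Z[rng.integers(0, N, size=100)]
    # Intermediate global parameters: pretend the full dataset of size N
    # looked like this minibatch.
    alpha_hat = a / K + N * batch.mean(axis=0)
    beta_hat = 1.0 + N * (1.0 - batch.mean(axis=0))
    # Natural-gradient step = convex blend in the natural parameters.
    alpha = (1 - rho) * alpha + rho * alpha_hat
    beta = (1 - rho) * beta + rho * beta_hat

est_pi = alpha / (alpha + beta)        # posterior-mean estimate of each pi_k
```

The blend in the last two update lines is the key SVI idea: for conjugate exponential-family models, a noisy natural-gradient step reduces to averaging natural parameters.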
Consistent Dynamic Mode Decomposition
We propose a new method for computing Dynamic Mode Decomposition (DMD)
evolution matrices, which we use to analyze dynamical systems. Unlike the
majority of existing methods, our approach is based on a variational
formulation consisting of data alignment penalty terms and constitutive
orthogonality constraints. Our method does not make any assumptions on the
structure of the data or their size, and thus it is applicable to a wide range
of problems including non-linear scenarios or extremely small observation sets.
In addition, our technique is robust to noise that is independent of the
dynamics and it does not require input data to be sequential. Our key idea is
to introduce a regularization term for the forward and backward dynamics. The
obtained minimization problem is solved efficiently using the Alternating
Direction Method of Multipliers (ADMM), which requires two Sylvester equation
solves per
iteration. Our numerical scheme converges empirically and is similar to a
provably convergent ADMM scheme. We compare our approach to various
state-of-the-art methods on several benchmark dynamical systems.
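For context, the following sketches the standard SVD-based "exact DMD" from sequential snapshot pairs, the kind of existing baseline this abstract contrasts with. The paper's variational ADMM formulation with Sylvester solves is not reproduced here, and the synthetic dynamics are an illustrative assumption.

```python
import numpy as np

# Generate snapshots of a known stable linear system x_{k+1} = A x_k.
rng = np.random.default_rng(1)
n, m = 6, 50
A_true = 0.95 * np.linalg.qr(rng.standard_normal((n, n)))[0]  # stable dynamics
X = np.empty((n, m + 1))
X[:, 0] = rng.standard_normal(n)
for k in range(m):
    X[:, k + 1] = A_true @ X[:, k]

X0, X1 = X[:, :-1], X[:, 1:]               # sequential pairs x_k -> x_{k+1}
U, s, Vt = np.linalg.svd(X0, full_matrices=False)
r = int(np.sum(s > 1e-10 * s[0]))          # numerical rank truncation
U, s, Vt = U[:, :r], s[:r], Vt[:r]
A_tilde = U.T @ X1 @ Vt.T / s              # projected evolution matrix
eigvals = np.linalg.eigvals(A_tilde)       # DMD eigenvalues
```

Note the sketch assumes sequential, noise-free data, exactly the restrictions the abstract's variational approach is designed to lift.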
The Lazy Flipper: MAP Inference in Higher-Order Graphical Models by Depth-limited Exhaustive Search
This article presents a new search algorithm for the NP-hard problem of
optimizing functions of binary variables that decompose according to a
graphical model. It can be applied to models of any order and structure. The
main novelty is a technique to constrain the search space based on the topology
of the model. When pursued to the full search depth, the algorithm is
guaranteed to converge to a global optimum, passing through a series of
monotonically improving local optima that are guaranteed to be optimal within a
given and increasing Hamming distance. For a search depth of 1, it specializes
to Iterated Conditional Modes. Between these extremes, a useful tradeoff
between approximation quality and runtime is established. Experiments on models
derived from both illustrative and real problems show that approximations found
with limited search depth match or improve those obtained by state-of-the-art
methods based on message passing and linear programming.
Comment: C++ source code available from http://hci.iwr.uni-heidelberg.de/software.ph
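The abstract notes that at search depth 1 the algorithm specializes to Iterated Conditional Modes (ICM). A minimal sketch of that special case on a tiny pairwise binary model follows; the energy function and all names are illustrative assumptions.

```python
def energy(x, unary, pairs, pairwise):
    """Energy of a binary labeling x over unary and pairwise factors."""
    e = sum(unary[i][x[i]] for i in range(len(x)))
    e += sum(pairwise[k][x[i]][x[j]] for k, (i, j) in enumerate(pairs))
    return e

def icm(unary, pairs, pairwise, x0):
    """Greedy single-variable flips until no flip lowers the energy
    (the depth-1 case: Hamming-distance-1 local optimality)."""
    x = list(x0)
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            y = x.copy()
            y[i] = 1 - y[i]
            if energy(y, unary, pairs, pairwise) < energy(x, unary, pairs, pairwise):
                x, improved = y, True
    return x

# Chain of three variables preferring label 0, with a smoothness prior.
unary = [[0.0, 1.0]] * 3
pairs = [(0, 1), (1, 2)]
pairwise = [[[0.0, 0.5], [0.5, 0.0]]] * 2
x_opt = icm(unary, pairs, pairwise, [1, 1, 1])
```

Larger search depths would flip connected subsets of up to d variables, trading runtime for stronger (Hamming-distance-d) optimality guarantees.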
Pairwise Quantization
We consider the task of lossy compression of high-dimensional vectors through
quantization. We propose an approach that learns quantization parameters by
minimizing the distortion of scalar products and squared distances between
pairs of points. This is in contrast to previous works that obtain these
parameters through the minimization of the reconstruction error of individual
points. The proposed approach proceeds by finding a linear transformation of
the data that effectively reduces the minimization of the pairwise distortions
to the minimization of individual reconstruction errors. After such
transformation, any of the previously-proposed quantization approaches can be
used. Despite the simplicity of this transformation, the experiments
demonstrate that it achieves a considerable reduction of the pairwise
distortions compared to applying quantization directly to the untransformed data.
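The pipeline this abstract describes can be sketched as follows. The paper's learned transformation is not reproduced here; an orthogonal PCA rotation serves as an illustrative stand-in, and the reusable per-point quantizer is plain uniform scalar rounding.

```python
import numpy as np

# Anisotropic synthetic data (illustrative).
rng = np.random.default_rng(2)
X = rng.standard_normal((500, 8)) * np.array([4, 3, 2, 1, .5, .4, .3, .2])

def uniform_quantize(Y, step=0.5):
    """Off-the-shelf per-point quantizer (stand-in for any method)."""
    return np.round(Y / step) * step

# Stand-in linear transformation: rotate to the PCA basis of the data.
_, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
T = Vt.T                                   # orthogonal, distances preserved

Xq = uniform_quantize(X @ T) @ T.T         # quantize in the transformed space

def pairwise_distance_distortion(A, B, n_pairs=2000):
    """Mean squared error of squared distances over random pairs."""
    i = rng.integers(0, len(A), n_pairs)
    j = rng.integers(0, len(A), n_pairs)
    d_true = np.sum((A[i] - A[j]) ** 2, axis=1)
    d_appr = np.sum((B[i] - B[j]) ** 2, axis=1)
    return float(np.mean((d_true - d_appr) ** 2))

distortion = pairwise_distance_distortion(X, Xq)
```

The point of the structure, per the abstract, is that after the right linear transformation, minimizing individual reconstruction error inside the quantizer also controls the pairwise distortions one actually cares about.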
Discrete spherical means of directional derivatives and Veronese maps
We describe and study geometric properties of discrete circular and spherical
means of directional derivatives of functions, as well as discrete
approximations of higher order differential operators. For an arbitrary
dimension we present a general construction for obtaining discrete spherical
means of directional derivatives. The construction is based on
Minkowski's existence theorem and Veronese maps. Approximating the directional
derivatives by appropriate finite differences allows one to obtain finite
difference operators with good rotation invariance properties. In particular,
we use discrete circular and spherical means to derive discrete approximations
of various linear and nonlinear first- and second-order differential operators,
including discrete Laplacians. A practical potential of our approach is
demonstrated by considering applications to nonlinear filtering of digital
images and surface curvature estimation.
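The core idea behind such circular-mean Laplacians can be sketched in two dimensions using the classical expansion mean_{|u|=r} f(x+u) ≈ f(x) + (r²/4) Δf(x). The sketch below illustrates only this general principle, not the paper's full Veronese-map construction; the sampling scheme and names are illustrative assumptions.

```python
import numpy as np

def circular_mean_laplacian(f, x, y, r=0.01, n=16):
    """Estimate the 2D Laplacian of f at (x, y) from n samples on a
    circle of radius r, via  Lap f ≈ 4 (circular mean - f) / r^2."""
    theta = 2.0 * np.pi * np.arange(n) / n       # n equally spaced directions
    mean = np.mean(f(x + r * np.cos(theta), y + r * np.sin(theta)))
    return 4.0 * (mean - f(x, y)) / r**2

# For the quadratic f(x, y) = x^2 + 3 y^2 the exact Laplacian is 8.
lap = circular_mean_laplacian(lambda x, y: x**2 + 3.0 * y**2, 0.7, -0.2)
```

Because equally spaced circular samples average quadratics exactly, the estimate is exact for second-order polynomials, which hints at the rotation-invariance properties the abstract emphasizes.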