On largest volume simplices and sub-determinants
We show that the problem of finding the simplex of largest volume in the
convex hull of points in can be approximated with a factor
of in polynomial time. This improves upon the previously best
known approximation guarantee of by Khachiyan. On the other hand,
we show that there exists a constant such that this problem cannot be
approximated with a factor of , unless P = NP. This improves over the
previously known inapproximability bound. Our hardness result holds
even if , in which case there exists a \bar c\,^{d}-approximation
algorithm that relies on recent sampling techniques, where is again a
constant. We show that similar results hold for the problem of finding the
largest absolute value of a subdeterminant of a matrix.
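Concretely, the volume of a simplex on d+1 points in R^d is a scaled determinant, which makes the search problem easy to state even though it is hard to solve. Below is a minimal sketch in numpy: the volume formula plus exhaustive search over all (d+1)-subsets, usable only for tiny instances. This is an illustration of the problem, not the polynomial-time approximation algorithm from the paper.

```python
import numpy as np
from itertools import combinations
from math import factorial

def simplex_volume(points):
    """Volume of the simplex spanned by d+1 points in R^d:
    |det(p_1 - p_0, ..., p_d - p_0)| / d!."""
    P = np.asarray(points, dtype=float)
    E = (P[1:] - P[0]).T                     # d x d matrix of edge vectors
    return abs(np.linalg.det(E)) / factorial(P.shape[1])

def largest_simplex_bruteforce(points):
    """Exhaustive search over all (d+1)-subsets; exponential in d,
    so only feasible for toy instances."""
    P = np.asarray(points, dtype=float)
    d = P.shape[1]
    best, best_vol = None, -1.0
    for S in combinations(range(len(P)), d + 1):
        v = simplex_volume(P[list(S)])
        if v > best_vol:
            best, best_vol = S, v
    return best, best_vol
```

For example, among the four corners of the unit square plus its center, the largest triangle is half a square, with area 0.5.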
Fast and Robust Recursive Algorithms for Separable Nonnegative Matrix Factorization
In this paper, we study the nonnegative matrix factorization problem under
the separability assumption (that is, there exists a cone spanned by a small
subset of the columns of the input nonnegative data matrix containing all
columns), which is equivalent to the hyperspectral unmixing problem under the
linear mixing model and the pure-pixel assumption. We present a family of fast
recursive algorithms, and prove they are robust under any small perturbations
of the input data matrix. This family generalizes several existing
hyperspectral unmixing algorithms and hence provides for the first time a
theoretical justification of their better practical performance.

Comment: 30 pages, 2 figures, 7 tables. Main change: improvement of the bound
of the main theorem (Th. 3), replacing r with sqrt(r).
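One widely used member of this recursive family is the successive projection algorithm (SPA): repeatedly select the column of largest norm as an estimated pure pixel, then project all columns onto the orthogonal complement of the selection. A minimal numpy sketch of plain SPA, not the paper's generalized, provably robust variants, assuming r does not exceed the rank of the data matrix:

```python
import numpy as np

def spa(M, r):
    """Successive projection algorithm (plain version): pick r column
    indices of the nonnegative data matrix M as estimated pure pixels.
    Assumes r <= rank(M), so the selected column is never zero."""
    R = M.astype(float).copy()
    indices = []
    for _ in range(r):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))  # largest residual column
        indices.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)                        # project out that direction
    return indices
```

On a separable matrix whose first columns are the pure pixels and whose remaining columns are convex mixtures of them, SPA recovers the pure-pixel indices.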
Flexible Modeling of Diversity with Strongly Log-Concave Distributions
Strongly log-concave (SLC) distributions are a rich class of discrete
probability distributions over subsets of some ground set. They are strictly
more general than strongly Rayleigh (SR) distributions such as the well-known
determinantal point process. While SR distributions offer elegant models of
diversity, they lack an easy control over how they express diversity. We
propose SLC as the right extension of SR that enables easier, more intuitive
control over diversity, illustrating this via examples of practical importance.
We develop two fundamental tools needed to apply SLC distributions to learning
and inference: sampling and mode finding. For sampling we develop an MCMC
sampler and give theoretical mixing time bounds. For mode finding, we establish
a weak log-submodularity property for SLC functions and derive optimization
guarantees for a distorted greedy algorithm.
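To make the sampling task concrete, here is a generic single-site (add/delete) Metropolis chain over subsets of a ground set, driven by an unnormalized log-probability. This is a hypothetical baseline sketch for subset-valued MCMC, not the paper's SLC-specific sampler, and it carries none of the paper's mixing-time guarantees.

```python
import math
import random

def metropolis_subset_sampler(ground, log_p, steps, seed=0):
    """Single-site Metropolis chain over subsets of `ground`.
    log_p(frozenset) returns an unnormalized log-probability.
    Illustrative baseline only; no mixing-time guarantee."""
    rng = random.Random(seed)
    S = frozenset()
    cur = log_p(S)
    for _ in range(steps):
        e = rng.choice(ground)
        T = S - {e} if e in S else S | {e}       # toggle one element
        new = log_p(T)
        if math.log(rng.random()) < new - cur:   # Metropolis accept/reject
            S, cur = T, new
    return S
```

A log_p that rewards diverse subsets (for instance, the log-determinant of a kernel submatrix, as in a determinantal point process) plugs in directly; the chain then wanders toward high-diversity subsets.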