Coresets for Wasserstein Distributionally Robust Optimization Problems
Wasserstein distributionally robust optimization (\textsf{WDRO}) is a popular
model to enhance the robustness of machine learning with ambiguous data.
However, the complexity of \textsf{WDRO} can be prohibitive in practice, since
solving its ``minimax'' formulation requires a large amount of computation.
Recently, several fast \textsf{WDRO} training algorithms for some specific
machine learning tasks (e.g., logistic regression) have been developed.
However, to the best of our knowledge, research on designing efficient
algorithms for general large-scale \textsf{WDRO} problems is still quite limited.
\textit{Coresets} are an important tool for compressing large datasets, and
they have been widely applied to reduce the computational complexity of many
optimization problems. In this paper, we introduce a unified framework to
construct an $\epsilon$-coreset for general \textsf{WDRO} problems. Though
it is challenging to obtain a conventional coreset for \textsf{WDRO} due to the
uncertainty issue of ambiguous data, we show that we can compute a ``dual
coreset'' by using the strong duality property of \textsf{WDRO}. Also, the
error introduced by the dual coreset can be theoretically guaranteed for the
original \textsf{WDRO} objective. To construct the dual coreset, we propose a
novel grid sampling approach that is particularly suitable for the dual
formulation of \textsf{WDRO}. Finally, we implement our coreset approach and
illustrate its effectiveness on several \textsf{WDRO} problems in our
experiments.
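As an illustration of the coreset guarantee the abstract appeals to (not the paper's dual grid-sampling construction, which is specific to the \textsf{WDRO} dual), here is a minimal Python sketch of the basic idea: a small weighted subset whose weighted loss approximates the full loss. All names and the toy loss are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: n points, with loss L(theta) = mean_i (x_i - theta)^2.
n = 100_000
x = rng.normal(loc=3.0, scale=2.0, size=n)

def full_loss(theta):
    return np.mean((x - theta) ** 2)

# A uniform coreset: m << n points, each carrying weight n/m, so the
# weighted loss is an unbiased estimate of the full loss at any theta.
m = 1_000
idx = rng.choice(n, size=m, replace=False)
coreset, weights = x[idx], np.full(m, n / m)

def coreset_loss(theta):
    return np.sum(weights * (coreset - theta) ** 2) / n

theta = 3.0
rel_err = abs(coreset_loss(theta) - full_loss(theta)) / full_loss(theta)
print(rel_err)  # typically a few percent for this simple loss
```

An $\epsilon$-coreset strengthens this to a uniform $(1 \pm \epsilon)$ guarantee over all parameters simultaneously; for \textsf{WDRO}, the abstract's approach obtains such a guarantee through the dual objective instead.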
Large Scale Clustering with Variational EM for Gaussian Mixture Models
How can we efficiently find large numbers of clusters in large data sets with
high-dimensional data points? Our aim is to explore the current efficiency and
large-scale limits in fitting a parametric model for clustering to data
distributions. To do so, we combine recent lines of research that have
previously focused on separate methods for complexity reduction. We
first show theoretically how the clustering objective of variational EM (which
reduces complexity for many clusters) can be combined with coreset objectives
(which reduce complexity for many data points). Second, we realize a concrete,
highly efficient iterative procedure that combines and translates the
theoretical complexity gains of truncated variational EM and coresets into a
practical algorithm. For very large scales, the high efficiency of parameter
updates then requires (A) highly efficient coreset construction and (B) highly
efficient initialization procedures (seeding) in order to avoid computational
bottlenecks. Fortunately, very efficient coreset construction has become
available in the form of lightweight coresets, and very efficient
initialization in the form of AFK-MC$^2$ seeding. The
resulting algorithm features balanced computational costs across all
constituting components. In applications to standard large-scale benchmarks for
clustering, we investigate the algorithm's efficiency/quality trade-off.
Compared to the best recent approaches, we observe speedups of up to one order
of magnitude, and up to two orders of magnitude compared to the $k$-means++
baseline. To demonstrate that the observed efficiency enables applications
previously considered infeasible, we cluster the entire, unscaled 80 Million
Tiny Images dataset into up to 32,000 clusters. To the authors' knowledge,
this represents the largest-scale fit of a parametric data model for
clustering reported so far.
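The lightweight-coreset construction mentioned above can be sketched as follows. This is a hedged illustration following the published sampling distribution (half uniform, half proportional to squared distance from the data mean); the toy data and all variable names are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: n points in 2D drawn around a few blob centers.
n, d = 50_000, 2
centers = rng.uniform(-10, 10, size=(5, d))
X = centers[rng.integers(5, size=n)] + rng.normal(size=(n, d))

# Lightweight coreset sampling distribution:
#   q(x) = 1/(2n) + ||x - mu||^2 / (2 * sum_j ||x_j - mu||^2)
mu = X.mean(axis=0)
sq = ((X - mu) ** 2).sum(axis=1)
q = 0.5 / n + 0.5 * sq / sq.sum()

m = 2_000
idx = rng.choice(n, size=m, p=q)       # sample with replacement
C, w = X[idx], 1.0 / (m * q[idx])      # importance weights

def kmeans_cost(points, weights, cents):
    # Weighted k-means cost: each point contributes its weight times
    # the squared distance to the nearest center.
    d2 = ((points[:, None, :] - cents[None, :, :]) ** 2).sum(-1).min(1)
    return float((weights * d2).sum())

# For any fixed set of centers, the weighted coreset cost is an
# unbiased estimate of the full-data cost.
full = kmeans_cost(X, np.ones(n), centers)
approx = kmeans_cost(C, w, centers)
print(abs(approx - full) / full)  # typically a few percent
```

Because the sampling distribution needs only one pass to compute the mean and one to compute distances, the construction runs in $O(nd)$ time, which is what makes it viable at the scales discussed in the abstract.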
- …