A coreset for a set of points is a small subset of weighted points that
approximately preserves important properties of the original set. Specifically,
if $P$ is a set of points, $Q$ is a set of queries, and $f : P \times Q \to \mathbb{R}$
is a cost function, then a set $S \subseteq P$ with weights
$w : S \to [0,\infty)$ is an $\epsilon$-coreset, for some parameter $\epsilon > 0$, if
$\sum_{s \in S} w(s) f(s,q)$ is a $(1+\epsilon)$ multiplicative approximation of
$\sum_{p \in P} f(p,q)$ for every query $q \in Q$. Coresets are used to solve fundamental
problems in machine learning under various big data models of computation. Many
of the coresets proposed in the past decade used, or could have used, a
general framework for constructing coresets whose size depends quadratically on
what is known as the total sensitivity $t$.
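For intuition, the following is a minimal sketch of sensitivity sampling for the special case of $1$-mean (i.e., $k$-means with $k=1$) under squared Euclidean cost. The sensitivity upper bound, the coreset size, and all variable names are illustrative assumptions for this sketch, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(1000, 2))          # input point set

# Upper-bound each point's sensitivity. For 1-mean with squared Euclidean
# cost, a standard bound (assumed here, derived from the relaxed triangle
# inequality) is s(p) <= 2/n + 2*||p - mean(P)||^2 / sum_p' ||p' - mean(P)||^2.
mu = P.mean(axis=0)
d2 = ((P - mu) ** 2).sum(axis=1)
s = 2.0 / len(P) + 2.0 * d2 / d2.sum()  # sensitivity upper bounds
t = s.sum()                             # total sensitivity bound (here t = 4)

m = 100                                 # coreset size (illustrative choice)
idx = rng.choice(len(P), size=m, p=s / t)
S, w = P[idx], t / (m * s[idx])         # sampled points and unbiased weights

# Sanity check: weighted coreset cost vs. true cost for a few queries q.
for q in rng.normal(size=(3, 2)):
    true_cost = ((P - q) ** 2).sum()
    core_cost = (w * ((S - q) ** 2).sum(axis=1)).sum()
    print(f"cost ratio = {core_cost / true_cost:.3f}")  # should be near 1
```

Sampling each point with probability proportional to its sensitivity bound, and weighting it by the inverse of that probability (divided by $m$), makes the weighted coreset cost an unbiased estimator of the true cost for every fixed query.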
In this paper we improve this bound from $O(t^2)$ to $O(t \log t)$. Thus our
results imply more space-efficient solutions to a number of problems, including
projective clustering, $k$-line clustering, and subspace approximation.
Moreover, we generalize the notion of sensitivity sampling to sup-sampling,
which supports non-multiplicative approximations, negative cost functions, and
more. The main technical result is a generic reduction to the sample complexity
of learning a class of functions of bounded VC dimension. We show that
obtaining a $(\nu,\alpha)$-sample for this class of functions, with appropriate
parameters $\nu$ and $\alpha$, suffices to achieve space-efficient
$\epsilon$-coresets.
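For reference, one standard formulation of a $(\nu,\alpha)$-sample (also known as a relative $(\nu,\alpha)$-approximation) for a range space $(X, \mathcal{R})$ is given below; this is a plausible reading of the notion used here, and the paper's exact definition may differ. A sample $S \subseteq X$ is a $(\nu,\alpha)$-sample if, for every range $R \in \mathcal{R}$,
$$\left| \frac{|R \cap S|}{|S|} - \frac{|R \cap X|}{|X|} \right| \;\le\; \alpha \cdot \max\!\left( \frac{|R \cap X|}{|X|},\; \nu \right).$$
Intuitively, heavy ranges (relative weight above $\nu$) are estimated to within a multiplicative factor $\alpha$, while light ranges incur only an additive error of $\alpha\nu$.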
Our result implies more efficient coreset constructions for a number of
interesting problems in machine learning; we show applications to
$k$-median/$k$-means, $k$-line clustering, $j$-subspace approximation, and the
integer $(j,k)$-projective clustering problem.
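As a hedged usage sketch for the $k$-means application: once a weighted coreset $(S, w)$ has been built (e.g., by a sensitivity-sampling procedure like the one sketched above), an off-the-shelf weighted solver can be run on the coreset instead of the full data. The stand-in data below is illustrative; scikit-learn's `KMeans.fit` does accept per-point weights.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
S = rng.normal(size=(100, 2))    # stand-in for coreset points
w = rng.uniform(1.0, 5.0, 100)   # stand-in for coreset weights

# Solving weighted k-means on the small coreset approximates (up to 1+eps)
# solving k-means on the original, much larger point set.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(S, sample_weight=w)
print(km.inertia_, km.cluster_centers_.shape)
```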