    Least squares approximations of measures via geometric condition numbers

    For a probability measure on a real separable Hilbert space, we are interested in "volume-based" approximations of its d-dimensional least squares error, i.e., the least squares error with respect to a best-fit d-dimensional affine subspace. Such approximations are obtained by averaging real-valued multivariate functions, typically scalings of the squared (d+1)-volumes of (d+1)-simplices. Specifically, we show that such averages are comparable to the square of the d-dimensional least squares error of the measure, where the comparison depends on a simple quantitative geometric property of the measure. This result is a higher-dimensional generalization of the elementary fact that the double integral of the squared distances between points is proportional to the variance of the measure. We relate our work to two recent algorithms, one for clustering affine subspaces and the other for Monte Carlo SVD based on volume sampling.
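    For concreteness, the elementary fact referenced above (the d = 0 case, in which the best-fit affine subspace is the mean and the least squares error is the variance) can be written out as follows; the notation here is mine, not the paper's. For a probability measure mu with finite second moment and mean m,

    \[
      \int\!\!\int \|x - y\|^2 \, d\mu(x)\, d\mu(y) \;=\; 2 \int \|x - m\|^2 \, d\mu(x),
      \qquad m = \int x \, d\mu(x),
    \]

    which follows by expanding \|x - y\|^2 = \|x - m\|^2 + \|y - m\|^2 - 2\langle x - m,\, y - m\rangle and noting that the cross term integrates to zero.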

    Approximation Algorithms for Bregman Co-clustering and Tensor Clustering

    In the past few years, powerful generalizations of the Euclidean k-means problem have been introduced, such as Bregman clustering [7], co-clustering (i.e., simultaneous clustering of the rows and columns of an input matrix) [9,18], and tensor clustering [8,34]. Like k-means, these more general problems suffer from the NP-hardness of the associated optimization. Researchers have developed approximation algorithms of varying degrees of sophistication for k-means, k-medians, and more recently also for Bregman clustering [2]. However, there appear to be no approximation algorithms for Bregman co-clustering and tensor clustering. In this paper we derive the first (to our knowledge) guaranteed methods for these increasingly important clustering settings. Going beyond Bregman divergences, we also prove an approximation factor for tensor clustering with arbitrary separable metrics. Through extensive experiments we evaluate the characteristics of our method and show that it also has practical impact.
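    As a hedged illustration of the Bregman clustering primitive that these co- and tensor clustering methods build on (this is a minimal sketch of plain Bregman hard clustering, not the paper's co-/tensor clustering algorithm; the divergence choices and names below are my own assumptions):

    ```python
    import numpy as np

    # Two illustrative Bregman divergences: squared Euclidean distance and the
    # generalized KL divergence (the latter requires strictly positive data).
    def sq_euclidean(x, c):
        return np.sum((x - c) ** 2, axis=-1)

    def gen_kl(x, c):
        return np.sum(x * np.log(x / c) - x + c, axis=-1)

    def bregman_hard_clustering(X, k, divergence=sq_euclidean, n_iter=50, seed=0):
        """Lloyd-style Bregman hard clustering: for any Bregman divergence,
        the optimal centroid of a cluster is its arithmetic mean."""
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iter):
            # assignment step: nearest centroid under the chosen divergence
            dists = np.stack([divergence(X, c) for c in centroids], axis=1)
            labels = dists.argmin(axis=1)
            # update step: cluster means (optimal for every Bregman divergence)
            new_centroids = np.array([
                X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                for j in range(k)
            ])
            if np.allclose(new_centroids, centroids):
                break
            centroids = new_centroids
        return labels, centroids

    # usage sketch on synthetic positive data (so the KL divergence is defined)
    X = np.abs(np.random.default_rng(1).normal(loc=5.0, scale=1.0, size=(200, 4)))
    labels, centroids = bregman_hard_clustering(X, k=3, divergence=gen_kl)
    ```

    The point of the sketch is the update step: because the divergence is Bregman, the mean remains the optimal cluster representative, which is what makes k-means-style alternation applicable beyond the Euclidean case.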

    How Much and When Do We Need Higher-order Information in Hypergraphs? A Case Study on Hyperedge Prediction

    Hypergraphs provide a natural way of representing group relations, whose complexity has motivated an extensive body of prior work to adopt some form of abstraction and simplification of higher-order interactions. However, the following question has yet to be addressed: how much abstraction of group interactions is sufficient for solving a hypergraph task, and how much do the answers differ across datasets? This question, if properly answered, provides a useful engineering guideline on how to trade off between the complexity and accuracy of solving a downstream task. To this end, we propose a method of incrementally representing group interactions using a notion of n-projected graph, whose accumulation contains information on up to n-way interactions, and we quantify the accuracy of solving a task as n grows for various datasets. As a downstream task, we consider hyperedge prediction, an extension of link prediction, which is a canonical task for evaluating graph models. Through experiments on 15 real-world datasets, we draw the following messages: (a) Diminishing returns: a small n is enough to achieve accuracy comparable to near-perfect approximations; (b) Troubleshooter: as the task becomes more challenging, a larger n brings more benefit; and (c) Irreducibility: datasets whose pairwise interactions do not tell much about higher-order interactions lose much accuracy when reduced to pairwise abstractions.
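    As a toy illustration of the kind of abstraction being quantified (my own minimal reading of the simplest pairwise case, not the paper's exact n-projected graph construction), the n = 2 reduction of a hypergraph can be taken to be its weighted clique expansion, which discards all structure beyond pairs:

    ```python
    from itertools import combinations
    from collections import Counter

    def pairwise_projection(hyperedges):
        """Clique-expand a hypergraph: each hyperedge contributes all of its
        node pairs. Returns a weighted edge list; all >2-way structure is lost."""
        pair_counts = Counter()
        for he in hyperedges:
            for u, v in combinations(sorted(set(he)), 2):
                pair_counts[(u, v)] += 1
        return pair_counts

    # usage: two hyperedges sharing nodes; the 3-way interaction {1, 2, 3}
    # becomes indistinguishable from three separate pairwise ties.
    hyperedges = [{1, 2, 3}, {2, 3, 4}]
    print(pairwise_projection(hyperedges))
    # e.g. Counter({(2, 3): 2, (1, 2): 1, (1, 3): 1, (2, 4): 1, (3, 4): 1})
    ```

    The "irreducibility" message above corresponds to datasets where this kind of pairwise summary loses the information needed for accurate hyperedge prediction.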