
    Recovery Guarantees for Quadratic Tensors with Limited Observations

    We consider the tensor completion problem of predicting the missing entries of a tensor. The commonly used CP model has a triple-product form, but an alternate family of quadratic models, which are sums of pairwise products instead of a triple product, has emerged from applications such as recommendation systems. Non-convex methods are the method of choice for learning quadratic models, and this work examines their sample complexity and error guarantees. Our main result is that, with a number of samples only linear in the dimension, all local minima of the mean squared error objective are global minima and recover the original tensor accurately. The techniques also lead to simple proofs showing that convex relaxation can recover quadratic tensors given a linear number of samples. We substantiate our theoretical results with experiments on synthetic and real-world data, showing that quadratic models outperform CP models when only a limited number of observations is available.
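    To make the contrast concrete, here is a minimal sketch of the two parameterizations, assuming the common pairwise-interaction form of the quadratic model (the paper's exact parameterization may differ); the factor matrices U, V, W and the dimensions are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 5  # dimension and rank; illustrative values only

# Shared factor matrices for both models.
U, V, W = (rng.standard_normal((n, r)) for _ in range(3))

def cp_entry(i, j, k):
    """CP model: a triple product summed over the r rank components."""
    return float(np.sum(U[i] * V[j] * W[k]))

def quadratic_entry(i, j, k):
    """Quadratic model: a sum of pairwise inner products in place of
    the triple product, as in pairwise-interaction recommenders."""
    return float(U[i] @ V[j] + U[i] @ W[k] + V[j] @ W[k])
```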

    Tensor Sandwich: Tensor Completion for Low CP-Rank Tensors via Adaptive Random Sampling

    We propose an adaptive and provably accurate tensor completion approach based on combining matrix completion techniques (see, e.g., arXiv:0805.4471, arXiv:1407.3619, arXiv:1306.2979) for a small number of slices with a modified noise-robust version of Jennrich's algorithm. In the simplest case, this leads to a sampling strategy that more densely samples two outer slices (the bread), and then more sparsely samples additional inner slices (the bbq-braised tofu) for the final completion. Under mild assumptions on the factor matrices, the proposed algorithm completes an $n \times n \times n$ tensor with CP-rank $r$ with high probability while using at most $\mathcal{O}(nr\log^2 r)$ adaptively chosen samples. Empirical experiments further verify that the proposed approach works well in practice, including as a low-rank approximation method in the presence of additive noise. Comment: 6 pages, 5 figures. Sampling Theory and Applications Conference 202
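    For reference, here is a minimal noiseless sketch of the classical Jennrich step that the completion builds on; the paper uses a modified noise-robust variant, and the eigenvalue-based column pairing below is only the generic-case shortcut.

```python
import numpy as np

def jennrich(T, r, rng=np.random.default_rng(0)):
    """Noiseless Jennrich sketch: recover two CP factors of a rank-r
    (n x n x n) tensor by simultaneously diagonalizing two random
    mixtures of its slices."""
    n = T.shape[0]
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    Tx = np.einsum('ijk,k->ij', T, x)  # Tx = A diag(dx) B^T
    Ty = np.einsum('ijk,k->ij', T, y)  # Ty = A diag(dy) B^T
    # Eigenvectors of Tx Ty^+ give the columns of A (up to scaling);
    # eigenvectors of (Ty^+ Tx)^T give the columns of B.
    wa, EA = np.linalg.eig(Tx @ np.linalg.pinv(Ty))
    wb, EB = np.linalg.eig((np.linalg.pinv(Ty) @ Tx).T)
    # Keep the r dominant eigenpairs; matching eigenvalues (dx_r/dy_r)
    # pair each column of A with its column of B (generically distinct).
    A = np.real(EA[:, np.argsort(-np.abs(wa))[:r]])
    B = np.real(EB[:, np.argsort(-np.abs(wb))[:r]])
    return A, B  # the third factor follows by linear least squares
```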

    New Dependencies of Hierarchies in Polynomial Optimization

    We compare four key hierarchies for solving Constrained Polynomial Optimization Problems (CPOP): Sum of Squares (SOS), Sum of Diagonally Dominant Polynomials (SDSOS), Sum of Nonnegative Circuits (SONC), and the Sherali-Adams (SA) hierarchies. We prove a collection of dependencies among these hierarchies, both for general CPOPs and for optimization problems on the Boolean hypercube. Key results for the general case include that the SONC and SOS hierarchies are polynomially incomparable, while SDSOS is contained in SONC. A direct consequence is the non-existence of a Putinar-like Positivstellensatz for SDSOS. On the Boolean hypercube, we show as a main result that Schmüdgen-like versions of the hierarchies SDSOS*, SONC*, and SA* are polynomially equivalent. Moreover, we show that SA* is contained in any Schmüdgen-like hierarchy that provides an $O(n)$ degree bound. Comment: 26 pages, 4 figures
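    For orientation, the standard level-$d$ certificates behind the Putinar-like and Schmüdgen-like distinctions are sketched below; these are the textbook forms, and the paper's precise definitions of the starred hierarchies may differ in detail.

```latex
% Feasible set: K = { x : g_1(x) >= 0, ..., g_m(x) >= 0 };
% \Sigma denotes the cone of sums of squares.

% Putinar-style level-d certificate (one multiplier per constraint):
\[
  f(x) - \lambda = \sigma_0(x) + \sum_{i=1}^{m} \sigma_i(x)\, g_i(x),
  \qquad \sigma_i \in \Sigma,\ \deg(\sigma_i g_i) \le 2d.
\]

% Schmüdgen-style level-d certificate (one multiplier per product
% of constraints):
\[
  f(x) - \lambda = \sum_{S \subseteq [m]} \sigma_S(x) \prod_{i \in S} g_i(x),
  \qquad \sigma_S \in \Sigma,\ \deg\Big(\sigma_S \prod_{i \in S} g_i\Big) \le 2d.
\]

% Swapping \Sigma for the SDSOS or SONC cone yields the corresponding
% starred hierarchies (SDSOS*, SONC*) discussed in the abstract.
```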

    Iterative Collaborative Filtering for Sparse Noisy Tensor Estimation

    Consider the task of tensor estimation, i.e. estimating a low-rank, order-3 $n \times n \times n$ tensor from noisy observations of randomly chosen entries in the sparse regime. We introduce a generalization of the collaborative filtering algorithm for sparse tensor estimation and argue that it achieves a sample complexity that nearly matches the conjectured lower bound for computationally efficient estimation. Our algorithm uses the matrix obtained from the flattened tensor to compute similarity, and estimates the tensor entries using a nearest-neighbor estimator. We prove that the algorithm recovers the tensor with maximum entry-wise error and mean squared error (MSE) decaying to $0$ as long as each entry is observed independently with probability $p = \Omega(n^{-3/2 + \kappa})$ for an arbitrarily small $\kappa > 0$. Our analysis sheds light on the conjectured sample complexity lower bound, showing that it matches the connectivity threshold of the graph used by our algorithm for estimating similarity between coordinates.
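    The sketch below illustrates the flatten-then-nearest-neighbor idea; the distance over commonly observed columns is a simple stand-in for the paper's similarity computation, and `radius` is an illustrative threshold rather than a tuned parameter.

```python
import numpy as np

def estimate_entry(T_obs, mask, i, j, k, radius=0.5):
    """Nearest-neighbor estimate of T[i, j, k] from partial observations.
    T_obs holds observed values (zeros elsewhere); mask marks observed
    entries. Coordinates i and i' are compared via mean squared distance
    over commonly observed columns of the mode-1 flattening."""
    n = T_obs.shape[0]
    flat, seen = T_obs.reshape(n, -1), mask.reshape(n, -1)
    dists = np.full(n, np.inf)
    for ip in range(n):
        common = seen[i] & seen[ip]
        if common.any():
            dists[ip] = np.mean((flat[i, common] - flat[ip, common]) ** 2)
    neighbors = np.flatnonzero(dists <= radius)
    # Average the observed entries of near neighbors at position (., j, k).
    vals = [T_obs[ip, j, k] for ip in neighbors if mask[ip, j, k]]
    return float(np.mean(vals)) if vals else 0.0
```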

    Spectral Methods from Tensor Networks

    A tensor network is a diagram that specifies a way to "multiply" a collection of tensors together to produce another tensor (or matrix). Many existing algorithms for tensor problems (such as tensor decomposition and tensor PCA), although they are not presented this way, can be viewed as spectral methods on matrices built from simple tensor networks. In this work we leverage the full power of this abstraction to design new algorithms for certain continuous tensor decomposition problems. An important and challenging family of tensor problems comes from orbit recovery, a class of inference problems involving group actions (inspired by applications such as cryo-electron microscopy). Orbit recovery problems over finite groups can often be solved via standard tensor methods. However, for infinite groups, no general algorithms are known. We give a new spectral algorithm based on tensor networks for one such problem: continuous multi-reference alignment over the infinite group SO(2). Our algorithm extends to the more general heterogeneous case. Comment: 30 pages, 8 figures
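    As a minimal illustration of the matrix-from-tensor-network viewpoint, here is the simplest example, assuming the rank-one tensor PCA setup: contracting two legs of the tensor into one (the mode-1 flattening) and taking the top singular vector; the paper's algorithms use richer networks than this.

```python
import numpy as np

def tensor_pca_spectral(T):
    """Spectral method for rank-one tensor PCA built from the simplest
    tensor network: flatten the (n x n x n) tensor to an n x n^2 matrix
    and take its top left singular vector."""
    n = T.shape[0]
    M = T.reshape(n, n * n)            # contract two legs into one
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, 0]                     # estimate of the planted v, up to sign

# Usage: plant a spike and check recovery.
rng = np.random.default_rng(1)
n, lam = 30, 8.0
v = rng.standard_normal(n)
v /= np.linalg.norm(v)
T = lam * np.einsum('i,j,k->ijk', v, v, v) \
    + rng.standard_normal((n, n, n)) / np.sqrt(n)
print(abs(v @ tensor_pca_spectral(T)))  # close to 1 for strong signal
```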