
    Poisson Matrix Completion

    We extend the theory of matrix completion to the case where we make Poisson observations for a subset of entries of a low-rank matrix. We consider the (now) usual matrix recovery formulation through maximum likelihood with proper constraints on the matrix $M$, and establish theoretical upper and lower bounds on the recovery error. Our bounds are nearly optimal up to a factor on the order of $\mathcal{O}(\log(d_1 d_2))$. These bounds are obtained by adapting the arguments used for one-bit matrix completion \cite{davenport20121} (although the two problems are different in nature); the adaptation requires new techniques that exploit properties of the Poisson likelihood function and tackle the difficulties posed by the locally sub-Gaussian character of the Poisson distribution. Our results highlight a few important distinctions of Poisson matrix completion from prior work in matrix completion, including the need to impose a minimum signal-to-noise requirement on each observed entry. We also develop an efficient iterative algorithm and demonstrate its good performance in recovering solar flare images.
    Comment: Submitted to IEEE for publication
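    The constrained maximum-likelihood formulation described above can be sketched as projected gradient descent on the negative Poisson log-likelihood over the observed entries. The function below is an illustrative assumption, not the paper's algorithm: the names `poisson_mc`, `nuc_radius`, and the entrywise floor `lo` are hypothetical, with the floor loosely standing in for the minimum signal-to-noise requirement the abstract mentions.

```python
import numpy as np

def poisson_mc(Y, mask, nuc_radius=50.0, lo=1.0, steps=300, lr=0.05):
    """Sketch: projected gradient descent on the negative Poisson
    log-likelihood sum over observed entries of (M_ij - Y_ij log M_ij),
    projected onto a nuclear-norm ball and an entrywise floor `lo`."""
    M = np.maximum(Y.astype(float), lo)          # warm start at the observations
    for _ in range(steps):
        grad = mask * (1.0 - Y / np.maximum(M, lo))   # gradient of the NLL
        M -= lr * grad
        # Project onto {||M||_* <= nuc_radius} by projecting the singular
        # values onto the simplex of radius nuc_radius.
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        if s.sum() > nuc_radius:
            u = np.sort(s)[::-1]
            css = np.cumsum(u) - nuc_radius
            rho = np.nonzero(u > css / np.arange(1, len(u) + 1))[0][-1]
            s = np.maximum(s - css[rho] / (rho + 1), 0.0)
        M = np.maximum(U @ (s[:, None] * Vt), lo)     # re-impose the floor
    return M
```

    In this sketch the nuclear-norm radius plays the role of the rank/size constraint on $M$; in practice it would be tuned rather than assumed known.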

    Matrix Completion in Colocated MIMO Radar: Recoverability, Bounds & Theoretical Guarantees

    It was recently shown that low rank matrix completion theory can be employed for designing new sampling schemes in the context of MIMO radars, which can lead to the reduction of the high volume of data typically required for accurate target detection and estimation. Employing random samplers at each reception antenna, a partially observed version of the received data matrix is formulated at the fusion center, which, under certain conditions, can be recovered using convex optimization. This paper presents the theoretical analysis regarding the performance of matrix completion in colocated MIMO radar systems, exploiting the particular structure of the data matrix. Both Uniform Linear Arrays (ULAs) and arbitrary 2-dimensional arrays are considered for transmission and reception. Especially for the ULA case, under some mild assumptions on the directions of arrival of the targets, it is explicitly shown that the coherence of the data matrix is both asymptotically and approximately optimal with respect to the number of antennas of the arrays involved and further, the data matrix is recoverable using a subset of its entries with minimal cardinality. Sufficient conditions guaranteeing low matrix coherence and consequently satisfactory matrix completion performance are also presented, including the arbitrary 2-dimensional array case.
    Comment: 19 pages, 7 figures, under review in Transactions on Signal Processing (2013)
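    The coherence quantity that drives such recoverability guarantees can be computed directly from a subspace basis. The snippet below is a minimal sketch under assumed simplifications: a noiseless rank-K data matrix built as a sum of outer products of ULA steering vectors, with half-wavelength spacing; the helper names are hypothetical and this is not the paper's exact signal model.

```python
import numpy as np

def coherence(U):
    """Coherence of a subspace: mu(U) = (n/r) * max_i ||e_i^T U||^2 for an
    n x r orthonormal basis U. mu lies in [1, n/r]; small coherence is what
    makes completion from few sampled entries possible."""
    n, r = U.shape
    return (n / r) * np.sum(np.abs(U) ** 2, axis=1).max()

def ula_data_matrix(n_rx, n_tx, doas_deg, d_over_lambda=0.5):
    """Hypothetical noiseless colocated-MIMO data matrix: one rank-1 term
    per target, formed from receive and transmit ULA steering vectors."""
    k = 2j * np.pi * d_over_lambda * np.sin(np.deg2rad(np.asarray(doas_deg)))
    A_rx = np.exp(np.arange(n_rx)[:, None] * k[None, :])   # n_rx x K steering
    A_tx = np.exp(np.arange(n_tx)[:, None] * k[None, :])   # n_tx x K steering
    return A_rx @ A_tx.T                                   # rank <= K
```

    For well-separated directions of arrival the left singular subspace of this matrix has coherence close to its lower bound of 1, which is the regime in which completion from a small random subset of entries succeeds.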

    Instantons, Twistors, and Emergent Gravity

    Motivated by potential applications to holography on space-times of positive curvature, and by the successful twistor description of scattering amplitudes, we propose a new dual matrix formulation of N = 4 gauge theory on S(4). The matrix model is defined by taking the low energy limit of a holomorphic Chern-Simons theory on CP(3|4), in the presence of a large instanton flux. The theory comes with a choice of S(4) radius L and a parameter N controlling the overall size of the matrices. The flat space variant of the 4D effective theory arises by taking the large N scaling limit of the matrix model, with l_pl^2 ~ L^2 / N held fixed. Its massless spectrum contains both spin one and spin two excitations, which we identify with gluons and gravitons. As shown in the companion paper, the matrix model correlation functions of both these excitations correctly reproduce the corresponding MHV scattering amplitudes. We present evidence that the scaling limit defines a gravitational theory with a finite Planck length. In particular we find that in the l_pl -> 0 limit, the matrix model makes contact with the CSW rules for amplitudes of pure gauge theory, which are uncontaminated by conformal supergravity. We also propose a UV completion for the system by embedding the matrix model in the physical superstring.
    Comment: v2: 64 pages, 3 figures, references added, typos corrected

    Ward identities and combinatorics of rainbow tensor models

    We discuss the notion of renormalization group (RG) completion of non-Gaussian Lagrangians and its treatment within the framework of Bogoliubov-Zimmermann theory in application to the matrix and tensor models. With the example of the simplest non-trivial RGB tensor theory (Aristotelian rainbow), we introduce a few methods which allow one to connect calculations in the tensor models to those in the matrix models. As a byproduct, we obtain some new factorization formulas and sum rules for the Gaussian correlators in the Hermitian and complex matrix theories, square and rectangular. These sum rules describe correlators as solutions to finite linear systems, which are much simpler than the bilinear Hirota equations and the infinite Virasoro recursion. The search for such relations can be a route to solving the tensor models, where explicit integrability remains obscure.
    Comment: 48 pages
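    As a toy illustration of what a Gaussian correlator in the Hermitian matrix model looks like (a single moment, not the paper's sum rules), one can check by Monte Carlo that the simplest correlator <Tr H^2> equals N^2 when every entry of the random Hermitian matrix has unit variance. The sampler below is a sketch under that assumed normalization.

```python
import numpy as np

def gue_sample(n, rng):
    """Sample an n x n Hermitian Gaussian (GUE-type) matrix normalized so
    that E|H_ij|^2 = 1 for every entry; then E[Tr H^2] = n^2 exactly."""
    m = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    return (m + m.conj().T) / np.sqrt(2)   # Hermitian by construction
```

    Since Tr H^2 = sum_{i,j} |H_ij|^2 for a Hermitian matrix, the expectation is just the number of entries, n^2; a few hundred samples reproduce this to within a few percent.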

    On Low-rank Trace Regression under General Sampling Distribution

    A growing number of modern statistical learning problems involve estimating a large number of parameters from a (smaller) number of noisy observations. In a subset of these problems (matrix completion, matrix compressed sensing, and multi-task learning) the unknown parameters form a high-dimensional matrix B*, and two popular approaches for the estimation are convex relaxation of rank-penalized regression or non-convex optimization. It is also known that these estimators satisfy near optimal error bounds under assumptions on rank, coherence, or spikiness of the unknown matrix. In this paper, we introduce a unifying technique for analyzing all of these problems via both estimators that leads to short proofs for the existing results as well as new results. Specifically, first we introduce a general notion of spikiness for B*, consider a general family of estimators, and prove non-asymptotic bounds on their estimation error. Our approach relies on a generic recipe to prove restricted strong convexity for the sampling operator of the trace regression. Second, and most notably, we prove similar error bounds when the regularization parameter is chosen via K-fold cross-validation. This result is significant in that existing theory on cross-validated estimators does not apply to our setting, since our estimators are not known to satisfy their required notion of stability. Third, we study applications of our general results to four subproblems of (1) matrix completion, (2) multi-task learning, (3) compressed sensing with Gaussian ensembles, and (4) compressed sensing with factored measurements. For (1), (3), and (4) we recover matching error bounds as those found in the literature, and for (2) we obtain (to the best of our knowledge) the first such error bound. We also demonstrate how our framework applies to the exact recovery problem in (3) and (4).
    Comment: 32 pages, 1 figure
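    The convex-relaxation estimator for trace regression can be sketched as proximal gradient descent, alternating a least-squares gradient step with singular value thresholding (the proximal operator of the nuclear norm). Everything below is an illustrative assumption rather than the paper's estimator: function names, step size, and the synthetic problem are hypothetical, and the K-fold choice of the regularization parameter is omitted.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau * ||.||_* at M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def trace_regression(X_list, y, shape, lam=0.1, lr=0.005, steps=1000):
    """Proximal gradient for
        min_B  0.5 * sum_i (y_i - <X_i, B>)^2 + lam * ||B||_*.
    Matrix completion is the special case X_i = e_j e_k^T (single-entry
    measurements); Gaussian ensembles use dense random X_i."""
    B = np.zeros(shape)
    for _ in range(steps):
        resid = np.array([np.vdot(X, B).real - yi for X, yi in zip(X_list, y)])
        grad = sum(r * X for r, X in zip(resid, X_list))
        B = svt(B - lr * grad, lr * lam)      # gradient step, then prox
    return B
```

    With enough Gaussian measurements relative to the rank, the restricted strong convexity the abstract invokes makes this iteration contract toward the low-rank truth; the regularization level lam would in practice be selected by cross-validation.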