
    Sparse Multivariate Factor Regression

    We consider the problem of multivariate regression in a setting where the relevant predictors may be shared among different responses. We propose an algorithm which decomposes the coefficient matrix into the product of a long matrix and a wide matrix, with an elastic net penalty on the former and an $\ell_1$ penalty on the latter. The first matrix linearly transforms the predictors to a set of latent factors, and the second regresses the responses on these factors. Our algorithm simultaneously performs dimension reduction and coefficient estimation, and automatically estimates the number of latent factors from the data. Our formulation results in a non-convex optimization problem which, despite its flexibility in imposing effective low-dimensional structure, is difficult, or even impossible, to solve exactly in a reasonable time. We specify an optimization algorithm based on alternating minimization with three different sets of updates to solve this non-convex problem, and provide theoretical results on its convergence and optimality. Finally, we demonstrate the effectiveness of our algorithm via experiments on simulated and real data.
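    The following is a minimal NumPy sketch of the kind of alternating scheme the abstract describes, not the authors' exact algorithm: the coefficient matrix is factored as A @ B, with proximal (soft-thresholding) updates implementing an assumed elastic-net penalty on A and an $\ell_1$ penalty on B. The rank k, step sizes, and penalty weights are illustrative assumptions, and the automatic selection of the number of latent factors is omitted.

```python
import numpy as np

def soft_threshold(M, t):
    """Elementwise soft-thresholding (prox of the l1 norm)."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def sparse_factor_regression(X, Y, k, lam_A=0.1, alpha_A=0.5, lam_B=0.1,
                             n_iter=200, seed=0):
    """Illustrative alternating proximal-gradient sketch (not the paper's
    exact algorithm): fit Y ~ X @ A @ B with an elastic-net penalty on A
    and an l1 penalty on B, for a fixed number of latent factors k."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    q = Y.shape[1]
    A = rng.standard_normal((p, k)) * 0.01   # "long" factor-loading matrix
    B = rng.standard_normal((k, q)) * 0.01   # "wide" regression matrix

    for _ in range(n_iter):
        # Update B (l1-penalized least squares, A held fixed).
        F = X @ A                                     # latent factors
        step_B = 1.0 / (np.linalg.norm(F, 2) ** 2 + 1e-12)
        grad_B = F.T @ (F @ B - Y)
        B = soft_threshold(B - step_B * grad_B, step_B * lam_B)

        # Update A (elastic-net-penalized least squares, B held fixed).
        step_A = 1.0 / (np.linalg.norm(X, 2) ** 2 *
                        np.linalg.norm(B, 2) ** 2 + 1e-12)
        grad_A = X.T @ (X @ A @ B - Y) @ B.T + lam_A * (1 - alpha_A) * A
        A = soft_threshold(A - step_A * grad_A, step_A * lam_A * alpha_A)

    return A, B
```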

    Dictionary-based Tensor Canonical Polyadic Decomposition

    To ensure interpretability of the extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and on the unmixing of hyperspectral images.
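    As a rough illustration (not the authors' algorithm), the sketch below runs a plain ALS loop for a three-way CPD and, at each iteration, snaps every column of the first factor to its best-matching atom (column) of a known dictionary D. The matching rule (maximum normalized correlation), the tensor dimensions, and the iteration counts are all assumptions made only for the example.

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product; rows ordered with V's index fastest."""
    r = U.shape[1]
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, r)

def dictionary_cpd(T, D, rank, n_iter=50, seed=0):
    """Toy ALS sketch (not the paper's algorithm): 3-way CPD T ~ [[A, B, C]]
    in which every column of A is constrained to be an atom of D."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)  # unit-norm atoms
    T1 = T.reshape(I, -1)                              # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, -1)           # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, -1)           # mode-3 unfolding

    for _ in range(n_iter):
        # Unconstrained LS update of A, then snap each column to the
        # dictionary atom with the largest normalized correlation.
        A_ls = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
        idx = np.argmax(np.abs(Dn.T @ A_ls), axis=0)
        A = D[:, idx]                # repeated atoms are possible in this toy version
        # Standard least-squares updates for the unconstrained factors.
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C
```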

    Lossless Linear Analog Compression

    We establish the fundamental limits of lossless linear analog compression by considering the recovery of random vectors $\boldsymbol{\mathsf{x}}\in\mathbb{R}^m$ from the noiseless linear measurements $\boldsymbol{\mathsf{y}}=\boldsymbol{A}\boldsymbol{\mathsf{x}}$ with measurement matrix $\boldsymbol{A}\in\mathbb{R}^{n\times m}$. Specifically, for a random vector $\boldsymbol{\mathsf{x}}\in\mathbb{R}^m$ of arbitrary distribution we show that $\boldsymbol{\mathsf{x}}$ can be recovered with zero error probability from $n>\inf\underline{\operatorname{dim}}_\mathrm{MB}(U)$ linear measurements, where $\underline{\operatorname{dim}}_\mathrm{MB}(\cdot)$ denotes the lower modified Minkowski dimension and the infimum is over all sets $U\subseteq\mathbb{R}^{m}$ with $\mathbb{P}[\boldsymbol{\mathsf{x}}\in U]=1$. This achievability statement holds for Lebesgue almost all measurement matrices $\boldsymbol{A}$. We then show that $s$-rectifiable random vectors---a stochastic generalization of $s$-sparse vectors---can be recovered with zero error probability from $n>s$ linear measurements. From classical compressed sensing theory we would expect $n\geq s$ to be necessary for successful recovery of $\boldsymbol{\mathsf{x}}$. Surprisingly, certain classes of $s$-rectifiable random vectors can be recovered from fewer than $s$ measurements. Imposing an additional regularity condition on the distribution of $s$-rectifiable random vectors $\boldsymbol{\mathsf{x}}$, we do get the expected converse result of $s$ measurements being necessary. The resulting class of random vectors appears to be new and will be referred to as $s$-analytic random vectors.
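    The achievability claim can be illustrated numerically for the simplest $s$-rectifiable case, an $s$-sparse vector with random support and continuously distributed nonzeros: with a generic measurement matrix and only $n = s+1$ measurements, an exhaustive $\ell_0$ search recovers the vector with probability one, even though worst-case (for all $s$-sparse vectors) recovery would need $n\geq 2s$. The toy dimensions and the brute-force decoder below are assumptions for illustration only, not part of the paper.

```python
import numpy as np
from itertools import combinations

def l0_recover(y, A, s, tol=1e-8):
    """Exhaustive-search recovery of an s-sparse vector from y = A @ x.
    Exponential in the ambient dimension; only meant to illustrate the
    n > s achievability for a randomly drawn sparse vector."""
    m = A.shape[1]
    for support in combinations(range(m), s):
        A_S = A[:, support]
        x_S, *_ = np.linalg.lstsq(A_S, y, rcond=None)
        if np.linalg.norm(A_S @ x_S - y) < tol:
            x = np.zeros(m)
            x[list(support)] = x_S
            return x
    return None

# Toy demo (assumed parameters): ambient dimension m = 8, sparsity s = 2,
# and n = s + 1 = 3 measurements, i.e. fewer than the 2s needed for
# worst-case recovery.
rng = np.random.default_rng(0)
m, s, n = 8, 2, 3
x = np.zeros(m)
x[rng.choice(m, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((n, m))          # a generic (Lebesgue-a.a.) matrix
x_hat = l0_recover(A @ x, A, s)
print(np.allclose(x_hat, x))             # True with probability one
```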

    Blind Compressed Sensing Over a Structured Union of Subspaces

    This paper addresses the problem of simultaneous signal recovery and dictionary learning based on compressive measurements. Multiple signals are analyzed jointly, with multiple sensing matrices, under the assumption that the unknown signals come from a union of a small number of disjoint subspaces. This problem is important, for instance, in image inpainting applications, in which the multiple signals are constituted by (incomplete) image patches taken from the overall image. This work extends standard dictionary learning and block-sparse dictionary optimization by considering compressive measurements (e.g., incomplete data). Previous work on blind compressed sensing is also generalized by using multiple sensing matrices and relaxing some of the restrictions on the learned dictionary. Drawing on results developed in the context of matrix completion, it is proven that both the dictionary and the signals can be recovered with high probability from compressed measurements. The solution is unique up to block permutations and invertible linear transformations of the dictionary atoms. The recovery is contingent on the number of measurements per signal and the number of signals being sufficiently large; bounds are derived for these quantities. In addition, this paper presents a computationally practical algorithm that performs dictionary learning and signal recovery, and establishes conditions for its convergence to a local optimum. Experimental results for image inpainting demonstrate the capabilities of the method.
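    A minimal sketch of the kind of alternation the abstract describes, not the paper's algorithm: sparse codes are fit per signal by ISTA against the effective dictionary Phi_i @ D, and D is then updated by a gradient step on the compressive residuals. A plain l1 penalty stands in for the paper's block-sparsity structure, and all parameter names and values are assumptions for the example.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding (prox of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def blind_cs(Ys, Phis, n_atoms, lam=0.1, n_outer=30, n_inner=50, lr=0.01, seed=0):
    """Illustrative alternating sketch (not the paper's exact algorithm):
    learn a dictionary D and sparse codes z_i from compressive measurements
    y_i = Phi_i @ D @ z_i, each signal having its own sensing matrix Phi_i."""
    rng = np.random.default_rng(seed)
    dim = Phis[0].shape[1]                    # ambient signal dimension
    D = rng.standard_normal((dim, n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    Z = [np.zeros(n_atoms) for _ in Ys]

    for _ in range(n_outer):
        # Sparse coding: ISTA per signal with effective dictionary Phi_i @ D.
        for i, (y, Phi) in enumerate(zip(Ys, Phis)):
            E = Phi @ D
            step = 1.0 / (np.linalg.norm(E, 2) ** 2 + 1e-12)
            z = Z[i]
            for _ in range(n_inner):
                z = soft_threshold(z - step * E.T @ (E @ z - y), step * lam)
            Z[i] = z
        # Dictionary update: one gradient step on the summed compressive
        # residuals, followed by renormalization of the atoms.
        G = sum(Phi.T @ (Phi @ D @ z - y)[:, None] * z[None, :]
                for y, Phi, z in zip(Ys, Phis, Z))
        D -= lr * G
        D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    return D, Z
```

    The recovered signals are then obtained as the products D @ z_i.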